Appian Forum: Nokia Siemens

Nick Deacon is Global Head of BPM for consulting and systems integration within Nokia Siemens Networks, a global network communications services firm. The consulting and systems integration group, with a staff of 3,500-4,000 and annual sales of 400M Euro, has the usual problems of managing a workforce of service providers, and was looking for a BPM solution — easy to use, relatively low cost and easy to customize — to help them better manage what Deacon referred to as the Mean Time Between Surprises. They were looking to quickly implement their core processes of sales, service execution, and resource and competence management, before global IT noticed what they were doing and turned it into a mega-project.

Since the project started in February (yes, this February), they have implemented their first module (service delivery process) and rolled it out to 400 users across all of their global regions, including portals and dashboards for analyzing and managing the business. At the same time, they were working on the resource and competence management process module, which is about to enter testing, and the sales and technical support processes will be ready for deployment in November. Product and portfolio management will follow in December, and offshore delivery management in February. Basically, that means that they will have deployed BPM across all of their major business processes within 12 months.

Through reduced data entry, increased sharing of information and increased reuse of project assets, they expect productivity savings of 12-16M Euro per year, which (I hope 🙂 ) means a payback period of much less than a year.

There’s now interest from other areas within NSN, and their projects are becoming a sort of proof of concept for BPMS across the much larger organization, not just within the consulting and systems integration group.

Deacon had nothing but good things to say about Appian in terms of both the product and how their professional services team has worked with NSN to deliver the right business functionality on a tight schedule across a global enterprise. He sees them as being aligned with NSN’s vision and strategy for BPM, and as having been a true partner on their implementation. They looked at larger BPM vendors, but found their solutions too rigid and too expensive.

Appian Forum: Matt Calkins

Appian’s CEO was up for the only vendor executive presentation of the conference, to discuss Appian and its community of customers and partners. As a somewhat late entrant to the BPM market, they had only about 15 customers in 2004, growing to almost 80 (active) customers in 2007, and have expanded from a primarily government focus to include many other industry verticals.

Appian’s view of BPM is that although it’s becoming mainstream, email still owns 99% of what could be the BPM space through the implementation of ad hoc processes. Because of that, it’s essential for a BPMS to be easy for all types of users — both designers and end users — and to present very little resistance to adoption. A fully web-based product suite is one part of this, and Appian is one of the few vendors to provide a web-based process designer; their move into a hosted model reduces the frictional costs further. He discussed a number of their technical innovations, stating “we didn’t do this just because we’re nerds”, but he sees them as essential to providing a good BPMS.

With the downturn in the US market, Appian and other vendors are being forced to look outside their borders for new customers, and finding — surprise! — that there are significant international opportunities. Their EMEA sales grew by over 300% year-over-year, and they’re seeing more potential business there.

He also announced Appian ShareBase, a wiki (his word, but actually more of a shared repository) of code objects pertaining to Appian, including process models, rules, smart nodes and any other design objects that can be shared, all of it available free for other Appian customers to reuse. Appian will be seeding ShareBase with a substantial amount of their own intellectual property. No word on the licensing ramifications here, but based on the “free to reuse” statement, I assume that it’s pretty open.

He also discussed their new partnership with MEGA for process modeling and enterprise architecture, which will be discussed in more detail later in the day.

Appian Forum: Connie Moore keynote

Three days ago, I was in Rome — original home of the Roman Forum and the Appian Way — and now I’m at Appian Forum: Appian‘s first user conference. Samir Gulati, VP of Marketing, delivered some short opening remarks including the “Sandy Kemsley Conference Checklist”, showing how they measured up on my basic requirements for conferences: wifi, online agenda, good content, frequent networking breaks, and other good stuff. They missed on the power plugs at the tables, but other than that, I have to give them full marks.

They had about 150 people sign up for the conference, although I don’t think that there are that many in the room this morning; this was not a paid conference, which tends to result in a higher number of no-shows, but there’s a good cross-section of Appian’s customers and partners, as well as analysts.

After Samir’s short introduction, he turned it over to Connie Moore of Forrester for a keynote on Design for People, Build for Change (wait, this sounds familiar…). She had a great graphic that expands on some of the things that I’ve heard Forrester people talk about in the past, highlighting the “design for people” part of the equation through social networking and other techniques, whereas we’ve often focused (maybe too much) on the “build for change” part of business innovation.

She discussed four factors creating the “perfect storm” that’s led to the current situation:

  • Design evolution, where more products are being designed for optimal use and customer experience, rather than the conveniences of the manufacturer or based on the preconceived notions of the designer. There are many consumer products that illustrate this, but it holds equally true with business computer systems.
  • Process evolution, where we do more continuous improvement than big bang reengineering for both technical and cultural reasons. The current range of BPM products, with monitoring and optimization built in, allow for this sort of continuous improvement in ways that were not previously possible, which has helped to facilitate this shift.
  • Workforce evolution, with the boomers — along with their knowledge of business processes — starting to retire, and the systems developed for those boomers not really suitable for the millennials who grew up digital. This forces the move to different computing paradigms, particularly social networking, as well as a different corporate culture in order to attract and retain the talent.
  • Software evolution, moving from a traditional model to software as a service, Web 2.0, open source and free software in both consumer and enterprise environments.

All of this means that we need to bridge between structured human activities and system-intensive processes that we’ve dealt with in traditional enterprise systems, and the ad hoc, messy, chaotic human activities that we see in the new generation of dynamic business applications. Earning her keep, she highlighted how Appian brings content and collaboration to the usual BPM functionality seen with other vendors, then walked through an example of a dynamic business application.

She discussed the need to forge partnerships between stakeholders, preferably by collocating the business and IT people on a project team so that they create a more seamless project. I’ve seen a lot of projects where there is a serious disconnect between the business and IT participants, and having them sit and work together could only help that situation.

Forrester went out to a number of enterprises to see how they build for change, and saw a few different models:

  • An IT-focused model where the technical team always makes changes to the process (hopefully based on conversations with the business)
  • A blended model where the business owners meet with the project team on a regular basis, and the process changes are made by business analysts or technical team members, depending on the requirement

There needs to be a change model that allows for both continuous change — every 1-2 weeks for process tuning — and for new process versions — every 2-6 months for new processes and major changes. This change model needs to be incorporated from the beginning in any process project to allow for continuous improvement, or you’ll end up with inflexible processes; at the very least, plan on a minimum of 3 iterations shortly after implementation before the process is even remotely correct. At the same time, you need to consider forming a process center of excellence to help with overall process improvement, and consider the link to SOA in order to provide a technical framework for dynamic business applications.

When Forrester asked enterprise architects about the primary benefit of BPM, the largest response (24%) was increased productivity, with process visibility (18%) and agility (15%) following. Other benefits included the ability to model processes, consistent processes across business units/geographies, and reduced reliance on IT for process improvement. By looking at the perceived degree of success and the existence of a BPM center of excellence, they found a clear correlation: about half of those who said that BPM was a rousing success had a COE, whereas less than 5% of the failing efforts had a COE.

Her experience — which matches mine — shows that going with a large systems integrator is not a good way to build the necessary skills within an enterprise to achieve ongoing process improvement; she sees direct skills transfer from the BPM vendor as having a greater degree of success. Business analysts need to become process analysts, and developers need to become assemblers of dynamic applications. She finished up with several suggestions on how to get started, for business people, IT and executives.

Although there was a lot of repetition from earlier versions of this message that I’ve heard her deliver, I do see some evolution and refinement of the message. Some of the stats and ideas go by pretty fast, however; the audience might benefit from a bit less of a PowerPoint karaoke feeling.

There was an audience question about how Web 2.0 concepts and products — mostly being developed by tiny companies — will be integrated with traditional BPM products from larger companies; Moore didn’t really answer the question, but discussed how the BPM platform vendors are building their own Web 2.0 functionality, and many other BPM vendors are partnering with SharePoint or other collaborative tools. I think that there’s a lot of room for the Enterprise 2.0 vendors and the non-platform BPM vendors to get together to create social networking-enabled processes that are far beyond what’s available from any of the platform vendors (although IBM is doing some pretty innovative stuff), or through SharePoint integration.

This week in BPM conferences

Last week and this week saw some very difficult choices for conference attendance: I went to the International BPM conference in Milan last week, but missed Office 2.0; this week, I’m attending Appian’s user conference and Gartner’s BPM summit in Washington DC, but missing SAP’s TechEd and all my Enterprise Irregulars peeps (although I won’t at all miss going to Las Vegas).

Watch for my coverage of the Appian user conference tomorrow, then Gartner starting on Wednesday.

Feed stats

A few weeks ago, I switched over to the Google version of Feedburner for my RSS feed (since Google owns Feedburner now, they’re transitioning to feedburner.google.com), and my subscriber numbers instantly dropped by about 20%. Either the stats on one or the other are screwed up, or they dropped a bunch of my readers.

Anyone else seeing this phenomenon?

BPM Milan: The Future of BPM

Peter Dadam of the University of Ulm opened the last day of the conference (and my last session, since I’m headed out at the morning break) with a keynote on the future of BPM: Flying with the Eagles, or Scratching with the Chickens?

He went through some of his history in getting into research (in the IBM DB2 area), with the conclusion that when you ask current users what they want, they tend to take the current technology as a given, and only request workarounds within the constraints of the existing solution. The role of research is, in part, to disseminate knowledge about what is possible: the new paradigm for the future. Anyone who has worked on the bleeding edge of innovation recognizes this, and realizes that you first have to educate the market on what’s possible before you can begin to develop the use cases for it.

He discussed the nature of university research versus industrial research, where the pendulum has swung from research being done in universities, to the more significant research efforts being done (or being perceived as being done) in industrial research centers, to the closing of many industrial research labs and a refocusing on pragmatic, product-oriented research by the rest. This puts the universities back in the position of being able to offer more visionary research, but there is a risk of just being the research tail that the industry dog wags.

Moving on to BPM, and looking at it against a historical background, we have the current SOA frenzy in industry, but many enterprises implementing it are hard-pressed to say why their current SOA infrastructure provides anything for them that CORBA didn’t. There’s a big push to bring in BPM tools, particularly modeling tools, without considering the consequences of putting tools like this in the hands of users who don’t understand the impact of certain design decisions. We need to keep both the manual and automated processes in mind, and consider that exceptions are often not predictable; enterprises cannot take the risk of becoming less flexible through the implementation of BPM because they make the mistake of designing completely structured and rigid processes.

There’s also the issue of how the nature of web services can trivialize the larger relationship between a company and its suppliers: realistically, you don’t replace one supplier with another just because they have the same web services interface, without significant other changes (the exception to this is, of course, when the product provided by the supplier is the web service itself).

He sees that there is a significant risk that BPM technology will not develop properly, and that the current commercial systems are not suitable for advanced applications. He described several challenges in implementing BPM (e.g., complex structured processes; exceptions cannot be completely anticipated), and the implications in terms of what must exist in the system in order to overcome this challenge (e.g., expressive process meta model; ad-hoc deviations from the pre-planned execution sequence must be possible). He discussed their research (more than 10 years ago now) in addressing these issues, considering a number of different tools and approaches, how that resulted in the ADEPT process meta model and eventually the AristaFlow process management system. He then gave us a demo of the AristaFlow process modeler — not something that you see often in a keynote — before moving on to discuss how some of the previously stated challenges are handled, and how the original ADEPT research projects fed into the AristaFlow project. The AristaFlow website describes the motivation for this joint university-industry project:

In particular, in dynamic environments it must be possible to quickly implement and deploy new processes, to enable ad-hoc modifications of single process instances at runtime (e.g. to add, delete or shift process steps), and to support process schema evolution with instance migration, i.e. to propagate process schema changes to already running instances. These requirements must be met without affecting process consistency and by preserving the robustness of the process management system.

Although lagging behind many commercial systems in terms of user interface and some functionality, it provides much more dynamic functionality in areas such as allowing a user to make minor modifications to the process instance that they are currently running.
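To make the idea of an ad-hoc deviation a bit more concrete, here's a minimal sketch in Python (my own toy illustration, not the ADEPT meta model or the AristaFlow API) of a process instance that lets a user insert an extra step into their own running instance without touching the shared template:

```python
from dataclasses import dataclass, field
from typing import List

# Toy model of an ad-hoc deviation, loosely inspired by the ideas described
# above; the real ADEPT/AristaFlow system also verifies structural and
# data-flow correctness before accepting a change.

@dataclass
class ProcessTemplate:
    name: str
    steps: List[str]  # the pre-planned execution sequence

@dataclass
class ProcessInstance:
    template: ProcessTemplate
    position: int = 0                      # index of the next step to execute
    steps: List[str] = field(default_factory=list)

    def __post_init__(self):
        # each instance gets its own copy, so ad-hoc changes stay local to it
        self.steps = list(self.template.steps)

    def insert_step(self, step: str, after: str) -> None:
        """Ad-hoc deviation: add a step to this one instance only."""
        idx = self.steps.index(after)
        if idx < self.position - 1:
            raise ValueError("cannot insert before already-completed work")
        self.steps.insert(idx + 1, step)

    def complete_next(self) -> str:
        step = self.steps[self.position]
        self.position += 1
        return step

template = ProcessTemplate("claim handling", ["register", "assess", "approve", "pay"])
instance = ProcessInstance(template)
instance.complete_next()                                   # "register" is done
instance.insert_step("request extra documents", after="assess")
print(instance.steps)
# ['register', 'assess', 'request extra documents', 'approve', 'pay']
```

The point of the sketch is only that a change to one instance's step list affects neither the template nor any other instance; everything else (correctness checks, data flow, propagating schema changes to running instances) is where the real research effort described above lies.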

He concluded with the idea that BPM technology could become as important as database technology, if done correctly, but it’s a very complex issue due to the impact on the work habits of the people involved, and the desire not to limit flexibility while still providing the benefits of process automation and governance. It’s difficult to predict what real-world process exceptions will occur and therefore what type of flexibility will be required during execution. By providing a process template rather than a rigidly-structured process instance, some of this flexibility can be achieved within the framework of the BPMS rather than forcing the users to break the process in order to handle exceptions.

BPM Milan: Managing Process Variability and Compliance

We finished the day with a panel on Managing Process Variability and Compliance in the Enterprise – An Opportunity Not To Be Missed, or a Fool’s Errand? This was moderated by Heiko Ludwig and Chris Ward of IBM Research, and included Manfred Reichert of the University of Ulm, Schahram Dustdar of Vienna University of Technology, Jyoti Bhat of Infosys, and Claudio Bartolini of HP.

Any multinational company ends up with tools and business processes that are specific to each region or country, adopted typically to respond to the local regulatory environment. This presents challenges in establishing enterprise-wide best practices, process standardization and compliance: the issue is to either establish compliance, or accept and manage variability.

The consensus seems to be “it depends”: compliance provides better auditability on high-value processes, whereas variability provides benefits for processes that need to be highly flexible and agile, and you may not be able to apply the same principles across all business processes. It’s only possible to enforce enterprise-wide process compliance when there is a vital business need; it’s not something to be taken on lightly, since it will almost certainly decrease process agility, which will not have the support of regional management. Even with “compliant” processes, there will be variability across regions, particularly those greatly different in size; compliance may then be defined in terms of certain milestones and quality standards being met rather than a step-by-step identical process.

The panel was run in my least favorite form, namely serial individual presentations (which were fairly repetitive), followed by direct questions from the moderator to each of the panelists. Very little interaction between panelists, no fisticuffs, and not enough stimulating conversation.

BPM Milan: Diagnosing Differences between Business Process Models

Remco Dijkman of the Technical University of Eindhoven presented a paper on Diagnosing Differences between Business Process Models, focusing on behavioral differences rather than the structural differences that were examined in the previous paper by IBM. The problem is the same: there are two process models, likely two versions of the same model, and there is a need to detect and characterize the differences between them.

He developed a taxonomy of differences between processes, both from similar processes in practice and from completed trace inequivalences. This includes skipped functions, different conditions (gateway type with same number of paths traversed), additional conditions (gateway conditions with a potentially larger number of paths traversed), additional start condition, different dependencies, and iterative versus once-off.
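As a rough way to picture this kind of behavioral comparison (my own toy illustration, not the paper's algorithm), imagine representing each model by its set of completed traces and looking for traces that one model allows but the other doesn't; a skipped function, for example, shows up as a trace that is missing an activity relative to its closest counterpart:

```python
# My own toy simplification of diagnosing behavioral differences from traces,
# not the algorithm in the paper.

model_a = {  # each model represented by its set of completed execution traces
    ("receive", "check", "approve", "archive"),
    ("receive", "check", "reject", "archive"),
}
model_b = {
    ("receive", "approve", "archive"),  # allows "check" to be skipped
    ("receive", "check", "reject", "archive"),
}

# Traces that one model allows and the other doesn't signal a behavioral difference.
for trace in model_b - model_a:
    closest = max(model_a, key=lambda t: len(set(t) & set(trace)))
    skipped = [act for act in closest if act not in trace]
    print(f"relative to {closest}, model B can skip {skipped}")
# relative to ('receive', 'check', 'approve', 'archive'), model B can skip ['check']
```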

You can tell it’s getting near the end of the day — my posts are getting shorter and shorter — and we have only a panel left to finish off.

BPM Milan: Detecting and Resolving Process Model Differences

Jochen Kuester of IBM Zurich Research presented a paper on Detecting and Resolving Process Model Differences in the Absence of a Change Log, co-authored by Christian Gerth, Alexander Foerster and Gregor Engels. Detecting differences would be done in the case where a process model is changed, and there is a need to detect and resolve the differences between the models. They focus on detection, visualization and resolution of differences between the process models.

Detection of differences between process models involves reconstructing the change log that transforms one version into the other. This is done by computing fragments for the process models, similar to the process structure tree methods that we saw from other IBM researchers yesterday, then identifying elements that are identical in both models (even if in a different part of the model), elements that are in the first model but not the second, and those that are in the second model but not the first. This allows correspondences to be derived for the fragments in the process structure tree. From there, they can detect differences in actions/fragments, whether an insertion, deletion or move of an action within or between fragments.

They have a grammar of compound operations describing these differences, which can now be used to create a change log by creating a joint process structure tree formed by combining the process structure tree of both models, tagging the nodes with the operations, and determining the position parameters of each of the operations.
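As a very rough sketch of the matching step (my own simplification in Python; the paper works on process structure tree fragments rather than a flat element-to-fragment map), the correspondences and the resulting compound operations might be derived along these lines:

```python
# My own simplification of reconstructing change operations without a change
# log: match elements across two model versions, then classify the rest.

v1 = {"A": "fragment1", "B": "fragment1", "C": "fragment2"}   # model version 1
v2 = {"A": "fragment1", "C": "fragment1", "D": "fragment2"}   # model version 2

deleted  = v1.keys() - v2.keys()          # in version 1 only
inserted = v2.keys() - v1.keys()          # in version 2 only
common   = v1.keys() & v2.keys()          # corresponding elements

operations = []                           # reconstructed "change log"
for elem in sorted(deleted):
    operations.append(("DeleteActivity", elem, v1[elem]))
for elem in sorted(inserted):
    operations.append(("InsertActivity", elem, v2[elem]))
for elem in sorted(common):
    if v1[elem] != v2[elem]:              # same element, different fragment: a move
        operations.append(("MoveActivity", elem, v1[elem], v2[elem]))

print(operations)
# [('DeleteActivity', 'B', 'fragment1'),
#  ('InsertActivity', 'D', 'fragment2'),
#  ('MoveActivity', 'C', 'fragment2', 'fragment1')]
```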

They’ve prototyped this in IBM WebSphere Process Modeler.

BPM Milan: Workflow Simulation for Operational Decision Support

The afternoon started with the session on Quantitative Analysis, beginning with a presentation by Anne Rozinat from the Technical University of Eindhoven on Workflow Simulation for Operational Decision Support Using Design, Historic and State Information, with the paper co-authored by Moe Wynn, Wil van der Aalst, Arthur ter Hofstede and Colin Fidge.

As she points out, few organizations are using simulation in a structured and organized way; I’ve definitely seen this in practice, where process simulation is used much more during the sales demo than in customer implementations. She sees three issues with how simulation is done now: resources are modeled incorrectly, simulation models may have to be created from scratch, and there is more of a focus on design than on operational decisions through lack of integration of historical operational data back into the model. I am seeing the last two problems solved in many commercial systems already: rarely is it necessary to model the simulation separately from the process model, and a number of modeling systems allow for the reintegration of historical execution data to drive the simulation.

Their approach uses three types of information:

  • design information, from the original process model
  • historic information, from historical execution data
  • state information, from currently executing workflows, primarily for setting the initial state of the simulation

They have created a prototype of this using YAWL and ProM, and she walked through the specifics of how this information is extracted from the systems, how the simulation model is generated, and how the current state is loaded without changing the simulation model: this latter step can happen often in order to create a new starting point for the simulation that corresponds to the current state in the operational system.
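As a trivial illustration of the "historic information" input (my own toy example, not how ProM actually mines a log), deriving per-activity service times from an execution log could look something like this:

```python
from collections import defaultdict
from statistics import mean

# Toy example of turning historical execution data into simulation parameters;
# real process mining (e.g. in ProM) is far richer than this.

event_log = [  # (case id, activity, duration in hours) - made-up data
    (1, "assess", 1.5), (1, "decide", 0.8),
    (2, "assess", 2.5), (2, "decide", 1.2),
    (3, "assess", 2.0),
]

durations = defaultdict(list)
for _case, activity, hours in event_log:
    durations[activity].append(hours)

service_times = {activity: mean(values) for activity, values in durations.items()}
print(service_times)   # {'assess': 2.0, 'decide': 1.0}
```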

This ability to seed a simulation from the current state has the potential to turn simulation into a much more interactive and frequently-used capability: being able to run a simulation forward from the current state in the operational system in order to predict behavior over the upcoming period of time. Consider, for example, using the current state as the initial conditions of the simulation, then adding resources to predict how long it will take to clear the actual backlog of work, in order to determine the optimal number of people to add to a process at this point in time. This turns short-term simulation into an operational decision-making tool, rather than just a design tool.
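To make that last scenario concrete, here's a back-of-the-envelope sketch (mine, not the YAWL/ProM prototype, and assuming perfectly divisible work) of the kind of question this enables: seed the calculation with the current backlog and historical service times, then vary the number of resources to see how long clearing the backlog would take:

```python
# Back-of-the-envelope what-if from the current state; my own toy calculation,
# not the YAWL/ProM prototype. Assumes work is perfectly divisible among people.

current_backlog = {"assess": 40, "decide": 5}     # state: in-flight cases per step
service_times   = {"assess": 2.0, "decide": 1.0}  # historic: mean hours per case

def hours_to_clear(backlog, times, resources):
    total_work = sum(backlog[step] * times[step] for step in backlog)
    return total_work / resources

for resources in (5, 8, 12):
    hours = hours_to_clear(current_backlog, service_times, resources)
    print(f"{resources} people -> {hours:.1f} hours to clear the backlog")
# 5 people -> 17.0 hours, 8 people -> 10.6 hours, 12 people -> 7.1 hours
```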