Appian Forum: MEGA Partnership

Terry Lee, MEGA’s VP of North American operations, gave us an overview of MEGA, both in terms of their business process analysis and enterprise architecture capabilities. He stated the real reason for using a BPA tool, rather than just the modeling environment within the BPMS, is the ability to analyze the processes within a larger context: relative to risk analysis, enterprise architecture, and corporate performance management.

A process is analyzed and some level of design is completed in MEGA, then the process is handed off to Appian through a manual export/import for further work in their process designer. There was no mention of round-tripping, although I happened to be sitting beside Dan Hebda (MEGA’s VP of technology) and passed him a note asking about this. He replied that round-tripping would be supported, but isn’t in the current prototype that they’re demonstrating here at the conference. I have a meeting with Dan tomorrow at Gartner, where I’m sure to get a lot more information on this, and Terry summed up at the end of his presentation by mentioning that the future integration would include round-tripping.

Metrics for the executing process are captured using the MEGA Advisor and fed back to the models in MEGA to allow for process analysis and optimization.

I believe that there’s a lot of benefit for many organizations in using a BPA tool such as MEGA for modeling processes — especially the portions of processes that are not automated, hence may not be represented in the process models within a BPMS — and the larger enterprise architecture context. For success, however, this requires two key areas of integration: a seamless bidirectional exchange of process models, and the ability to load executing process logging data back into the BPA. It appears that Appian and MEGA are working hard to achieve both of these.

Appian Forum: Enterprise Rent-A-Car

Pat Steinmann and Dion Beuckman of Enterprise Rent-A-Car presented on their Appian implementation; I didn’t realize that they are not only the largest rental car company in North America, but also family-owned. Steinmann is from corporate IT, Beuckman is with an operating unit in southern California, and they talked about two independent implementations of Appian within Enterprise.

Steinmann started off the presentation, discussing how the focus of their implementation was on the requests for IT services. Originally, they had an AS/400-based request system that was highly manual and error-prone, and they often could not meet the requirement to have a request successfully submitted to IT within 3 days from the original request. It was costing them $212/request in administrative overhead, or $600k/year, any increase in volume required a linear increase in staff, and the time required to train a new employee could stretch to 9 months. Over 200 services were available for request, with the work executed by 60 teams.

They created their Request Online system with a vision to automate the workflow and task execution where possible, allowing the human participants to just do the value-added activities; provide more visibility into the process; and reduce training time for new employees. This led to two basic design goals:

  • Translate a submitted request into specific, actionable tasks
  • Ensure that submitted requests are acted upon

Not only did this require automated processes, but also integration with other systems, rules to define what type of requests could be made by specific users, and process agility.

Four of their own developers and 3 Appian resources created their implementation in 6 months, using joint application design (JAD) sessions with business and IT for what Steinmann described as a very agile development environment: so much so that they had only one design change after they went live.

They have been able to be more proactive with their support of users, for example, by collecting a list of users who were interested in Outlook-iPhone integration, then automatically notifying them when new information was available. They’ve also made the portal easy to use so that users don’t need much (or any) formal training on how to use it, since it’s available across the organization.

Beuckman then went through another implementation that they did, for the vehicle maintenance payables process. They process over 18,000 invoices per month, all paper-based and manually processed, and have a geographically-dispersed workforce that might be involved in approving invoices before payment. Since there are rarely purchase orders for the invoices, there are some fairly complex business rules required for validation of the invoice.

Their aim was to automate non-value-added tasks, expedite cross-departmental (and likely geographically distributed) workflow, centralize A/P processing while maintaining distributed decision-making, reduce error rates, increase process consistency, reduce handoffs and touchpoints, and increase productivity.

Their Appian implementation, used by about 60 employees, went live with its first process about 6 months after the start of development, using one internal and one Appian resource. They did an 8-week pilot with one region, then rolled out the remaining 12 regions over 12 weeks so that now 100% of their maintenance invoices are processed through the application. Their second application, body shop payables, then only took them 6 weeks.

They reduced the human steps in the process from 9-19 down to 1-6, with 40% now auto-approved and loaded directly to PeopleSoft. They have real-time visibility into the process, and can easily identify system bottlenecks and locate missing invoices to determine process problems.

On the surface, this is a pretty standard A/P imaging and workflow application, with one major difference from most similar implementations: they had it up and running in less than 6 months.

Appian Forum: Archstone

The next presentation was from David Carpenter, Director of BPM at Archstone, a residential apartment investor and operator with about 2,600 employees. One of their main challenges was a high employee turnover rate, and the necessity to reduce the learning curve for new people. They wanted to move away from paper-based forms and manual workflow to a more automated workflow with dynamic online forms, with the goal of processes becoming more consistent and coordinated. At the same time, they needed people to be able to use the system with little or no formal training.

They selected Appian because it is a completely web-based solution, has dynamic forms, and requires a minimum of IT resources, since the implementation involved more configuration than coding. Their first focus was a broad but shallow implementation supporting their 2,200 field associates, and they have settled into 90-day development cycles with the goal of delivering 3-6 new processes in that time. They have done this with a team of three non-IT staff, supported by in-house IT resources and Appian’s professional services.

They rolled this out to the field by going on a country-wide road show to present what they were doing and collect feedback, then rolled that feedback into the implementation — he sees this constant communication and integration of the users’ ideas as a key part of the end-user acceptance of the system. They signed a contract in November of 2006, and rolled out 3 critical processes to their 2,200 field associates by the end of April 2007. As of the end of July 2008, they’ve implemented 25 processes, representing 2,000 process instances per month, plus a customized portal for each community, and a central forms repository containing 300 standardized forms.

They’ve just delivered phase 4 of the project, targeted at both community and corporate users, and are planning for phase 5 and beyond: truly a continuous improvement model of change.

Carpenter sees Appian becoming one of their core applications: just like users login to their email first thing in the morning, they login to Appian as well, since that drives their work processes each day.

Like Nokia Siemens, they’re using a non-IT group since IT just doesn’t deliver fast enough (that’s my assessment, he didn’t say that): in my experience, this is a very common problem when BPM projects are run by IT, and a product like Appian that can be selected and implemented by non-IT resources has a huge advantage as long as the corporate culture supports business-led technology implementations (most don’t).

Appian Forum: Product Update from Malcolm Ross

Malcolm Ross, Director of Product Management and someone who I once referred to as an über demo god, gave us an update on the Appian product. He started with their product development philosophy:

  • flexibility
  • ease of use
  • comprehensive
  • build for the future, which is how they position their web-based AJAX process modeler, in contrast with most of the competition’s Eclipse-based desktop process modelers
  • listen to customers

He reviewed the enhancements in their latest version, Appian Enterprise 5.7:

  • improved web services handling
  • custom data types, allowing for complex data sets based on XML structures
  • WSRP portlet consumption for creating mashups directly within an Appian dashboard (which, of course, he illustrated by showing my RSS feed integrated into a dashboard)
  • improved security for outside-the-firewall applications
  • a number of smaller enhancements, mostly around usability for designers

They’ve also released Appian for SharePoint, providing single sign-on, the ability to snap a number of different Appian views into a SharePoint page, and access to SharePoint content from the Appian environment.

He gave a bit more detail on the Appian-MEGA integration, although I think that it’s still early days on this, and he didn’t comment on round-tripping, although he implied that it was possible. He described being able to discover Appian processes from MEGA — the opposite of what is usually done with BPA-BPM tools integration — and I’m waiting for the (I hope) more in-depth details in this afternoon’s session. Their overall goal is to integrate process models into a larger enterprise architecture picture, allowing for risk analysis and other corporate performance analysis and management.

Appian Anywhere, their software as a service solution, is based on the same core code base as Appian Enterprise, so these enhancements will be available there as well.

He gave us a brief summary of what’s coming up in Appian Enterprise 6.0 in the first half of 2009, including new end-user and application designer interfaces, and support for managing distinct process-based applications within their environment.

Appian Forum: Nokia Siemens

Nick Deacon is Global Head of BPM for consulting and systems integration within Nokia Siemens Networks, a global network communications services firm. The consulting and systems integration group, with a staff of 3,500-4,000 and annual sales of 400M Euro, has the usual problems of managing a workforce of service providers, and was looking for a BPM solution — easy to use, relatively low cost and easy to customize — to help them better manage what he referred to as the Mean Time Between Surprises. They were looking to quickly implement their core processes of sales, service execution, and resource and competence management, before global IT noticed what they were doing and turned it into a mega-project.

Since the project started in February (yes, this February), they have implemented their first module (service delivery process) and rolled it out to 400 users across all of their global regions, including portals and dashboards for analyzing and managing the business. At the same time, they were working on the resource and competence management process module, which is about to start into testing, and the sales and technical support processes will be ready for deployment in November. Product and portfolio management will follow in December, and offshore delivery management in February. Basically, that means that they will have deployed BPM across all of their major business processes within 12 months.

Through reduced data entry, increased sharing of information and increased reuse of project assets, they expect productivity savings of 12-16M Euro per year, which (I hope 🙂 ) provides an ROI of much less than a year.

There’s now interest from other areas within NSN, and their projects are becoming a sort of proof of concept for BPMS across the much larger organization, not just within the consulting and systems integration group.

Deacon had nothing but good things to say about Appian in terms of both the product and how their professional services team has worked with NSN to deliver the right business functionality on a tight schedule across a global enterprise. He sees Appian as aligned with NSN’s vision and strategy for BPM, and as a true partner on their implementation. They looked at larger BPM vendors, but found their solutions too rigid and too expensive.

Appian Forum: Matt Calkins

Appian’s CEO was up for the only vendor executive presentation of the conference, to discuss Appian and its community of customers and partners. As a somewhat late entrant to the BPM market, they had only about 15 customers in 2004 growing to almost 80 (active) customers in 2007, and expanded from a primarily government focus to include many other industry verticals.

Appian’s view of BPM is that although it’s becoming mainstream, email still owns 99% of what could be the BPM space through the implementation of ad hoc processes. Because of that, it’s essential for a BPMS to be easy for all types of users — both designers and end users — and provide very little resistance to adoption. A fully web-based product suite is one part of this, and Appian is one of the few vendors to provide a web-based process designer, and their move into a hosted model reduces the frictional costs further. He discussed a number of their technical innovations, stating “we didn’t do this just because we’re nerds”, but sees them as essential to providing a good BPMS.

With the downturn in the US market, Appian and other vendors are being forced to look outside their borders for new customers, and finding — surprise! — that there are significant international opportunities. Their EMEA sales grew by over 300% year-over-year, and they’re seeing more potential business there.

He also announced Appian ShareBase, a wiki (his word, but actually more of a shared repository) of code objects pertaining to Appian, including process models, rules, smart nodes and any other design objects that can be shared, all of it available free for other Appian customers to reuse. Appian will be seeding ShareBase with a substantial amount of their own intellectual property. No word on the licensing ramifications here, but based on the “free to reuse” statement, I assume that it’s pretty open.

He also discussed their new partnership with MEGA for process modeling and enterprise architecture, more of which will be discussed later in the day.

Appian Forum: Connie Moore keynote

Three days ago, I was in Rome — original home of the Roman Forum and the Appian Way — and now I’m at Appian Forum: Appian‘s first user conference. Samir Gulati, VP of Marketing, delivered some short opening remarks including the “Sandy Kemsley Conference Checklist”, showing how they measured up on my basic requirements for conferences: wifi, online agenda, good content, frequent networking breaks, and other good stuff. They missed on the power plugs at the tables, but other than that, I have to give them full marks.

They had about 150 people sign up for the conference, although I don’t think that there are that many in the room this morning; this was not a paid conference, which tends to result in a higher number of no-shows, but there’s a good cross-section of Appian’s customers and partners, as well as analysts.

After Samir’s short introduction, he turned it over to Connie Moore of Forrester for a keynote on Design for People, Build for Change (wait, this sounds familiar…). She had a great graphic that expanded on some of the things that I’ve heard Forrester people talk about in the past, highlighting the “design for people” part of the equation through social networking and other techniques, whereas we’ve often focused (maybe too much) on the “build for change” part of business innovation.

She discussed four factors creating the “perfect storm” that’s led to the current situation:

  • Design evolution, where more products are being designed for optimal use and customer experience, rather than the conveniences of the manufacturer or based on the preconceived notions of the designer. There are many consumer products that illustrate this, but it holds equally true with business computer systems.
  • Process evolution, where we do more continuous improvement than big bang reengineering for both technical and cultural reasons. The current range of BPM products, with monitoring and optimization built in, allow for this sort of continuous improvement in ways that were not previously possible, which has helped to facilitate this shift.
  • Workforce evolution, with the boomers — along with their knowledge of business processes — starting to retire, and the systems developed for those boomers not really suitable for the millennials who grew up digital. This forces the move to different computing paradigms, particularly social networking, as well as different corporate culture in order to attract and retain the talent.
  • Software evolution, moving from a traditional model to software as a service, Web 2.0, open source and free software in both consumer and enterprise environments.

All of this means that we need to bridge between structured human activities and system-intensive processes that we’ve dealt with in traditional enterprise systems, and the ad hoc, messy, chaotic human activities that we see in the new generation of dynamic business applications. Earning her keep, she highlighted how Appian brings content and collaboration to the usual BPM functionality seen with other vendors, then walked through an example of a dynamic business application.

She discussed the need to forge partnerships between stakeholders, preferably by collocating the business and IT people on a project team so that they create a more seamless project. I’ve seen a lot of projects where there is a serious disconnect between the business and IT participants, and having them sit and work together could only help that situation.

Forrester went out to a number of enterprises to see how they build for change, and saw a few different models:

  • An IT-focused model where the technical team always makes changes to the process (hopefully based on conversations with the business)
  • A blended model where the business owners meet with the project team on a regular basis, and the process changes are made by business analysts or technical team members, depending on the requirement

There needs to be a change model that allows for both continuous change — every 1-2 weeks for process tuning — and for new process versions — every 2-6 months for new processes and major changes. This change model needs to be incorporated from the beginning in any process project to allow for continuous improvement, or you’ll end up with inflexible processes; at the very least, plan on a minimum of 3 iterations shortly after implementation before the process is even remotely correct. At the same time, you need to consider forming a process center of excellence to help with overall process improvement, and consider the link to SOA in order to provide a technical framework for dynamic business applications.

When Forrester asked enterprise architects about the primary benefit of BPM, the largest response (24%) was increased productivity, with process visibility (18%) and agility (15%) following. Other benefits included the ability to model processes, consistent processes across business units/geographies, and reduced reliance on IT for process improvement. By looking at the perceived degree of success and the existence of a BPM center of excellence, they found a clear correlation: about half of those who said that BPM was a rousing success had a COE, whereas less than 5% of the failing efforts had a COE.

Her experience — which matches mine — shows that going with a large systems integrator is not a good way to build the necessary skills within an enterprise to achieve ongoing process improvement, and she sees direct skills transfer from the BPM vendor as having a greater degree of success. Business analysts need to become process analysts, and developers need to become assemblers of dynamic applications. She finished up with several suggestions on how to get started, for business people, IT and executives.

Although there was a lot of repetition from earlier versions of this message that I’ve heard her deliver, I do see some evolution and refinement of the message. Some of the stats and ideas go by pretty fast, however; the audience might benefit from a bit less of a PowerPoint karaoke feeling.

There was an audience question about how Web 2.0 concepts and products — mostly being developed by tiny companies — will be integrated with traditional BPM products from larger companies; Moore didn’t really answer the question, but discussed how the BPM platform vendors are building their own Web 2.0 functionality, and many other BPM vendors are partnering with SharePoint or other collaborative tools. I think that there’s a lot of room for the Enterprise 2.0 vendors and the non-platform BPM vendors to get together to create social networking-enabled processes that are far beyond what’s available from any of the platform vendors (although IBM is doing some pretty innovative stuff), or through SharePoint integration.

BPM Milan: The Future of BPM

Peter Dadam of the University of Ulm opened the last day of the conference (and my last session, since I’m headed out at the morning break) with a keynote on the future of BPM: Flying with the Eagles, or Scratching with the Chickens?

He went through some of his history in getting into research (in the IBM DB2 area), with the conclusion that when you ask current users about what they want, they tend to take the current technology as a given, and only request workarounds within the constraints of the existing solution. The role of research is, in part, to disseminate knowledge about what is possible: the new paradigm for the future. Anyone who has worked on the bleeding edge of innovation recognizes this, and realizes that you first have to educate the market on what’s possible before you can begin to start developing the use cases for it.

He discussed the nature of university research versus industrial research, where the pendulum has swung from research being done in universities, to the more significant research efforts being done (or being perceived as being done) in industrial research centers, to the closing of many industrial research labs and a refocusing on pragmatic, product-oriented research by the rest. This puts the universities back in the position of being able to offer more visionary research, but there is a risk of just being the research tail that the industry dog wags.

Moving on to BPM, and looking at it against a historical background, we have the current SOA frenzy in industry, but many enterprises implementing it are hard-pressed to say why their current SOA infrastructure provides anything for them that CORBA didn’t. There’s a big push to bring in BPM tools, particularly modeling tools, without considering the consequences of putting tools like this in the hands of users who don’t understand the impact of certain design decisions. We need to keep both the manual and automated processes in mind, and consider that exceptions are often not predictable; enterprises cannot take the risk of becoming less flexible through the implementation of BPM because they make the mistake of designing completely structured and rigid processes.

There’s also the issue of how the nature of web services can trivialize the larger relationship between a company and its suppliers: realistically, you don’t replace one supplier with another just because they have the same web services interface, without significant other changes (the exception to this is, of course, when the product provided by the supplier is the web service itself).

He sees that there is a significant risk that BPM technology will not develop properly, and that the current commercial systems are not suitable for advanced applications. He described several challenges in implementing BPM (e.g., complex structured processes; exceptions cannot be completely anticipated), and the implications in terms of what must exist in the system in order to overcome this challenge (e.g., expressive process meta model; ad-hoc deviations from the pre-planned execution sequence must be possible). He discussed their research (more than 10 years ago now) in addressing these issues, considering a number of different tools and approaches, how that resulted in the ADEPT process meta model and eventually the AristaFlow process management system. He then gave us a demo of the AristaFlow process modeler — not something that you see often in a keynote — before moving on to discuss how some of the previously stated challenges are handled, and how the original ADEPT research projects fed into the AristaFlow project. The AristaFlow website describes the motivation for this joint university-industry project:

In particular, in dynamic environments it must be possible to quickly implement and deploy new processes, to enable ad-hoc modifications of single process instances at runtime (e.g. to add, delete or shift process steps), and to support process schema evolution with instance migration, i.e. to propagate process schema changes to already running instances. These requirements must be met without affecting process consistency and by preserving the robustness of the process management system.

Although lagging behind many commercial systems in terms of user interface and some functionality, this provides much more dynamic functionality in areas such as allowing a user to make minor modifications to the process instance that they are currently running.
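One way to picture the instance-migration requirement quoted above is a simple prefix check: a running instance can adopt the new schema only if the work it has already executed is still valid under that schema. This is a minimal sketch with made-up step names, not AristaFlow’s actual algorithm, which handles far more than linear sequences:

```python
# Hedged sketch of schema evolution with instance migration, in the
# spirit of the ADEPT/AristaFlow requirements: a running instance can
# migrate to the new schema only if its already-executed steps still
# form a valid prefix of that schema. All names here are illustrative.

def can_migrate(executed_steps, new_schema):
    """new_schema is the ordered list of steps in the new process version."""
    return executed_steps == new_schema[:len(executed_steps)]

old_schema = ["receive", "check", "approve", "ship"]
new_schema = ["receive", "check", "risk_review", "approve", "ship"]

# An instance that has only done "receive" and "check" can migrate safely;
# one that has already done "approve" is past the change point and cannot.
assert can_migrate(["receive", "check"], new_schema)
assert not can_migrate(["receive", "check", "approve"], new_schema)
```

In a real system the check is on graph structure and data flow rather than a linear step list, but the principle — propagate schema changes only to instances where consistency is preserved — is the one the AristaFlow description calls out.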

He concluded with the idea that BPM technology could become as important as database technology, if done correctly, but it’s a very complex issue due to the impact on the work habits of the people involved, and the desire not to limit flexibility while still providing the benefits of process automation and governance. It’s difficult to predict what real-world process exceptions will occur and therefore what type of flexibility will be required during execution. By providing a process template rather than a rigidly-structured process instance, some of this flexibility can be achieved within the framework of the BPMS rather than forcing the users to break the process in order to handle exceptions.

BPM Milan: Diagnosing Differences between Business Process Models

Remco Dijkman of Eindhoven University of Technology presented a paper on Diagnosing Differences between Business Process Models, focusing on behavioral differences rather than the structural differences that were examined in the previous paper by IBM. The problem is the same: there are two process models, likely two versions of the same model, and there is a need to detect and characterize the differences between them.

He developed a taxonomy of differences between processes, both from similar processes in practice and from completed trace inequivalences. This includes skipped functions, different conditions (gateway type with same number of paths traversed), additional conditions (gateway conditions with a potentially larger number of paths traversed), additional start condition, different dependencies, and iterative versus once-off.
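A rough way to picture trace-based comparison: treat each model as the set of completed traces it can produce, and take the set differences. The traces and step names below are invented for illustration; the paper’s taxonomy classifies the resulting differences much more finely than this:

```python
# Hedged sketch: surfacing behavioral differences between two process
# models by comparing their sets of completed traces. Illustrative only;
# real models have infinitely many traces once loops are involved.

def behavioral_diff(traces_a, traces_b):
    only_a = traces_a - traces_b   # behavior model B cannot exhibit
    only_b = traces_b - traces_a   # behavior model A cannot exhibit
    return only_a, only_b

a = {("receive", "check", "ship"),
     ("receive", "check", "reject")}
b = {("receive", "check", "ship"),               # shared happy path
     ("receive", "check", "check", "ship")}      # iterative vs. once-off

only_a, only_b = behavioral_diff(a, b)
# only_a == {("receive", "check", "reject")}
# only_b == {("receive", "check", "check", "ship")}
```

The two difference sets are then the raw material for classifying a difference as, say, a skipped function or an iterative-versus-once-off change, per the taxonomy above.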

You can tell it’s getting near the end of the day — my posts are getting shorter and shorter — and we have only a panel left to finish off.

BPM Milan: Detecting and Resolving Process Model Differences

Jochen Kuester of IBM Zurich Research presented a paper on Detecting and Resolving Process Model Differences in the Absence of a Change Log, co-authored by Christian Gerth, Alexander Foerster and Gregor Engels. Detecting differences would be done in the case where a process model is changed, and there is a need to detect and resolve the differences between the models. They focus on detection, visualization and resolution of differences between the process models.

Detection of differences between process models involves reconstructing the change log that transforms one version into another. This is done by computing fragments for the process models, similar to the process structure tree methods that we saw from other IBM researchers yesterday, then identifying elements that are identical in both models (even if in a different part of the model), elements that are in the first model but not the second, and those that are in the second model but not the first. This allows correspondences to be derived for the fragments in the process structure tree. From there, they can detect differences in actions/fragments, whether an insertion, deletion or move of an action within or between fragments.

They have a grammar of compound operations describing these differences, which can now be used to create a change log by creating a joint process structure tree formed by combining the process structure tree of both models, tagging the nodes with the operations, and determining the position parameters of each of the operations.
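The detection step described above can be sketched, very loosely, in a few lines: classify each element as added, deleted, or moved between fragments. The dictionaries, function name and fragment ids here are illustrative assumptions, not the paper’s actual data structures:

```python
# Hedged sketch of the detection step: given two model versions, each
# mapping element name -> enclosing fragment id (from the process
# structure tree), classify elements as added, deleted, or moved.
# Names and structures are invented for illustration.

def diff_models(old, new):
    added = {e for e in new if e not in old}
    deleted = {e for e in old if e not in new}
    moved = {e for e in old if e in new and old[e] != new[e]}
    return added, deleted, moved

old = {"Check order": "F1", "Approve": "F1", "Ship": "F2"}
new = {"Check order": "F1", "Approve": "F2", "Invoice": "F2"}

added, deleted, moved = diff_models(old, new)
# added == {"Invoice"}, deleted == {"Ship"}, moved == {"Approve"}
```

Each classification would then map onto one of the compound operations in their grammar (insert, delete, move), from which the reconstructed change log is assembled.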

They’ve prototyped this in IBM WebSphere Process Modeler.