Travel-crazy again

Having spent almost two months without getting on a plane, I’m back on the road for the next few weeks:

  • April 22-23: Washington DC for the Architecture and Process conference
  • April 29-May 2: San Francisco for TUCON, TIBCO’s user conference
  • May 5-7: Orlando for SAPPHIRE, SAP’s user conference
  • May 13-14: Chicago for BEA.Participate.08, BEA’s user conference

Expect to see lots of blogging from the conferences. If you’re attending any of them, ping me for a meet-up.

Disclosure: for the three vendor conferences, the respective vendors are paying my travel expenses to attend.

BEAParticipate and TUCON wrapups

Last session of the conference, and it was a tough choice: the ALBPM Experience track was featuring a talk about BAM, but since I’d already covered this in the past two days, I decided on the performance tuning session. Unfortunately, 10 minutes into the time slot, the Q&A for the previous session was still dragging on; I took this as a message from the conference gods that my time here was done and ducked out to write this wrapup post before I leave for the airport. I’m back in Toronto for a week catching up on real work (the sort that actually pays, as opposed to this blogging gig) so blogging may be light for the rest of the week. My next conference is Shared Insights’ Portals and Collaboration in Las Vegas later this month, where I’m speaking on the changing face of BPM.

It’s been an interesting experience attending two (competing) vendors’ user conferences back-to-back: TIBCO last week, and BEA this week. Since this week’s conference was only a subset of BEA’s customer base — those that use ALUI and ALBPM — it was less than half the size of last week’s TUCON, but at both conferences I found myself having to make hard choices about what to see in any particular time slot, since there was a lot of good content at both. Since both of these vendors come from the more technical integration space, and gained their BPM products through fairly recent acquisitions rather than organic growth (Staffware in the case of TIBCO, and Fuego in the case of BEA), the conferences were still quite focussed on technical rather than business attendees. TIBCO, having a head start on the BPM space by a couple of years, probably did a bit better at addressing the business part of the audience, but they both still have a long way to go. I contrast this with the FileNet user conferences that I’ve attended over the years, which have a much higher percentage of business attendees (by my assessment) and many more topics specifically addressing their needs.

The reception that I had at both conferences was nothing short of amazing. My hosts made sure that I had everything that I needed to work productively (although there wasn’t much that they could do about the crappy wifi in either location), access to the people who I wanted to talk to, and wined and dined me around town in the evening — my diet starts tomorrow. Special thanks to Jeff and Emily at TIBCO, and Jesper and Marissa at BEA for being the point people during my visits, although there were many others involved at both vendors.

Just to finish up, I’ve been noting the logistical things that make attending a conference a lot better for me. Not to sound like too much of a prima donna, but here they are:

  • Wifi. Free wifi. In fact, I would go so far as to say that technically-oriented conferences should only be held in hotels that offer free wifi throughout the property, both in public areas and guest rooms.
  • Power. Although less important than wifi, because I can run for a couple of hours without it, the last thing that I want to do is have to switch from blogging to paper note-taking because my battery dies during a session.
  • Tea. Hot. Preferably green.
  • T-shirts. What is it with all the fancy paper notebooks being given away as conference schwag this year, with nary a t-shirt in sight? All I know is that if I go home without a man’s large t-shirt from the conference, I hear about it for days. Particularly this week, when I’m missing his birthday to be here.

I have to say, both TIBCO and BEA failed me on the t-shirt requirement. 😉

BEAParticipate: BPM for Compliance

Brandon Dean of BEA talked about how to use BPM for compliance and improved visibility into processes. I wrote a course on compliance and BPM recently, and I was interested in how they’re seeing this roll out amongst their customer base.

Regulatory compliance (e.g., SOX) and any sort of commercial compliance (e.g., SLAs) or organizational compliance (e.g., internal KPIs) have many of the same requirements: processes need to behave in a consistent fashion, and any exceptions have to be handled using a standard method. Measurements of how well the process is meeting its stated compliance goals are critical to understanding whether or not the underlying business process is compliant. This, of course, plays directly to the strengths of BPM: providing a platform for standardizing and, where possible, automating processes; integration of multiple systems; consistent exception handling; security on the process steps and a comprehensive audit trail of who did what, and when; and monitoring and reporting for visibility into the processes and proactive alerts when they start to wander out of compliance.

Dean covered how to position BPM for compliance, starting with a great categorization of organizational types, ranging from companies that already have compliant processes but just need a better audit trail, to those that are actively trying to find ways around compliance. He made a point that I also discussed in my compliance course: if you implement compliance on a regulation-by-regulation basis, it’s a lot more expensive and time-consuming. In fact, I used a quote from a Gartner report from 2004, in the middle of the SOX gold rush:

Enterprises that choose one-off solutions for each regulatory challenge that they face will spend 10 times more on compliance projects than their counterparts that take a proactive approach.

He went through a number of case studies, showing how compliance was facilitated by BPM:

  • Dental insurance claims processing, which started out as a completely manual process that had no audit trail and didn’t enforce standard rates and practices. Using BPM, they not only decreased cycle time on some processes from 3 days to 8 minutes, but also met HIPAA compliance requirements.
  • Trade processing, where the SLA was not being met and they were risking losing the ability to execute trades. BPM allowed them to set alerts on trades that arrived but didn’t complete for some reason, so that any manual intervention required could be performed in time to meet their SLA. This also allowed them to do follow-the-sun processing for more intelligent human resource allocation.
  • Residential mortgage processing, which wasn’t able to track requests for special handling in loan origination, causing them to lose customers. Using BPM, documents were automatically rendezvoused with waiting processes, and work was presented only once it was ready to be processed, rather than having people track missing documents manually. This also automated feedback to the brokers to submit the necessary documents, reducing wait times. A major gain was in making sure that all the information was gathered in a timely manner, with nothing presented for processing until it was complete.

Although I think that Dean’s definition of compliance is a bit stretched to include both customer SLAs and internal KPIs, his points are valid for developing many types of business cases for BPM.

BEAParticipate: Building your own UI

First session of the last morning, Eduardo Chiocconi of BEA and Rob Wald of JPMorgan Chase talked about the ALBPM UI: what comes out of the box, and what you can build yourself.

Out of the box, ALBPM has three user interface alternatives:

  • HiPer WorkSpace, a full user workspace with menus based on the user’s permissions, a view of their inbox, and views of any instances that the user opens. It uses CSS if you want to change styles, and can be customized further in terms of removing buttons and other controls.
  • JSR-168 portlets for any standard portal environment, such as WebLogic (see the portlet skeleton sketched after this list).
  • WorkSpace Extensions WorkList Portlets that can be plugged into ALI and provide additional integration functionality over the standard portal interface, such as integration with the ALI Collaboration environment.
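
Since JSR-168 is just the standard Java portlet contract, here’s a minimal sketch of what a worklist portlet looks like structurally. To be clear, this is not BEA’s actual portlet code, only an illustration of the standard that lets such portlets drop into any compliant portal such as WebLogic; the engine call is a placeholder.

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Minimal JSR-168 portlet skeleton: a worklist portlet renders the
// current user's task list in its view mode.
public class WorklistPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // The portal supplies the authenticated user, which is what makes
        // single sign-on possible; a real portlet would use it to fetch
        // work items from the process engine.
        String user = request.getRemoteUser();
        out.println("<p>Work items for " + (user == null ? "guest" : user) + "</p>");
    }
}
```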

They’re working on consolidating these interfaces in order to reduce the user learning curve.

If those don’t work for you, then you can create your own user interface, either browser-based or rich client, using the available APIs. For building web clients, many of their customers have used JSP to re-code the entire user interface, then used servlets to access the engine. Alternatively, you can use Struts or JSF/AJAX. All of these can use their Java-based process API, PAPI, or the web services version, PAPI-WS, to retrieve instances from the engine, or WAPI (a servlet-based API) to execute interactive activities such as screen flows.

For rich clients, they’re seeing a lot of .Net development that uses PAPI-WS to retrieve instances, then builds the UI around them. It’s also possible to use Eclipse or Swing to build rich user interfaces that call PAPI directly. This is more complex for interactive activities, but there are ways to work around that.

To sum up, there are three public APIs (a usage sketch follows this list):

  • PAPI (process API), which is a Java implementation. If you’re working in a Java environment, this provides the tightest integration and best functionality. It manages multiple process engines transparently, and does instance caching on the client side to reduce latency in connecting to the engine — a critical performance factor.
  • PAPI-WS is a subset of PAPI that operates as web services (SOAP), although this is being extended in the near future to provide the full functionality of PAPI. There may be a .Net version of PAPI in the future, but for now you have to use PAPI-WS if you’re developing in .Net (e.g., ASP.Net); it can also be used from any web service client. Right now, PAPI-WS is part of the HiPer WorkSpace, but will be decoupled as a self-contained web application in the future. It’s also possible to expose processes directly as web services, as we heard in an earlier session, which provides another point of integration from a web service or .Net development environment.
  • WAPI, which is a servlet API that can be used to launch the UI of an interactive activity at a point in the process, which can’t be done in PAPI or PAPI-WS.
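
Since I can’t reproduce the exact PAPI class names here, the following is a hedged sketch with hypothetical names (ProcessClient, ProcessInstance, the endpoint) showing the pattern described in the session: connect to the engine once, then serve a participant’s worklist from the client-side instance cache.

```java
import java.util.List;

// Hypothetical facade in the shape of a process API client; the real PAPI
// class and method names differ. This only illustrates the pattern:
// connect once, then fetch (cached) instances per participant.
interface ProcessClient {
    List<ProcessInstance> instancesFor(String participantId);
}

// Simplified stand-in for a process instance as seen by a worklist.
record ProcessInstance(String id, String activity, String status) {}

public class WorklistExample {
    public static void main(String[] args) {
        ProcessClient client = connect("bpm-engine.example.com"); // hypothetical endpoint

        // Repeated worklist refreshes would hit the client-side cache rather
        // than round-tripping to the engine, which is the latency point made above.
        for (ProcessInstance pi : client.instancesFor("some.user")) {
            System.out.printf("%s at %s (%s)%n", pi.id(), pi.activity(), pi.status());
        }
    }

    // Stub so the sketch compiles; a real client would hold engine
    // connections and the instance cache.
    static ProcessClient connect(String host) {
        return participantId -> List.of(
                new ProcessInstance("claim-1017", "ReviewClaim", "pending"));
    }
}
```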

With any custom UI, there’s always the question of single sign-on. With the WorkSpace Extensions WorkList Portlets in ALI, that’s handled natively; in the HiPer WorkSpace and JSR-168 portlet implementations it requires some customization, although there is a single sign-on login servlet provided with the JSR-168 portlets to make this easier.

Getting to the specific JPMorgan case study, they created a custom user interface since, like many large companies, they want integration with other applications in their environment and want more control over the look and feel of the interface. It’s possible to just create custom JSPs and use them in the standard work portal framework, which provides a great deal of control over the UI without completely rewriting it, but this wasn’t sufficient for many of their applications. What they ended up doing was creating a completely custom inbox using Struts/JSP/GWT with PAPI: one example that he showed was using Struts and AJAX via the Google Web Toolkit to manage financial reconciliation processes. They’re also using IceFaces, an open source RenderKit implementation of JSF (as a replacement for Struts) that supports AJAX, to create visual drag-and-drop components for use in an IDE such as Eclipse. Since JPMorgan is dedicated to the use of open source, they’re doing some innovative development that’s not seen in most corporate environments, but maybe should be. They’re also using the JSR-168 portlets in a more standard portal implementation, and building rich clients with Eclipse.

On the back end of their implementation, they’ve found that some of the PAPI protocols don’t work well over wide-area networks, such as between their US and Japan operations, so they do quite a bit of preloading of the PAPI cache.
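
A hedged sketch of what that preloading might look like, reusing the hypothetical ProcessClient facade from the sketch above: warm the cache for known-busy participants at startup, so the first worklist request over the slow link doesn’t pay the full round trip.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical cache warming over a slow WAN link: kick off instance
// fetches for known-busy participants at startup so their first real
// worklist request hits an already-populated client-side cache.
public class CacheWarmer {
    public static void warm(ProcessClient client, List<String> participants) {
        for (String participant : participants) {
            // Fire-and-forget: each fetch populates the cache asynchronously.
            CompletableFuture.runAsync(() -> client.instancesFor(participant));
        }
    }
}
```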

JPMorgan has implemented ALBPM as a centralized shared service in order to provide efficient use of both human and server resources: centralized code and best practices on the human side, and a single ALBPM server handling 10 applications without difficulty.

BEAParticipate: BAM

Eduardo Chiocconi of BEA gave us a technical view of the ALBPM BAM functionality: what’s available out of the box, the extensions, how to create customized dashboards, security, and a bit of the architecture underlying it all so that we have a bit of an understanding of what happens in the underlying services and data stores when a custom key performance indicator (KPI) is defined.

Like every other BPM vendor’s BAM, ALBPM’s BAM is visualized as a set of dashboards that show KPIs for the purpose of monitoring the health of a process and early problem detection. There are some out-of-the-box dashboards including widgets such as gauges and charts attached to a data source, and the ability to create custom dashboards. As we saw in the architectural view this morning, there’s a BAM database to collect and aggregate the analytics data from one or more process engines, plus external data sources if you want a combined view. There is a single BAM database for each directory service, and an updater service that executes regularly to pull data from the associated engine database(s) to the BAM database. Data in the BAM database is very granular — down to the second, if required — but is flushed out as it ages, typically after a day. The OLAP data mart, which has the same data as the BAM database and is updated by the same service, is much less granular and is not automatically purged; this is used for historical analytics rather than the near-real-time requirements of the BAM database.
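
As a rough illustration of that updater-service pattern, here’s a minimal sketch of a scheduled job that copies new audit rows into a BAM table and purges anything past the one-day window. The JDBC URL, credentials, table names and database link are all hypothetical; the real ALBPM schema and updater are internal to the product.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a polling updater: every minute, pull newly-audited events
// from the engine database into the BAM database, then age out rows
// older than the one-day near-real-time window.
public class BamUpdater {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(BamUpdater::refresh, 0, 60, TimeUnit.SECONDS);
    }

    static void refresh() {
        // Hypothetical Oracle BAM database, reached over JDBC; the engine
        // tables are visible through a database link called engine_db.
        try (Connection bam = DriverManager.getConnection(
                 "jdbc:oracle:thin:@bamhost:1521:BAM", "bam_user", "secret");
             Statement stmt = bam.createStatement()) {
            stmt.executeUpdate(
                "INSERT INTO bam_events " +
                "SELECT * FROM engine_audit@engine_db " +
                "WHERE event_time > (SELECT NVL(MAX(event_time), SYSDATE - 1) FROM bam_events)");
            stmt.executeUpdate(
                "DELETE FROM bam_events WHERE event_time < SYSDATE - 1");
        } catch (SQLException e) {
            e.printStackTrace(); // a real service would log and alert
        }
    }
}
```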

The out-of-the-box dashboards cover instance workload, percentage workload by organizational unit, and performance (e.g., end-to-end cycle time) with drill-downs to more granular levels, plus a unified dashboard combining all three of these measures. Surprisingly, these widgets are not currently provided as standard portlets, but can be wrapped into a portlet if required.

Most organizations will want to define their own KPIs and create their own dashboards: KPIs can be defined by a business analyst in the Designer as dimensions (e.g., time or geographic aggregation) or measures (e.g., averages and other statistical aggregations), and can be based on standard process variables or business variables. This causes a new column to be created in each of the three main BAM database tables to capture the necessary data for the three display widgets for that measure or dimension.

It’s also possible to specify the points in the process where the KPI data are captured and sent to the BAM database in addition to allowing the automatic update process to occur, giving it a sort of audit functionality. Internally, the BAM data are generated from the process engine’s audit trail, so you’ll have to have auditing enabled for all of the processes and events that you want to track in BAM (in many cases, you would turn off auditing for processes and events that don’t require it in order to improve performance).

ALBPM allows for role-based security access to the BAM dashboards, so that only specific roles can see them.

Future directions are to allow ad hoc dashboard creation and move to event-driven BAM, although that will require some architectural changes to the underlying database and services in order to handle the increased load that will result from allowing everyone to roll their own analytics.

The more I look at it, the less I’m convinced that all the BPM vendors should be developing their own BAM like this; I think that there could be a market for a BAM product that can connect to many different BPM products, as soon as we get some standardization around the process engine audit trails that are typically used to populate BAM databases.

BEAParticipate: The Future of BPM

Jesper Joergensen gave us BEA’s view of how BPM is changing business and the future of BPM. Actually, he started out with Gartner’s view of expected market growth (hockey stick, anyone?) and how BPM is becoming more and more a part of organizations’ productivity improvement initiatives as it moves from opportunistic to pervasive adoption.

There are a number of obstacles to BPM adoption, however, many of them cultural:

  1. The ivory tower of process expertise, where a few process experts are doing the modelling. Easy-to-use modelling tools like ALBPM are helping to change that, but my opinion is that we need to have more pervasive technology to enable the shift, and that’s going to be web-based, like Appian’s process designer or Lombardi’s Blueprint process discovery tool.
  2. The ROI barrier: many current opportunistic BPM projects are low-hanging fruit in terms of ROI, but projects need to deliver faster and cheaper in order to implement processes with a lesser potential return.
  3. Getting beyond just system-to-system orchestration and adding human-facing steps to the process. Stop thinking of those processes as “human-interrupted” and accept people as necessary actors in BPM-automated processes.
  4. Managing complexity and scale.

Jesper went on to discuss a number of areas where practices and technologies need to evolve (note that this is not a statement of where ALBPM is going, just how he thinks that the BPM market needs to change):

  • Web-based process modelling (which I obviously agree with).
  • Improved standards and interoperability, especially between a vendor’s own tools if they have multiple tools for discovery, modelling and design.
  • Automated process discovery based on monitoring (which I saw recently in a vendor demonstration but can’t at this moment recall which one), where historical trends in manual decision-making are used to suggest automation of certain decision points.
  • Better integration of BPM and BI, since many products currently have very separate BPM and analytics environments that don’t integrate well.
  • Standardization and service-enablement of process data; I think that BPRI may help with some of the standardization, but it likely needs to be taken further in terms of how process instance data can be extracted from a process engine via RSS feeds or other integration mechanisms.
  • Process decision support, e.g., making suggestions based on historical decisions, and potentially raising exceptions if the current decision doesn’t match the past trend.
  • More open process flows to allow for dynamic changes to the process flow at runtime, even if they weren’t anticipated at design time.
  • Social computing functionality, such as tagging.
  • Better integration with SOA and ESB.
  • Enterprise-scale process execution and management to allow for end-to-end cross-departmental processes rather than the more common departmental BPM implementations that we see today.

We’re in agreement on a lot of the things on this list, and I’m looking forward to seeing how some of these ideas might creep into the product in the future.

BEAParticipate: Advanced Process Modelling

Last session of the morning was Mateo Almenta Recca of BEA and Kunal Shah of Citigroup talking about advanced process modelling — specifically process exceptions — in ALBPM. Exceptions can be either system exceptions, such as a service being unavailable, or business (user-defined) exceptions, such as an account being closed. System exceptions are typically handled through automated retries and/or transaction rollbacks, whereas business exceptions are modelled into the process by the process designer.

Exception handlers can be built into the process at either the individual activity, group (similar to a BPMN transaction) or process level. Exception handlers at an activity appear just as an alternative path out of that activity, although an exception is typically invoked by a timeout or other non-decision activity instead of an explicit decision at that point. Exception handlers at the group level are shown connected to the group wrapper boundary, as in BPMN transaction exceptions, and process exception handlers are visualized as disconnected from the process but on the same model.

All exception handlers can take one of three basic actions: abort the process and go to the end of the process, go back and retry the step that threw the exception, or skip the step that threw the exception and move on to the next step in the process. The back and skip functionality is always at the activity level: an exception at the group level that causes a “back” instruction from the exception handler would return to the specific activity to be retried, not the entire group; “skip” would skip the activity but continue on to any later activities in the same group. This was counter-intuitive for me and I asked specifically about that: I would have expected that a group would be treated more like a BPMN transaction wrapper, such that a retry or skip would apply to the entire group, not the specific activity. Exception handlers can be automatic or manual steps: an automatic exception handler might perform some related transaction rollback in another system before aborting a process, for example, whereas a manual exception handler might allow for data repair before retrying the failed step.
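
To make the three handler outcomes concrete, here’s a minimal sketch of the semantics as I understood them (this is not ALBPM’s API): an engine loop that aborts, retries or skips the failing activity.

```java
import java.util.List;
import java.util.function.Function;

// The three exception-handler outcomes described above.
enum ExceptionAction { ABORT, RETRY, SKIP }

// Stand-in for a process activity; real engines have richer types.
interface Activity {
    void execute() throws Exception;
}

public class MiniEngine {
    // Runs activities in order, delegating failures to a handler that picks
    // one of the three actions. Note that RETRY and SKIP apply to the single
    // failing activity, as clarified in the session, not to its whole group.
    public static void run(List<Activity> activities,
                           Function<Exception, ExceptionAction> handler) {
        for (int i = 0; i < activities.size(); ) {
            try {
                activities.get(i).execute();
                i++; // success: move to the next activity
            } catch (Exception e) {
                switch (handler.apply(e)) {
                    case ABORT: return;     // jump straight to the end of the process
                    case RETRY: break;      // loop again on the same activity
                                            // (a real handler would cap retries)
                    case SKIP:  i++; break; // move on to the next activity
                }
            }
        }
    }
}
```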

They then talked about compensation flows, which seem to match the BPMN meaning of a compensation, in that they reverse a completed activity or group of activities. This isn’t as easy as just rolling back any changed data values in the process instance, since external systems may have been updated that now need to be rolled back to an earlier state, or there may have been non-transactional activities such as sending an email. Compensation flows are used when you can’t use automatic rollback because an activity executed successfully, but they can also be called by exception handlers. Visually, these appear very similar on the process model to exception handlers, in that they can be attached at the activity, group or process level. Since groups can be nested in a process model, the compensation flow for a group will invoke the compensation flows for any groups nested within it as it rolls back the entire flow of the group.
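
The compensation idea maps naturally onto a stack of explicit undo actions executed in reverse order of completion; here’s a hedged sketch of that pattern, with the steps and undo actions invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Each completed step registers an explicit compensating action; unwinding
// runs them newest-first, so nested work is undone before what preceded it,
// matching the nested-group behaviour described above.
public class CompensationScope {
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    public void completed(String step, Runnable compensation) {
        System.out.println("completed: " + step);
        compensations.push(compensation); // most recent on top
    }

    public void compensate() {
        while (!compensations.isEmpty()) {
            compensations.pop().run(); // undo in reverse order of completion
        }
    }

    public static void main(String[] args) {
        CompensationScope scope = new CompensationScope();
        scope.completed("debit account", () -> System.out.println("undo: credit account"));
        scope.completed("notify broker", () -> System.out.println("undo: send correction"));
        scope.compensate(); // prints the undo steps, newest first
    }
}
```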

They finished up with a short Citigroup case study on how they handle trade exceptions in their back-office processes. Although most financial trades are handled straight through with no manual intervention, they handle 2000 trade exceptions each day. From designing a number of similar transactional BPM implementations, I know that there’s huge financial risk if you don’t handle your exceptions in a timely manner: market fluctuations that occur after the trade is accepted and priced but before it’s completed are purely the risk of the financial institution, not the customer, so it’s key to get the exceptions resolved as quickly as possible. Citigroup has implemented this as process-level exception handlers that log the exceptions and pass them on for manual review. In most cases, the exception handling process is just a matter of some manual data repair and the trade is resubmitted to the automated process, although some trades are cancelled from within the exception handler.

BEAParticipate: ALBPM Architectural Overview

I started my day in a session about what’s coming up in future versions of ALBPM; unfortunately, most of the information hasn’t been publicly released, so you’ll have to wait to read about it at a later date. BEA will be holding BPM steering group meetings in July, so if you’re an ALBPM customer and want to get involved in defining future versions of the product, this is your chance. I’d love to sit in on these, although I can’t imagine the size of the NDA that I’d have to sign first.

I’m back listening to Mariano Benitez with an architectural overview of ALBPM for administrators and operators; I think that he’s nervous now based on his reaction to my post yesterday. 🙂

We’re going to cover the components of the ALBPM solutions, the enterprise infrastructure, and the deployment alternatives. I’ll leave out a lot of the technical details since it would only be of interest if you were actually digging into ALBPM at this level in order to plan a deployment, in which case you’re probably in the room with me right now.

In short, an ALBPM project can be made up of many processes, where a process consists of the process flow, points of integration and the presentation layer. Also associated with a process are the roles used by the process (which map to enterprise security), service endpoints, and deployment methods. ALBPM allows you to define the organization as it pertains to the business processes: participants and groups, organizational units, and roles.

Taking a look at the ALBPM enterprise infrastructure, it’s (not surprisingly) three layers:

  • There’s a number of data sources, including a directory repository (which maintains configuration information about the deployment as well as organizational models used both in process definition and for authentication), the engine back-end database for all information on work in progress, historical instance data archived from the work-in-progress database, a real-time BAM data store with one day’s worth of data aggregated for dashboard views, and an OLAP data store for more complete historical analytics. Oracle, MS SQL Server or DB2 can be used for the data sources, and although multiple execution engines don’t require a separate database for each, they do require at least a separate schema. I would assume, however, that you’d tend to split any production engine data stores and the analytics data stores onto separate database servers for performance reasons.
  • In the middle layer is the main process execution engine — the heart of the system — plus a few other services such as data warehousing to load the analytics data stores. There are a number of basic services provided by the engine that are used to execute running process instances; no big surprises here on an architectural level if you’ve seen the process engine of other BPM vendors, although every engine has its specific advantages.
  • Layered above the engine are various web applications, such as the main Workspace UI application that can be used both for processing and monitoring work. Listening to highly-technical engineers talk about user functions is always pretty funny: Benitez refers to the action of processing work as “invoking instance operations”, so you can be sure that he’s not going to be writing any user documentation. To be fair, I used to talk like that when I wrote code, too.

We unfortunately had to rush through the deployment scenarios, but saw that it’s possible to deploy ALBPM either as a standalone BPM box or in a J2EE container (simple or clustered).

BEAParticipate: Tips and Tricks for Successful Deployment

One hour left, and 25% of my battery life. It’s a race to the finish.

Craig Cochrane from BEA’s professional services and Becky Lewis of SAIC finished off the first day with a session on the specific nature of BPM system roll-outs.

Cochrane pointed out some of the critical groundwork to cover in any BPM project: establish goals and key performance indicators, develop strategies for maximizing user adoption, select BPM projects, and prepare and train resources.

He covered several strategies for designing BPM systems, ranging from low-complexity, near out-of-the-box with direct user access to a standard inbox and a minimal amount of integration with other systems; through to fully-orchestrated situations where BPM controls the entire process, requiring significant integration. These often represent different stages in the same BPM project rather than endpoints in different projects: you can think of the low-complexity systems as early versions of what will eventually be a fully-orchestrated system.

Cochrane advocates an iterative development approach: not as extreme as Agile, but breaking the development into much smaller building blocks that can be rolled out incrementally, with user feedback adjusting the requirements along the way. It’s more of a mini-waterfall approach, although that’s obviously a taboo word, involving requirements, design, implementation, testing and project management at each stage. As he goes on to discuss change management, it’s clear that there’s still a lot of the old-style development mindset of use cases and screen mockups at the front end — in reality, we don’t mock up screens any more, we use rapid prototyping tools to create a working prototype, or else we risk delaying development to an unacceptable degree.

Lewis then talked to us about enterprise BPM at SAIC: they have multiple systems that embody parts of business processes (some redundantly due to decentralized IT), but no enterprise-level tool to tie all of them together or enforce consistent roles. They found that the sweet spot for BPM within their organization was processes that are complex, span functional boundaries, and have multiple system interfaces. They did think big and start small: they started on security and other framework components as would be required by future BPM applications, but started with a couple of smaller, low-risk projects. At the same time, they scoped out the high-priority (and higher risk) projects to take on once some of the internal resources were trained, and they’d had a chance to learn about BPM on the starter projects. Their first applications were training request forms plus some of the BPM framework components, and A/P invoice exception handling (now under development).

A big part of their framework vision is around the integration of ALBPM into their existing enterprise portal (built with BEA WebLogic, not ALUI), complete with single sign-on and a common look and feel, and with other technologies such as their Documentum document management system. This required the right balance: they didn’t want to customize so much that they couldn’t easily implement new versions of the core ALBPM product, but they wanted to have a consistency at the presentation layer. They removed some of the standard functionality (like creating custom views) in order to make it easier to support internally.

They also focussed on integrating their centralized Active Directory with ALBPM so that there wasn’t a duplication of effort in maintaining users, groups and roles. Interestingly enough, they created an automated ALBPM process to synchronize Active Directory into the ALBPM users and groups.
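
As an illustration of what such a synchronization involves, here’s a hedged sketch using plain JNDI to enumerate the members of an AD group; the host, DNs and the participant-update step are hypothetical placeholders, and SAIC’s actual implementation runs as an ALBPM process rather than a standalone job.

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Enumerate members of an AD group over LDAP, then (hypothetically) map
// each one to a BPM participant so users and roles are maintained in one place.
public class AdSync {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.SECURITY_PRINCIPAL, "svc-bpm@example.com");
        env.put(Context.SECURITY_CREDENTIALS, System.getenv("AD_PASSWORD"));

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        NamingEnumeration<SearchResult> results = ctx.search(
                "ou=Users,dc=example,dc=com",
                "(memberOf=cn=bpm-users,ou=Groups,dc=example,dc=com)",
                controls);
        while (results.hasMore()) {
            SearchResult user = results.next();
            // Hypothetical step: create or update the matching ALBPM participant.
            System.out.println("sync participant: " + user.getNameInNamespace());
        }
        ctx.close();
    }
}
```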

A key part of their strategy was to create a BPM knowledge repository, which they did using a wiki to capture key findings, evolving standards and best practices. Although they use a template to provide some level of consistency, they found that a wiki provides much more flexibility for knowledge capture than standard document repositories.

She had some useful summary points, like the one about planning for the first project to take more time than you expect, especially if you’re trying to build part of the big-picture framework at the same time. Still, they completed their first project in three months, which is acceptably fast.

Tonight, we’re all off to the ESPN Zone for dinner and entertainment, although I’m still trying to figure out exactly what the ESPN Zone is. I realize that sports-themed extracurricular events are the price that I pay for going to the type of conferences where there’s no lineup in the women’s restrooms.

BEAParticipate: Best Practices for Succeeding with BPM

I’m jumping around between tracks (and hence rooms): I started the afternoon in the ALUI Experience track, then moved on to the ALBPM Technical/Developer track, and now I’m in the ALBPM Experience track for a discussion of best practices for managing BPM projects with Dan Atwood of BEA (another former Fuego employee) and Karl Djernal of Citigroup. It’s a bit difficult to pick and choose sessions when you’re interested in multiple tracks: this session and the one after it in the same track are both around best practices, although they appear to cover different aspects of BPM implementations, and I’d like to sit through both. This one is categorized as “beginner” and the next as “intermediate”, so I’m hoping that someone’s actually checked to ensure that there’s not too much overlap between them. I’d also like to see the next technical track on how BPM and ESB work together, but think that I can probably get a briefing on that directly from BEA as required.

Atwood started the session with seven key practices for BPM success:

  1. Fundamentals of process-based competition: understanding the competitive advantage of being a process-oriented company, and the business case for BPM.
  2. BPM and its value to the corporation: understanding what BPM is and how it differs from other management and technology approaches.
  3. From functional management to process-oriented thinking: how the shift from functional management must permeate through the ranks of middle management in order to disperse the fiefdoms within an organization.
  4. Getting hands-on BPM experience, with the help of mentors.
  5. Foundations for process practitioners: BPM as the capability for implementing and extending other management practices such as Six Sigma.
  6. Business process modelling and methods: learn about process-oriented architectures and development methods, and how they differ from traditional approaches.
  7. Human interactions and their roles within BPM: while system-to-system automation is often a BPM focus, the human-facing parts of the process are critical. In other words, you can’t think of these as being “human-interrupted” processes, as a customer of mine did long ago.

Obviously a big fan of BPM books, Atwood references Peter Fingar, Howard Smith, Andrew Spanyi, John Jeston, Mike Jacka, Paulette Kellerin and Keith Harrison-Broninski, as well as a raft of BPM-related sites (although not, unfortunately, www.column2.com). Also a fan of lists, he finishes up with his top five success factors:

  • Executive sponsorship
  • Correct scoping
  • Start with the end in mind
  • Framework
  • Engage stakeholders

Hmmm, that seems to make 12 best practices in total…

Djernal then discussed the Agile methodology that they used for BPM implementation at Citigroup, starting with a description of Agile and Scrum as the anti-waterfall approach: providing incremental deliveries based on changing, just-in-time requirements, and involving the end users closely during the development cycle to provide feedback on each iteration. Just as important as the delivery mechanisms is the Agile team structure: the team’s not managed in the traditional sense, but works closely with the customer/end-user to create what they want. There’s a 15-minute team meeting every day, and a delivery (sprint) every 30 days. Many teams vary the sprint length slightly while sticking to the Agile methodology, although there’s danger in increasing it too much and slipping back to months-long delivery cycles. Initiated by the original prioritized set of product features, the user feedback on each iteration can impact both the features and the priorities. There are basically three roles in Agile: a product owner who represents the stakeholders, the team that implements everything, and the ScrumMaster who provides mentoring on the Agile process and helps to sort out external roadblocks.

The interesting thing is how they brought together BPM and Agile, since I’m convinced that these are two things that belong together. Process diagrams fill in a lot of the documentation gap and are a naturally agile form of creating a functional specification; they form a good basis for communication between the business and IT. Changes in requirements that cause changes to the business process can be done easily in a graphical process modelling environment. In fact, in many BPM environments, the processes can be prototyped and an initial executable version developed in a matter of days without writing any code, which in turn helps to set priorities on the functions that do require coding, such as developing web services wrappers around legacy systems.

They’ve learned some things from their experiences so far:

  • Get training on using the BPM products, and on BPM in general.
  • Use some external resources (like me) to help you get started.
  • Since BPM involves integration, setting up the development, testing and production environments can be time-consuming and require specialized resources.
  • Spend some time up front putting together a good test environment, including automated testing tools.
  • Create a centre of excellence for BPM.
  • Start something small for your first BPM project.

There are a lot of arguments that Agile can’t really handle large-scale development projects, but it’s my belief that most BPM projects lend themselves well to Agile. The worst disasters that I’ve seen in BPM implementation have been the product of the most non-Agile development processes imaginable, with months of requirements writing followed by many more months of development, all of which resulted in something that didn’t match the users’ requirements and was much too costly to change. As I’ve said many times before, if you can’t get something up and running in BPM in a matter of a couple of months, then you’re doing something really wrong.