Fujitsu Interstage BPM V11

I had a briefing this week on Fujitsu’s just-released Interstage BPM version 11 as well as an update on their cloud platform. I’ll cover the cloud platform in another blog post, since this one is getting a bit long.

[Screenshot: collaboration within a structured process]
Version 11 has a lot of new features for handling ad hoc, collaborative, knowledge-intensive work; this isn’t surprising, since the analysts and many of the vendors have woken up to the fact that not all processes (or all parts of all processes) are structured, and sometimes people need to be able to create their own processes or just find the right person to send a task to. In fact, Fujitsu, like many others, considers that the bulk of the processes done today are ad hoc, collaborative and knowledge-intensive, with a much smaller portion structured people-centric work, and an even smaller portion purely automated system-centric processes.

Fujitsu is calling this “sense and respond”, where the “sense” part is about finding the right person for a task, and “respond” is about being able to dynamically create an ad hoc subtask. There’s a lot in the “sense” part that I haven’t seen in other products, such as making recommendations/selections of a person to perform a task based on their past performance at that task; this reminds me somewhat of the research that Ben Jennings is doing on establishing reputation within a social network by examining past behaviors, as opposed to just making assignments based on a predefined skills matrix or assigning tasks to people you know. In addition to past performance, it also takes into account future tasks assigned to people in order to predict workload, and makes recommendations on due dates based on historical data.
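The recommendation idea described here – past performance on a task type, penalized by predicted workload – can be sketched as a simple scoring function. This is purely illustrative: the field names, `recommend` function and the 0.1 weighting are my assumptions, not Fujitsu’s actual algorithm.

```python
# Hypothetical sketch of the "sense" recommendation: rank candidate assignees
# by on-time completion rate for this task type, penalized by the number of
# tasks already queued against them. All names and weights are invented.
def recommend(candidates):
    def score(c):
        on_time_rate = c["on_time"] / c["completed"] if c["completed"] else 0.0
        return on_time_rate - 0.1 * c["queued_tasks"]  # workload penalty
    return max(candidates, key=score)

candidates = [
    {"name": "alice", "completed": 40, "on_time": 38, "queued_tasks": 9},  # strong, but busy
    {"name": "bob", "completed": 25, "on_time": 22, "queued_tasks": 2},    # slightly weaker, available
]
best = recommend(candidates)  # bob wins: nearly as reliable, far less loaded
```

The point of the sketch is the trade-off: a marginally better historical performer can still be a worse assignment choice once predicted workload is taken into account.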

[Screenshot: creating a subtask]
The key functionality for what Fujitsu is calling “dynamic BPM” is the ability for a process participant to add subtasks at a point in the process, or to create an entirely new process by specifying the tasks involved. This allows a process participant to stretch the process to fit their needs by creating one or more subtasks from any task that is assigned to that user, specifying a task name and description, assigning it to one or more users, and specifying a priority and due date. Control is passed to the subtask(s), then returned to the calling task when all subtasks are completed, after which the process can continue on its previously defined structured path. The status for each subtask is shown along with the task status, which provides the necessary transparency and auditing: the big problem with the way that ad hoc tasks are done now is that users typically just send an email, or make a phone call, in order to involve another person, and that deviation from the structured process is never captured.
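The control-flow semantics just described – subtasks run in parallel with no routing between them, and the calling task resumes only when every subtask is complete – can be sketched as a simple join. The `Subtask` class and function names are illustrative, not Fujitsu’s API.

```python
# Sketch of the "wait for all subtasks" join semantics described above.
# Not Fujitsu's API; Subtask and run_with_subtasks are invented names.
from concurrent.futures import ThreadPoolExecutor, wait
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    assignee: str
    done: bool = False

    def complete(self):
        self.done = True

def run_with_subtasks(subtasks):
    # Subtasks execute in parallel; the calling task blocks on all of them.
    with ThreadPoolExecutor() as pool:
        wait([pool.submit(st.complete) for st in subtasks])
    # Only once every subtask is done may the structured process continue.
    return all(st.done for st in subtasks)

subtasks = [Subtask("gather documents", "alice"), Subtask("review", "bob")]
resumed = run_with_subtasks(subtasks)
```

The essential property is the barrier: no matter how many subtasks are spawned, or how they are further subdivided, the structured path does not advance until the whole ad hoc block has finished.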

A user can also create an entirely new process dynamically: they just give it a name, description, priority and due date, then add subtasks to that process in the same manner as adding an ad hoc subtask to a structured process. There is no routing or flow management, however, in either dynamic task creation scenario: subtasks are independent from each other and run in parallel, and the calling task (or dynamic process) waits for all subtasks to complete before proceeding. The recipient of a subtask can further divide it into more subtasks, and assign them as they see fit. The expected use case for a completely dynamic process, then, is for one person to create subtasks for the high-level activities and assign them, then have the recipients of those subtasks create their own subtasks required to complete the block of work assigned to them. If you’re in an environment where the activities don’t have dependencies, this works well; however, if there are dependencies between the subtasks, they would have to be coordinated manually.
[Screenshot: process outline tool for simple flow control]

If you need more flow control in the processes, you can step up to the Process Outline tool, intended for non-technical process analysts and business users. This shows the tasks in a tabular representation with timelines, and allows the creation of dependencies between the tasks. It wasn’t clear, however, what degree of control is offered here, or how it interoperates with the simpler subtask creation method.

[Screenshot: changing the sensitivity to show only the more frequently traversed paths]
The really cool thing, however, is what happens behind the scenes with these dynamic processes during execution: the automated discovery engine, which is now part of the analytics, tracks all the ad hoc subtasks, and can make suggestions on improving the process based on how the process was actually executed, including the user-created subtasks, rather than how it was originally designed. Just as with the desktop application, this bit of Flash allows you to view how many times each path was traversed in the process, and dial it back so that only the most common paths are shown. I think that Fujitsu has done some very interesting things with their process discovery tool – which they can use on the system logs of pretty much any system, not just a BPM system – and it’s a natural fit integrated into their BPM suite. Working together with the dynamic subtask creation, this allows you to see how a process really executes, rather than how your process analyst thinks that it works.
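The “dial it back” behavior – counting path traversals from execution logs and filtering out rare ones – can be sketched in a few lines. The trace format here is invented for illustration; it is not Fujitsu’s log schema.

```python
# Sketch of frequency filtering in process discovery: count how often each
# transition between activities appears in execution traces, then show only
# edges above a threshold. The trace/log format is invented.
from collections import Counter

def discover_edges(traces):
    edges = Counter()
    for trace in traces:
        edges.update(zip(trace, trace[1:]))  # consecutive activity pairs
    return edges

def frequent_paths(edges, min_count):
    return {edge: n for edge, n in edges.items() if n >= min_count}

traces = [
    ["receive", "review", "approve"],
    ["receive", "review", "approve"],
    ["receive", "review", "escalate", "approve"],  # an ad hoc detour
]
edges = discover_edges(traces)
common = frequent_paths(edges, 2)  # raising the threshold hides the rare escalation path
```

Raising `min_count` is the equivalent of turning down the sensitivity dial: the rarely traversed, often ad hoc, paths drop out of view, leaving the dominant flow.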

There are some other collaborative features that have been highlighted in this version: discussion threads on process instances (really just a nicely-formatted comment feature, and it would be nice to add tags here to allow for searching the history based on the text within process instance discussions), and wiki pages within the community to allow process documentation. The community portal pages can also link to external portals such as MyYahoo, and incorporate a feed such as a Twitter stream. Users can also get an RSS feed of their tasks, which allows them to consume them in a different interface, if they don’t want to use the Interstage BPM portal.

A few other vendors are starting to think about processes as projects, and Fujitsu has added some of this to Interstage as well, by allowing a process to be viewed as phases and milestones – although not, from what I saw, in a standard Gantt chart representation that allows easy visualization of the critical path – then seeing which milestones were met or missed.

They’ve added some new dashboard and analytics features, too, but the big win for Fujitsu in this version is the combination of ad hoc task creation and automated process discovery.

21st Century Government with BPM and BRM #brf

Bill Craig, a consultant with Service Alberta, discussed their journey with process and rules to create agile, business-controlled automation for land titles (and, in the future, other service areas such as motor vehicle licensing) in the province of Alberta. They take an enterprise architecture approach, and like to show alignment and traceability through the different levels of business and technology architecture. They had a number of mainframe-based legacy applications, and this project was driven initially by legacy renewal – mostly rewriting the legacy code on new platforms, but still with a lot of code – but quickly turned to the use of model-driven development for both processes and rules, in order to greatly reduce the amount of code (which just creates new legacy code) and to put more control in the hands of the business.

They see 21st century government as having the following characteristics:

  • customer service focus
  • business centric
  • aligned
  • agile
  • assurance
  • managed and controlled
  • architected (enterprise and solution)
  • focused on knowledge capture and retention
  • collaborative and integrative
  • managed business rules and business processes

BPM and BRM have been the two biggest technology contributors to their transformation, with BRM the leader because of the number of rules that they have dealing with land titles; they’ve also introduced SOA, BI, BAM, EA, KM and open standards.

In spite of their desire to be agile, it seems like they’re using quite a waterfall-style design; this is the government, however, so that’s probably inevitable. They ended up with Corticon for rules and Global 360 for process, fully integrated so that the rules were called from tasks in their processes (which for some reason required the purchase of an existing “Corticon Integration Task” component from Global 360 – not sure why this isn’t done with web services). He got way down in the weeds with technical details – although relevant to the project, not so much to this audience – then crammed a description of the actual business usage into two minutes.

One interesting point: he said that they tried doing automated rules extraction from their mainframe applications to load into Corticon, but the automated extraction found mostly navigation rules rather than business rules, so they gave up on it. It would be interesting to know what sort of systems that automated rule extraction works well on, since this would be a huge help with similar legacy modernization initiatives.

Collecting, Connecting and Correcting the BPM Dots #brf

Roger Burlton, who organized the BPM track here, gave a presentation this afternoon on process discovery techniques that fit well with Kathy Long’s previous presentation on process notations. He looked at different levels of BPM (and therefore of models): enterprise, business process, and implementation. Most of the BPM models done at the enterprise level are for the purposes of enterprise architecture and high-level strategy; those at the business process level may be for documentation and optimization whether or not the processes are ever automated; and those at the implementation level are primarily for automation purposes. Some of the collect-connect-correct techniques can be reused across these levels, allowing for easier alignment between the different levels:

  • Collect:
    • Agree on our intent – get the same motivation
    • Find out who cares
    • Discover the truth
    • Measure real performance
  • Connect:
    • Draw pictures and communicate
    • Question why
  • Correct:
    • Make it better
    • Check it out
    • Get to yes
    • Launch and learn
    • Deal with worries

He went through each of these in detail, pointing out what information you need to gather at each point, and how this applies at each of the levels. Great presentation, tons of information, although I captured very little of it here due to end-of-day blogger burnout.

That’s it for the first day of Business Rules Forum; I’ll be here the next two days as well. Tomorrow, I can just sit in on presentations, but Thursday I’m back to work, facilitating a peer-to-peer workshop on BPM in the cloud over breakfast and sitting on a panel on emerging trends at the end of the day.

Process Notations #brf

The pool at the Bellagio was a big draw, but I’ve kept on track for this afternoon’s presentations, starting with Kathy Long on process notations. She spoke about the necessity of documenting processes, as well as the levels to which processes should be documented. Documenting the current process should only be done down to a certain level; below that, it’s more likely to be an indeterminate or changeable set of tasks that aren’t even correct.

She proposes a much simpler, higher-level process model that’s a lot like IDEF0, but using Inputs, Guides, Outputs and Enablers (IGOE) instead:

  • Input: something that is consumed by or transformed by an activity/process
  • Guide: something that determines why, how or when an activity/process occurs but is not consumed
  • Output: something that is produced by or results from an activity/process
  • Enabler: something (person, facility, system, tools, equipment, asset or other resource) utilized to perform the activity/process

She looked at some of the problems with other modeling formats; for example, BPMN is easy to learn and communicate and shows cross-functional processes and roles, but multiple process involvement is difficult to model, and it’s hard to follow decision threads: they end up more as system flows than actual business process models.

She touched on a lot of points for making process models accurate and relevant, such as levels of decomposition, and not modeling events and rules as activities; these are things that tend to happen in BPMN swimlane diagrams, but not in IGOE models. A lot of this, in fact, is about making the distinction between events and activities; there’s some confusion about this in the audience, too, although most often what is shown as an activity (box) on a swimlane diagram should actually just be a line between activities, e.g., instead of adding an activity called “send to Accounting”, you should just have a line from the previous activity to the new activity in the Accounting swimlane. Her BPMN is a bit rusty, perhaps, because an event would not be modeled as an activity, it would be modeled as an event; instead, she showed a customer example where she used a stoplight icon to indicate an event, although there is an event icon available in BPMN.

Regardless of the notation, however, there are things that you need to consider:

  • Understand why you’re modeling processes: documentation, understanding, communication, process optimization.
  • Simplify the models by removing events and decisions
  • Understand the goals in order to set the focus – and determine the critical path – for the process

I’m not sure that I agree with all of what she states about modeling; much of the fault that she finds with BPMN is not about BPMN, but about bad instances of BPMN or bad tools. She has one really valid point, however: most process models created today are just wallpaper, not something that is actually useful for process documentation and optimization.

This is the third year that I’ve heard her speak at BRF, and the message hasn’t changed much from last year or the year before, including the core examples, so it could use a refresh. Also, I think that she needs to get a bit more updated on some of the technology that touches on process models: she sees the business doing process modeling, then handing it over to IT for implementation (which doesn’t really account for model-driven development), and speaks only fleetingly of “workflow” systems. I realize that many process models are never slated for automation, but more and more are, and the process modeling needs to account for that.

BPM, Collaboration and Social Networking

Although social software and BPM is an underlying theme in a lot of the presentations that I give, today at the Business Rules Forum is the first time in more than three years that I’ve been able to focus exclusively on that topic in a presentation. Here are the slides, and a list of the references that I used:

References:

There are many other references in this field; feel free to add your favorites in the comments section.

BPM Customer Panel #appianforum

The first day of Appian Forum ended with a panel of Appian customers – Archstone, AGF, Enterprise Rent-A-Car and Mercer Outsourcing – hosted by Clay Richardson of Forrester. Clay started with a question about which BPM project to do first: instead of the old “start small, think big, act fast” mantra, many organizations are choosing to start with a bigger project where they’re experiencing a lot of pain. Not, however, the organizations represented on the panel: they all indicated that they either started with a smaller project, or started with a big one and regretted it. I think that the key is balance: select a big enough project to be meaningful and use an iterative approach so that you don’t get swamped by it.

The discussion continued on to include data integrity/cleansing and return on investment, and the audience chimed in with questions on testing BPM applications to ensure correctness (getting a working system in front of the users earlier for validation and testing helps, as do frequent releases), production support (often done by original project team, which cuts into time for new development and CoE activities, but ideally project team is second line of support and leverages shared services support for underlying server and network infrastructure) and business change management/buy-in (requires communication, participation and vision). I think that my presentation on BPM centers of excellence that immediately preceded this had an impact: a couple of the questions directly referenced what I was talking about, particularly in the last question on process asset reusability across projects (difficult unless there is a CoE that manages an asset repository or otherwise governs reusability).

My job here is done: tomorrow is a more in-depth day for customer product training, so I’m headed back to Toronto tonight.

Benenden Healthcare Society BPM case study #appianforum

Ian Grant presented the experiences of Benenden Healthcare Society, a UK not-for-profit, user-pay healthcare provider with almost a million members. They had a need to improve their business agility, and identified that they needed a new case management system as well as better auditability of the decisions made within processes. They reengineered their processes first, then had their three short-listed vendors build out those processes to see how quickly (and how well) it could be done; Appian was the unanimous choice of the selection panel.

They created Service Management System (SMS) to document and manage all interactions with members. Typically, when a member calls in, the service rep accesses all previous case data for this member, gathers some information about what is wrong with the member in layman’s terms – so that the service rep doesn’t have to be a clinical expert – then the rules and processes built into SMS present the services available to that member, and generates the necessary paperwork and follow-on processes. When they go live (soon), they expect to reduce their service rep training time from months to three weeks, and improve their customer satisfaction rating from 95% to 99%. They’ve created some lightly-customized user interfaces that allow for fast information gathering and problem resolution.

During implementation, they completely ignored the current state, and only considered the to-be processes and functionality. Although they had a waterfall requirements process up front with formal signoff, they moved into a more iterative prototype development cycle (although one that seems to have taken a year, so not so agile).

They’ve already achieved two million GBP in savings through renegotiations with their service providers based on their expected future-state process, as well as seeing some improvements across all processes. This is fairly common, since the act of examining and reengineering a business process almost always has the effect of improving it since many of the inefficiencies will be exposed and resolved even before any technology is brought to bear on the processes.

They have involved 85% of the entire user base in some way in the creation of the new system, which has resulted in a high degree of user buy-in since they developed the requirements themselves. They’ve already identified their requirements for the next phase, and are creating the development plan now to deliver that by the end of next year.

I’m left with the impression that there is still a lot of waterfall methodology at Benenden; whether that hinders their efforts will be seen once they’ve rolled out the first version.

Appian 6 Release #appianforum

Malcolm Ross was up next to give us an update on Appian 6, being released in GA this week. I had a briefing a few weeks back, so I’ll include my notes from that here for a more complete view.

[Screenshot: Appian 6 application marketplace]
Their claim is that Appian 6 is the fastest way to deploy process applications, through rapid design and collaboration, rapid deployment, and rapid process improvement cycles; they claim that they can complete a production pilot before the big BPM vendors can install their product (I think that they could have the pilot complete before the big guys could sign a contract, but that’s another story). In a nice illustration, one of the Appian tech guys installed and configured Appian 6 on another screen while Malcolm was giving his 30-minute presentation, including deploying an application with process models, forms, rules and reports.

They have some unique technology differentiators to support their speed claims: an integrated portal for creating composite applications and zero-code model-driven design for implementation speed; in-memory architecture for execution speed; easy import and export of applications between Appian systems and the Appian Forum online community using a marketplace paradigm; and seamless migration between their SaaS and on-premise solutions for scalability or changing requirements. To support that, they have a services team and methodology with a CMM-like maturity model built in, including a center of excellence for sharing best practices.

[Screenshot: Appian 6 composite app including the ubiquitous Google map]
There have been a number of improvements to the end user interface: intuitive URLs for navigating directly to specific applications, collaborative discussion forums, and realtime user presence. As we heard earlier, the UI has been simplified with tabs across the top to access different applications and areas; in general, there is a lot more glue to pull together the components into complete applications. The portal allows for mashups to be created not just of Appian components and applications, but of other widgets using JSR168 and WSRP, and an application can include different composite interfaces for different roles: in my previous briefing, I saw an application that included different user interfaces for a loan representative, IT staff member, and IT manager, displaying the same data in a different manner depending on the role. Controls to edit the dashboard and create ad hoc reports can be exposed to specific user roles so that they can modify their own working environment; other roles are limited to what the application designer provides to them. The key thing about a composite application built in this environment is that it is task-driven: the process is baked right into the application.

One of the things that I like about this release is the ease of packaging, deploying and exchanging applications. An entire application, including all of its components such as processes and rules, can be exported as XML; this can be managed in a source code control system, or imported into another Appian system while maintaining unique IDs for the components across all systems. This allows applications to be easily moved among the Appian Forum marketplace, an on-premise Appian system and a SaaS Appian instance.
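The round-trip idea – an application serialized as XML with stable component IDs, so a component keeps its identity when imported into another system – can be sketched generically. The element names here are invented for illustration; this is not Appian’s actual export schema.

```python
# Sketch of an export/import round trip for an application package:
# components carry stable IDs, so the same component is recognized as the
# same thing on any system it is imported into. Schema is invented.
import xml.etree.ElementTree as ET

def export_app(name, components):
    app = ET.Element("application", name=name)
    for comp_id, (kind, label) in components.items():
        ET.SubElement(app, "component", id=comp_id, type=kind, label=label)
    return ET.tostring(app, encoding="unicode")  # suitable for source control

def import_app(xml_text):
    app = ET.fromstring(xml_text)
    return {c.get("id"): (c.get("type"), c.get("label"))
            for c in app.findall("component")}

components = {"proc-001": ("process", "Loan Approval"),
              "rule-002": ("rule", "Credit Check")}
roundtrip = import_app(export_app("loans", components))
```

Because the IDs survive the round trip, an import can update an existing component in place rather than creating a duplicate, which is what makes marketplace, on-premise and SaaS instances interchangeable deployment targets.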

Clayton Holdings BPM Case Study #appianforum

Clayton Holdings, which provides risk analysis, loss mitigation and operational solutions to the mortgage industry, has been using Appian’s SaaS solution, Appian Anywhere, for more than a year, and John Cowles from Clayton was here to tell us about their experiences. They have 135 users across three business units, with another business unit coming online soon, kicking off 40,000 process instances per month across 50 different process models. They’re doing all of the build and maintenance with two primary resources; considering that their first roll-out only took about six weeks, they’re doing a lot quickly without a lot of resources.

They had a number of business challenges, many of them triggered by the meltdown of their financial/mortgage client base that reduced the amount of work that they had and called for tighter controls. They didn’t have a lot of visibility into their processes and metrics, and many of their key processes were manual; typical training time for the business processes was about six months, yet they had a high attrition rate that meant that people were leaving just as they became capable at the processes. With little internal IT bandwidth and slashed budgets, they decided on a SaaS solution to allow them to try out BPM without a lot of up-front costs or IT efforts.

They had some specific goals for their BPM implementation, particularly around having process visibility (and auditability) and reducing training time, plus reducing process variability by making decisions based on metrics. Their initial project team was the EVP of business operations, about eight subject matter experts, two process efficiency team members and one business analyst.

They do monthly releases with new or modified process models or UI enhancements; most processes are kicked off using web service calls driven by exceptions from Clayton’s internal systems, although they don’t integrate from Appian process instances back to the internal systems. Users can also instantiate processes manually from their dashboard as required, but most are created from the nightly batch of web service calls.
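The integration pattern described here – a nightly batch that turns exceptions from internal systems into process instances via web service calls – can be sketched as follows. The `start_process` function stands in for the real (SOAP or REST) endpoint, and the exception record format is my invention, not Clayton’s actual schema.

```python
# Sketch of exception-driven process instantiation: a nightly batch takes
# exceptions detected in internal systems and kicks off a process instance
# for each one via a web service call. start_process is a stand-in for the
# real BPM endpoint; field names are invented.
def start_process(model, payload):
    # In the real deployment this would be a web service call to the BPM suite.
    return {"model": model, "payload": payload, "status": "started"}

def nightly_batch(exceptions):
    return [start_process(e["process_model"], e) for e in exceptions]

exceptions = [
    {"process_model": "delinquency-review", "loan_id": "L-1001"},
    {"process_model": "delinquency-review", "loan_id": "L-1002"},
]
instances = nightly_batch(exceptions)  # one process instance per exception
```

The notable design choice is that the integration is one-way: the internal systems push work into the BPM layer, but as noted above, process instances don’t write back, so any resulting updates are still rekeyed manually.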

They see Appian Anywhere as a platform for building applications, and hope to replace some of their traditional development with assembly of components into applications using Appian.

Some of their benefits: 38% less headcount in spite of an increased workload to manage delinquencies, 100% more average value adds (e.g., where they detect a previously-overlooked revenue opportunity for their customers such as a penalty payment) per FTE, and the ability to shift the workload to geographic areas with lower costs because it’s all in the cloud. They have much better process monitoring, including reporting on their key metrics, and because of that have identified other process improvement opportunities.

Their lessons learned and best practices:

  • Focus on change management and process management early
  • Find net promoters and over-communicate rather than under-communicate
  • Limited or no system integration in first releases
  • Prototype everything
  • Frequent releases, e.g., monthly
  • Challenge the desire to simply push current variability into the new tool, i.e., don’t just pave the cowpaths
  • Emphasize the reporting desires up front since it influences design
  • Resist temptation to start at detailed level of a process

In the future, they plan to bring in another business unit and focus on integrating Appian with internal systems in order to reduce manual rekeying of data between systems. They’re also going to look at some internal processes, such as HR and Legal.

Appian Corporate Update #appianforum

Matt Calkins gave us a brief address at the customer dinner last night, but there are many more people here today, and he provided a more in-depth review of the corporate picture. Amongst other indicators are a revenue increase of 150% and active customer increase of 58% in 2009: I’m seeing numbers like this from many of the midsized BPMS vendors, supporting my impression that the BPM market continues strong even in the face of an economic downturn.

Their new corporate slogan is “BPM Accelerated”, referring to both speed of creation and operational speed. Speed to create results in quick ROI and reduced risk while satisfying constituencies; speed to operate results in customer satisfaction and a better cost structure, and enables the opportunity to adapt to changing conditions at operational tempo. Their new professional services offerings “Live in 10” and “Live in 20” – meaning a fully operational production system in 10 or 20 days – support their goal of implementation speed.

Appian is creating a new BPM implementation methodology based on the idea that great processes evolve, they’re not invented: the ability to gradually change a process in order to optimize it is a key factor. I completely agree with this very Agile tenet: if you can’t change your processes gradually over the first few months of operation, they will be unable to properly support your business.

He highlighted some of the new features in Appian 6, such as an application focus both in user interface and deployment. He also emphasized the benefits of their real-time architecture, that allows for subsecond response time for process data, rules and reports from the instance data stored in Appian’s proprietary database combined with the full business data in a relational database. They’ve taken a page from Google’s book and made their UI as minimalist as possible, displaying only the features that the user really needs, in order to make BPM as easy to use as email.

The old Appian Access online community has been rebranded as Appian Forum, and expanded to include a library of free applications (created by Appian, partners and customers) with a starting point of 25 applications contributed by Appian based on customer requests: again, speeding time to implementation for these types of processes.