WebSphere BPM Product Portfolio Technical Update

The keynote sessions this morning were typical “big conference”: too much loud music, and too many comedians and irrelevant speakers for my taste, although the brief addresses by Steve Mills and Craig Hayman as well as this morning’s press release showed that process is definitely high on IBM’s mind. The breakout session that I attended following that, however, contained more of the specifics about what’s happening with IBM WebSphere BPM. This is a portfolio of products – in some cases, not yet really integrated – including Process Server and Lombardi.

Some of the new features:

  • A whole bunch of infrastructure stuff such as clustering for simple/POC environments
  • WS CloudBurst Appliance supports Process Server Hypervisor Edition for fast, repeatable deployments
  • Database configuration tools to help simplify creation and configuration of databases, rather than requiring back and forth with a DBA, as previous versions did
  • Business Space has some enhancements, and is being positioned as the “Web 2.0 interface into BPM” (a message that they should probably pass on to GBS)
  • A number of new and updated widgets for Business Space and Lotus Mashups
  • UI integration between Business Space and WS Portal
  • Webform Server removes the need for a client form viewer on each desktop in order to interact with Lotus Forms – this is huge in cases where forms are used as a UI for BPM participant tasks
  • Version migration tools
  • BPMN 2.0 support, using different levels/subclasses of the language in different tools
  • Enhancements to WS Business Modeler (including the BPMN 2.0 support), including team support, and new constructs including case and compensation
  • Parallel routing tasks in WPS (amazing that they existed this long without that, but an artifact of the BPEL base)
  • Improved monitoring support in WS Business Monitor for ad hoc human tasks
  • Work baskets for human workflow in WPS, allowing for runtime reallocation of tasks – I’m definitely interested in more details on this
  • The ability to add business categories to tasks in WPS to allow for easier searching and sorting of human tasks; these can be assigned at design time or runtime
  • Instance migration to move long-running process instances to a new process schema
  • A lot of technical implementation enhancements, such as new WESB primitives and improvements to the developer environment, that likely meant a lot to the WebSphere experts in the room (which I’m not)
  • Allowing Business Monitor to better monitor BPEL processes
  • Industry accelerators (previously known as industry content packs) that include capability models, process flows, service interfaces, business vocabulary, data models, dashboards and solution templates – note that these are across seven different products, not some sort of all-in-one solution
  • WAS and BPM performance enhancements enabling scalability
  • WS Lombardi Edition: not sure what’s really new here except for the bluewashing
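The work baskets and business categories above are worth a closer look, since together they cover both runtime reallocation and searching/sorting of human tasks. As a rough sketch of the idea – hypothetical names and structures for illustration only, not the actual WPS API:

```python
from dataclasses import dataclass, field

@dataclass
class HumanTask:
    task_id: str
    categories: set = field(default_factory=set)  # assignable at design time or runtime

@dataclass
class WorkBasket:
    name: str
    tasks: list = field(default_factory=list)

    def reallocate(self, task, target_basket):
        # runtime reallocation: move a task from this basket to another
        self.tasks.remove(task)
        target_basket.tasks.append(task)

def find_by_category(baskets, category):
    # search human tasks across all baskets by business category
    return [t for b in baskets for t in b.tasks if category in t.categories]

claims = WorkBasket("claims")
review = WorkBasket("review")
task = HumanTask("T-1", {"insurance", "priority"})
claims.tasks.append(task)
claims.reallocate(task, review)          # reassign at runtime
found = find_by_category([claims, review], "priority")
```

The point of the sketch is that categories are orthogonal to basket assignment: a task keeps its business categories no matter where it gets reallocated, which is what makes runtime searching and sorting straightforward.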

I’m still fighting with the attendee site to get a copy of the presentation, so I’m sure that I’ve missed things here, but I have some roundtable and one-on-one sessions later today and tomorrow that should clarify things further. Looking at the breakout sessions for the rest of the day, I’m definitely going to have to clone myself in order to attend everything that looks interesting.

In terms of the WPS enhancements, many of the things that we saw in this session seem to be starting to bring WebSphere BPM level with other full BPM suites: it’s definitely expanding beyond being just a BPEL-based orchestration tool to include full support for human tasks and long-running processes. The question lurking in my mind, of course, is what happens to FileNet P8 BPM and WS Lombardi (formerly TeamWorks) as mainstream BPM engines if WPS can do it all in the future? Given that my recommendation at the time of the FileNet acquisition was to rip out BPM and move it over to the WebSphere portfolio, and the spirited response that I had recently to a post about customers not wanting 3 BPMSs, I definitely believe that more BPM product consolidation is required in this portfolio.

More BPM Acquisitions: Progress Buys Savvion

BPM acquisitions must be in the air: today, Progress Software announced that they’ve bought Savvion for $49M. This is hot on the heels of IBM’s announcement last month that they’re buying Lombardi, with one huge difference being that Progress doesn’t already have a BPM product in their lineup, whereas IBM has two. Of the three mid-range BPMS-only vendors that I would most commonly name – Appian, Lombardi and Savvion – that’s two of the three announcing acquisitions in less than a month. With the economy just starting to pull out of a huge pit, that’s telling news: as I mentioned in my post about Lombardi, if the economic climate were different, these would be IPOs that we’d be seeing rather than acquisitions. These acquisitions by larger companies, however, change the BPM market landscape pretty significantly, since this makes it much easier for Lombardi and Savvion (under the IBM and Progress banners, respectively) to get a foot in the door of larger customers who rely on their major vendors to bring them enterprise solutions, rather than considering a smaller company. One advantage that Progress/Savvion have at this point in time is that the acquisition is actually closing today (or later this week), whereas IBM/Lombardi went the pre-acquisition announcement route, and will endure several months of limbo before the deal closes. [Update: I’ve received a few tweets and emails indicating that the IBM/Lombardi close will happen very soon, possibly around February 1st, although I haven’t heard a final date. My “several months” was based on past experience.]

I had an early morning call with Dr. John Bates (CTO of Progress) and Dr. Ketabchi (CEO of Savvion), but a few people obviously had earlier time slots: Neil Ward-Dutton has already posted his initial thoughts, as has Jason Stamper. I agree with Neil that this is a smart move for Progress: a good fit of products with minimal overlap, directly addressing some of the challenges that they’re hearing from their customers in terms of achieving operational responsiveness. The existing suite of Progress products allows for determining what happened within an organization – a rear-view mirror approach – but not much that allows the organization to quickly change how they’re doing things in order to drive efficiency or respond to changing conditions. Bringing BPM into the fold allows them to change that, primarily through tying Progress’ Apama CEP with Savvion BPM, but also by leveraging the rest of the Progress SOA and ESB infrastructure, including data and application integration.

Savvion’s had a couple of internal shakeups in the past two years: in early 2008, Savvion axed contractors, most of their marketing department and some salespeople, ostensibly in order to shift towards a solution focus, although at the time I said that they could be positioning themselves for acquisition. They’ve had a strong push on their vertical solutions since that time, wherein they develop frameworks for vertical applications, then allow partners – or even customers – to build solutions on those common frameworks.

Like many BPM vendors, Savvion has often sold to the technology side of organizations but has shifted focus to the business side recently. Progress still offers a very technology-focused set of tools, so it will be interesting to see how well they can bring together the different marketing messages. In my conversation with him this morning, John Bates said that they’re moving towards more of a solutions-oriented approach rather than product-oriented: although this is an easier sell to the business side, it can be used to mask a number of disparate products being clumped together without much natural cohesion (cf. “IBM BPM”).

There will need to be some product integration points to be able to really sell this as an integrated suite of tools rather than a “solution” patched together with professional services. First, they need to bring together a common process modeling environment; second, the same goes for an event/process monitoring environment. Third, they need to consider the touchpoints within application development: although data integration and application integration will be designed using the existing Progress products, these have to be seamlessly integrated into Savvion’s process application development environment. There are likely areas of integration at the engine level, too, but getting the developer and analyst-facing tools integrated first is key to acceptance, and therefore sales, of an integrated solution.

Another consideration will be a software-as-a-service offering: Savvion already has inroads in this with their BPO market, although they haven’t yet announced any consumer-facing SaaS products. Bates stated that Progress considers SaaS “an important paradigm”, which I would translate as “we know that we have to do it, but aren’t there yet”. Pushing BPM and CEP to mid-range and smaller companies is going to require a strong SaaS offering, as well as providing a platform for larger enterprises to use for piloting and testing.

Because the acquisition has already closed, or is closing within the next day or two, Progress and Savvion sales and partner channels are already being brought together; the same will happen soon for marketing teams. As always happens in this case, there will be some losses, but given the small degree of overlap in product functionality, they’ll probably need most of the skills from both sides to make this work. Dr. K. has stated that he’ll stay with Progress, although his role hasn’t been announced.

The BPM+CEP equation is becoming increasingly important as organizations focus on operational responsiveness, and I think that it’s particularly significant that Progress appointed Bates – formerly co-founder and CTO of Apama before their acquisition by Progress – to the CTO position during the time when they must have been negotiating to acquire Savvion. Clearly, Progress sees BPM+CEP as an important mix, too.

 

Disclosure: Savvion has been my client within the past year, for creating a webinar and internal strategy reports, although we have no active projects at this time.

Software AG Technology Innovation Fair

For once, I don’t need to travel to see a (mini) vendor conference: Software AG has taken it on the road and is here in Toronto this morning. I wanted to get an update of what’s happening with webMethods since I attended their user conference in Miami last November, and this seemed like a good way to do it. Plus, they served breakfast.

Susan Ganeshan, SVP of Product Management, started the general keynote with a mention of Adabas and Natural, the mainframe (and other platforms) database and programming language that drive so many existing business applications, and was likely the primary concern of many of the people in the room. However, webMethods and CentraSite are critical parts of their future strategy and formed the core of the rest of the keynote; both of these have version 8 in first-customer-ship state, with general availability before the end of the year.

First, however, she talked about Software AG’s acquisition of IDS Scheer, and how ARIS fits into their overall plan, following on today’s press release about how Software AG has now acquired 90% of IDS Scheer’s stock, which should lead to a delisting and effective takeover. She discussed their concept of enterprise BPM, which is really just the usual continuous improvement cycle of strategize/discover and analyze/model/implement/execute/monitor and control that we see from other BPMS vendors, but pointed out that whereas Software AG has traditionally focused on the implement and execute parts of the cycle, IDS Scheer handles the other parts in a complementary fashion. The trick, of course, will be to integrate those seamlessly, and hopefully create a shared model environment (my hope, not her words). They are also bringing a process intelligence suite to market, but no details on that at this time.

Interesting message about the changing IT landscape: I’m not sure of the audience mix between Adabas/Natural and webMethods, but I have to guess based on her “intro to BPM” slides that it is heavily weighted towards the former, and that the webMethods types are more focused on web services than BPM. She also invokes the current mantra of every vendor presenter these days about how the new workforce has radically different expectations about what their computing environment should look like (“why can’t I google for internal documents?”); I completely agree with this message, although I’m sure that most companies don’t yet have that as a high priority since much of the new workforce is just happy to have a job in this economy.

She discussed the value of CentraSite – or at least of SOA governance – as being a way to not just discover services and other assets, but to understand dependencies and impacts, and to manage provisioning and lifecycle of assets.

A few of the BPM improvements:

  • Echoing a common message from BPMS vendors this week, she talked about their composite application environment: a portal-like dynamic workspace that can be created by a user or analyst by dragging portlets around, then saved and shared for reuse. This lessens the need for IT resources for UI development, and also allows users to rearrange their workspace the way that works best for them.
  • They’ve also added ad hoc collaboration, which allows a process participant to route work to people who are not part of the original process; it’s not clear if they can add steps or subprocesses to the structured process, or whether this is a matter of just routing the task at its current step to a previously unidentified participant.
  • They integrate with Adobe Forms and Microsoft InfoPath, using them for forms-driven processes that use the form data directly.
  • They’ve integrated Cognos for reporting and analytics; it sounds like there are some out of the box capabilities that run without additional licensing, but if you want to make changes, you’ll need a Cognos license.

Since the original focus of webMethods was in B2B and lower-level messaging, she also discussed the ESB product, particularly how they can provide high-speed, highly-available messaging services across widespread geographies. They can provide a single operational console across a diverse trading network of messaging servers. There’s a whole host of other improvements to their trading networks, EDI module and managed file transfer functionality; one interesting enhancement is the addition of a BPEL engine to allow these flows to be modeled (and presumably executed) as BPEL.

They have an increased focus on standards, and new in version 8 are updates to XPDL and BPEL support, although they’re still only showing BPMN 1.1 support. They also have some new tooling in their Eclipse-based development suite.

She laid out their future vision as follows:

  • Today: IT-driven business, with IT designing business processes and business dictating requirements
  • 2009 (um…isn’t that today?): collaborative process discovery and design; unified tooling
  • 2010: business rules management and event processing; schema conformance
  • 2012: personalized, smart-healing processes; centralized command and control for deployment and provisioning
  • 2014: business user self-service and broad collaboration without organizational boundaries; elastic and dynamic infrastructure

She finished up with a brief look at AlignSpace for collaborative process discovery; I’m sure that someday, they will approve my request for a beta account so that I can take a closer look at this. 🙂 It’s not only for process discovery and modeling, however: AlignSpace will also provide a marketplace of resources (primarily services) related to processes in particular vertical industries.

They have a complete fail on both wifi and power here, but I no longer care: my HP Mini has almost six hours of battery life, and my iPhone plan allows me to tether the netbook and iPhone to provide internet access (at least in Canada).

Process Design Slam 2009 – The Final Judgement #SAPTechEd09 #BPXslam09

To wrap up the proceedings from last night, I was asked to critique the efforts of the groups and pick a winner: as it turned out, I was the only judge. Each of the groups did great work, and I want to call out some of the specific efforts:

  • The Business Use Case group had a great written story, including a lot of cultural and social background for our fictional city in order to provide context for the implementation.
  • The BPM Methodologies group had excellent documentation on the wiki, including graphics and charts to make it clear how the methodologies fit with the other groups.
  • The Business Rules group were stars at collaboration with the other groups, in part because everyone quickly realized the importance of business rules to data, UI and process, and solicited their input.
  • The UI and Dashboards group created mockups of monitoring dashboards that provide a starting point for future design slam work.
  • The Collaborative Modeling group led at international collaboration, using Gravity (process modeling within Google Wave) interactively with team members in Europe during the session, and produced a business process model.
  • The Service Implementation group also kicked off implementation, creating a service orchestration process model as a starting point.

In general, everyone seemed to have a good understanding of the importance of data, rules and process, but there could have been better cross-pollination between the groups; in future design slams, that could be helped by requiring some group members to move partway through the evening in order to ensure that there is a better understanding on both sides, something that is fairly common in real-life businesses where people are seconded from one department to another for part of a project. Although a certain amount of collaboration did occur, it remained one area that requires more work. I saw one tweet that referred to the design slam as crowdsourced rather than collaborative, although I’m not sure that I would say that: crowdsourcing usually has more of a flavor of individuals contributing in order to achieve their own goals, whereas this was a collaboration with common goals. However, those goals were a bit fragmented by group.

Another issue that I had was the lack of an architectural view of process design: although all of the groups are contributing to a common process (or set of processes), there is little thought around the transformations required to move the process list developed by the Business Use Case group to the process model developed by the Collaborative Modeling group to the process design developed by the Service Implementation group. In enterprise architecture terms, this is a case of transforming models from one layer to another within the process column of the architecture (column 2 if you’re a Zachman fan); understanding these transformations is key so that you don’t reinvent the process at each layer. One of the goals of model-driven design is that you don’t do a business-level process model, then redraw it in another tool; instead, the business-level process model can be augmented with service-level information to become an executable process without recreating the model in another tool. In reality, that often doesn’t happen, and the business analyst draws a process in one tool (such as Visio, or in the case of the design slam, Gravity), then IT redraws it in a tool that will create an executable process (NetWeaver in this case). I have a couple of suggestions here:

  • Combine the Business Use Case and Collaborative Modeling groups into a single group, since they are both doing high-level business analysis. This would allow the process list to be directly modeled in the same group without hand-off of information.
  • Reconsider the use of tools. Although I have a great deal of appreciation for Gravity (I am, after all, a geek), the fact that it does not share a model with the execution environment is problematic since the two groups creating process models were really off doing their own thing using different tools. Consider using NetWeaver 7.2, which has a business analyst perspective in the process composer, and having the business use case/collaborative modeling group create their initial non-technical models in that environment, then allow the service implementation team to add the technical underpinnings. The cool Wave collaboration won’t be there, or maybe only as an initial sketching tool, but the link will be made between the business process models and the executable models.

When it came down to a decision, my choice of the winner was more a product of the early state of the design slam rather than the efforts or skills of the group: I suspect that my view would change if I were judging in Vienna or Bangalore when the process is further along. I selected the Business Use Case group as the winner at this point based on the four judging criteria: although they failed to include alternative media, their story was clear and well-written, it fit well with the other groups’ efforts, and they used good social and collaborative methods within their group for driving out the initial solutions.

The winning team was made up of Greg Chase, Ulrich Scholl and Claus von Riegen, all of SAP, with input from a few others as subject-matter experts on public utilities and electricity production; they started the discussions on pricing plans that ended up driving much of the Business Rules group’s work. Ulrich also has solar cells on his house that connect to the grid, so he has in-depth knowledge of the issues involved with micro-generation, and was very helpful at determining the roles involved and how people could take on multiple roles. They leveraged a lot of the content that was already on the wiki, especially references to communities with experience in micro-generation and virtual power plants. Besides this initial leg up on their work, they were forced to work fast to produce the initial use cases and processes, since that provided necessary input to the other groups to get started with their work, which left them with more of the evening to write a great story around the use case (but, apparently, not enough time to add any graphics or multimedia).

There was a huge amount of effort put into the design slam, both in the preceding weeks through conference calls and content added to the wiki, and at the session last night in Phoenix. I believe that a huge amount of groundwork has been laid for the design slams upcoming in Vienna and Bangalore, including process model, service orchestration diagrams, business rules decision tables, and monitoring dashboard mockups.

I had a great time last night, and would happily participate in a future process design slam.

Process Design Slam 2009 #SAPTechEd09 #BPXslam09

8pm

We’re just getting started with the Process Design Slam: one of the face-to-face sessions that make up the collaborative design process that started a couple of months ago on the Design Slam wiki. Marilyn Pratt has identified the six groups that will each work on their part of the design, collaborating between groups (a.k.a. poaching talent) as required, and even bringing in people from the Hacker Night and Business Objects events going on in the same area.

  • Business Use Case, led by Greg Chase
  • Collaborative Modeling, led by David Herrema
  • Business Rules, led by James Taylor
  • Service Implementation, led by John Harrikey
  • BPM Methodologies, led by Ann Rosenberg
  • UI and Dashboards, led by Michelle Crapo

Right now, everyone has formed into initial groups based on their interests, and is having some initial discussions before the food and beer arrives at 8:30. Since there was an initial story and process model developed by the online community, everyone is starting at something close to a common point. Participants within a group (and even the leaders) could change throughout the evening.

By the end of the night, each team will have created a story about their work, and give a 5-minute presentation on it. The story must include additional media such as video and images, and in addition to the presentation, it must be documented on the wiki. Each story must also be related to the output of the other teams – requiring some amount of collaboration throughout the evening – and include pointers on what worked and didn’t work about their process, and what they would do differently in the future.

At that point, the judging panel, which includes me plus Marc Rosson, Uli Scholl, Ann Rosenberg and Dick Hirsch, will render our judgment on the creations of the groups based on the following criteria:

  • Clarity and completeness of the story on the wiki, particularly if it could be understood without the presentation.
  • Creative use of media.
  • How well this story ties into the overall storyline of the night.
  • The social process that was used to create the story.

I’m floating around between groups to listen in on what they’re doing and some of their initial thoughts.

8:30pm

Beer o’clock. The Business Rules team is still deep in conversation, however, and Business Use Case comes over to them to ask for help in bringing the business rules and business use case together. Business Use Case outlines the actors that they have identified, and the high-level business processes that they have identified in addition to the initial business process of bringing new consumer-producers online.

9pm

BPM Methodologies has a much wider view than just this project: developing methodologies that can be used across (SAP) BPM projects, including assessing the business process maturity of an organization in order to determine where they need to start, and identifying the design roles. In the context of the design slam, they will be helping to coordinate movement of people between the teams in order to achieve the overall goals.

9:30pm

Service Implementation – viewed by groups such as Business Use Case as “the implementers” – have revised the original process map from a service standpoint; looking at the services that were required led to a process redesign. They are using the Composite Designer to model the service orchestration, including the interfaces to the services that they need and external services such as FirstLook, a wind assessment service based on location data. In their service orchestration process, they assume that the process is initiated with the data gathered from a user interface form, and they focus primarily on the automated process steps. Ginger Gatling doesn’t let me leave the table until I tell them what they have to do to win; I advise them to update the wiki with their story.

9:50pm

The Collaborative Modeling group is modeling the business process using Gravity, online with a couple of participants in Europe. This is a process model from a business standpoint, not an executable model; there is no concept of the linkage between this and what is being done by the Service Implementation team. I suggest that they should head over there to compare processes, since these should (at some level) just be different perspectives on the same process.

10pm

Business Use Case is identifying the necessary processes based on their earlier collaboration with Business Rules: this has given them a good understanding of business case, goals and incentives. They’re considering both human and automated usages, and have fed their results to the UI, Business Rules and Collaborative Modeling teams.

10:10pm

Business Rules states that they’ve had to gather information from numerous sources, and the challenge is to sequence it properly: data is captured by the UI, but is driven by the Business Use Case. They didn’t work with the Collaborative Modeling group directly, but there are links between what they do and what’s happening in the process. They’re also interested in using historical usage data to determine when to switch consumers between usage plans.
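The pricing-plan rules they’re describing map naturally onto a decision table: rows of conditions over the captured data, with the first matching row selecting the plan. As a minimal sketch of that idea – the plan names and kWh thresholds here are invented for illustration, not the group’s actual rules:

```python
# Decision table as an ordered list of (predicate over usage data, resulting plan).
# First matching row wins; the final always-true row is the default.
PLAN_RULES = [
    (lambda u: u["monthly_kwh"] < 200 and u["produces_power"], "micro-producer"),
    (lambda u: u["monthly_kwh"] < 200,                          "basic"),
    (lambda u: u["monthly_kwh"] < 1000,                         "standard"),
    (lambda u: True,                                            "high-volume"),
]

def select_plan(usage):
    # evaluate rows top to bottom, as in a typical decision table
    for predicate, plan in PLAN_RULES:
        if predicate(usage):
            return plan

select_plan({"monthly_kwh": 150, "produces_power": True})  # → "micro-producer"
```

Keeping the rules as an ordered, data-driven table (rather than buried in process logic) is also what makes their other idea practical: re-running the table against historical usage data to decide when to switch a consumer to a different plan.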

10:20pm

UI and Dashboards managed to recruit a developer who is actually coding some of their interfaces; they were visited by many of the other groups to discuss the UI aspects, since the data gathered by the UI drives the rest of the process and rules, and the data generated by the process drives the dashboard interfaces. They feel that they had the best job since they could just be consumers and visualize the solutions that they would like to have.

10:35pm

Presentations start. Marilyn Pratt is being the MC, and Greg Chase is wrangling the wiki to show what has been documented by each of the groups. Half of the Service Implementation team just bailed out. I have to start paying attention now. Checking out the wiki pages and summarizing the presentations:

  • Business Use Case worked with the UI, Collaborative Modeling and Business Rules teams, since those teams required the business use cases in order to start their work. They developed a good written story including cultural/social background about the fictional city where the power generation plan would go into effect. They defined the roles that would be involved (where one person could take on more than one role, such as a consumer that is also a producer), and the processes that are required in order to handle all of the use cases. They did not use any presentation/documentation media besides plain text.
  • BPM Methodologies had excellent documentation with the use of graphics and tables to illustrate their points, but this was a quite general methodology, not just specific to this evening’s activities. They worked briefly with the other groups and created a chart of the activities that each of these groups would do relative to the different phases in the methodology. I found the methodology a bit too waterfall-like, and not necessarily a good fit with the more agile collaborative methods needed in today’s BPM.
  • Business Rules focused on the rules related to signing up a new user with the correct pricing plan, documenting the data that must be collected and an initial decision table used to select a plan, although no graphics or other non-text media. They worked with the Business Use Case team and the UI team to drive the underlying business use cases and data collection.
  • UI and Dashboards created the initial mockups that can be used as a starting point for the design slam in Vienna in a couple of weeks. They worked with Business Rules and Business Use Case in order to nail down the required user data inputs, and what is required for monitoring purposes, and included some great graphics of the monitoring dashboards (although not the data collection form).
  • Collaborative Modeling used Gravity (process modeling in Google Wave) not just for modeling with the group around the table, but also with participants in Germany and the Netherlands. They included photos of the team as well as screen snaps of the Gravity Wave that they created, although the text of the story documented on the wiki isn’t really understandable on its own. I’m not sure that they spent enough time with other groups, especially the Service Implementation group.
  • Service Implementation talked to the Business Rules and UI teams to discuss rules and data, but felt that they were running blind since there wasn’t enough of the up-front work done for them to do any substantial work. They used placeholders for a lot of the things that they didn’t know yet, and modeled the service orchestration. The documentation in the wiki is very rudimentary, although includes the process map that they developed; it’s not clear, however, how the process model developed in Collaborative Modeling relates to their map.

11:30pm

And now, on to the judging – I’ll write up the critique and results in a later post.

Can packaged applications ever be Lean? #BTF09

Chip Gliedman, George Lawrie and John Rymer participated in a panel on packaged applications and Lean.

Rymer argued that packaged apps can never be Lean, since most are locked down, closed engines where the vendor controls the architecture, they’re expensive and difficult to upgrade, they provide more functions than customers will ever use, they offer a single general UI for all user personas, and each upgrade includes more crap that you don’t need. I tend to be on his side in this argument about some types of apps (as you might guess of someone who used to write code for a living), although I’m also a fan of buy over build because of that elusive promise of a lower TCO.

Gliedman argued the opposite side, pointing out that you just can’t build the level of functionality that a packaged application provides, and there can be data and integration issues once you abandon the wisdom of a single monolithic system that holds all your data and rules. I tend to agree with respect to functionality, such as process modeling: you really don’t want to build your own graphical process modeler, and the alternative is hacking your own process together using naked BPEL or some table-driven kludge. Custom coding also does not guarantee any sort of flexibility, since many changes may require significant development projects (if you write bad code, that is), whereas a packaged app may be more configurable.

It’s never a 100% choice between packaged apps and custom development, however: you will always have some of each, and the key is finding the optimal mix. Lean packaged apps tend to be very fit-to-purpose, but that means that they become more like components or services than apps: I think that the key may be to look at composing apps from these Lean components rather than building Lean from scratch. Of course, that’s just service-oriented architecture, albeit with REST interfaces to SaaS services rather than SOAP interfaces to internal systems.

There are cases where Lean apps are completely sufficient for purpose, and we’re seeing a lot of that in the consumer Web 2.0 space. Consider Gmail as an alternative to an Exchange server (regardless of whether you use Outlook as a desktop client, which you can do with either): less functionality, but for most of us, it’s completely sufficient, and no footprint within an organization. SaaS, however, doesn’t necessarily mean Lean. Also, there are a lot of Lean principles that can be applied to packaged application deployment, even if the app itself isn’t all that Lean: favoring modular applications; using open source; and using standards-based apps that fit into your architecture. Don’t build everything, just the things that provide your competitive differentiation where you can’t really do what you need in a packaged app; for those things where you’re doing the same as every other company, suck it up and consider a packaged app, even if it’s bulky.

Clearly, Gliedman is either insane or a secret plant from [insert large enterprise vendor name here], and Rymer is an incurable coder who probably has a ponytail tucked into his shirt collar. 🙂 Nonetheless, an entertaining discussion.

Patterns for Business Process Implementations #GartnerBPM

Benoit Lheureux from Gartner’s Infrastructure and Architecture group gave a presentation on process implementation patterns. I think that he sees BPM as just part of SOA, and presents it as such, but I’m willing to give him a pass on that.

He discussed five styles of flow management in SOA:

  1. Microflows: fine-grained services implemented via flows amongst software components. This is a process from a software development standpoint, not a business-level process: probably 3GL code snippets assembled into what we old-timers might refer to as a “subroutine”. 🙂
  2. Service composition: coarse-grained services implemented by assembling fine-grained flows (microflows). This may be done with a BPMS tool, but is low-level service composition rather than business processes.
  3. Straight-through process: automating business processes involving multiple services across systems, but without human intervention.
  4. Workflow: pretty much the same as STP, but with human intervention at points in the process.
  5. Semi-structured processes: a combination of structured processes with unstructured activities or collaboration.

He has some good strategic planning assumptions based on these five styles, such as that 75% of companies will use at least three different products to implement at least three different styles of flows. His primary focus, however, is on B2B: how internal processes connect to multi-enterprise processes, and the ultimate goal of shared process execution across enterprises. This led to the four B2B flow management styles:

  1. Blind document/transaction exchange: loosely-coupled, with each partner managing their own internal processes, and no visibility outside their own processes.
  2. Intelligent document/transaction exchange: visibility across the shared process to provide a shared version of the truth, such as a BAM dashboard that provides an end-to-end view of an order-to-cash process across enterprises. Although this isn’t that popular yet, it is providing significant benefits for companies that are implementing it, and Lheureux estimates that 50% of B2B relationships will include this by 2013.
  3. Multi-enterprise applications: shared execution of a process that spans the enterprises, such as vendor-managed inventory. This may be hosted by one of the partners, or may be hosted by a third-party service provider.
  4. Multi-enterprise BPMS and rules: centralized processes and rules, such as shared compliance management on a shared process. By 2013, he predicts that at least 40% of new multi-enterprise integration projects will leverage BPMS technology.

He showed a chart that I’ve seen at earlier conferences on identifying process characteristics, classifying your processes as case management, form-driven workflow, content collaboration, multiparty transactional workflow, participant-driven workflow, and optimization of network relationships based on the unit of work, process duration, degree of expertise required, exception rate, and critical milestones that progress work. Then, consider using BPMS technology rather than code when there are specific process characteristics such as complexity and changeability.

The final recommendations: don’t try to use the same tool to handle every type of process implementation, but be aware of which ones can be best handled by a BPMS (and by different types of BPMS) and which are best handled in code.

Tutorial: enabling flexibility in process-aware information systems #BPM2009

Manfred Reichert of Ulm University and Barbara Weber of University of Innsbruck presented a tutorial on the challenges, paradigms and technologies involved in enabling flexibility in process-aware information systems (PAIS). Process flexibility is important, but you have to consider both build time flexibility (how to quickly implement and configure new processes) and run time flexibility (how to deal with uncertainty and exceptional cases during execution), as well as their impact on process optimization.

We started by looking at the flexibility issues inherent in the imperative approach to BPM, where pre-defined process models are deployed and executed, and the execution logs monitored (in other words, the way that almost all BPMS work today). As John Hoogland discussed this morning, there are a number of flexibility issues at build time due to regional process variations or the lack of sufficient information about decisions to build them into the process model. There are also flexibility issues at run time, mostly around exception handling and the need for ad hoc changes to the process. As all this rolls back in to the process analyst through the execution monitoring, it can be used to optimize the process model, which requires flexibility in evolving the process model and in handling the impact on work in progress. The key problem is that there are way too many variants in most real-life processes to realistically model all of them: there needs to be a way to model a standard process, then allow user-driven configuration (either explicitly or based on the instance parameters) at run time. The Provop approach presented in the tutorial allows for selective enabling and disabling of process steps in a master model based on the instance conditions, with a lot of the research based on the interaction between the parameters and the soundness of the resultant models.
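The Provop idea of deriving a variant from a master model can be sketched in a few lines. This is purely my own illustration of the concept, not code from the research: the model is a step list, and each change option deletes steps when its context condition holds.

```python
# Hypothetical sketch of Provop-style variant configuration: a master
# process model plus change options applied based on instance context.
# All step and option names here are invented for illustration.

MASTER_MODEL = ["receive_claim", "check_policy", "assess_damage",
                "manual_review", "calculate_payout", "send_payment"]

# Each option deletes steps from the master model when its condition holds
CHANGE_OPTIONS = [
    {"condition": lambda ctx: ctx["amount"] < 1000,
     "delete": ["manual_review"]},      # small claims skip manual review
    {"condition": lambda ctx: ctx["region"] == "EU",
     "delete": []},                     # EU variant keeps all steps
]

def configure_variant(master, options, context):
    """Derive a process variant by applying every option whose condition holds."""
    to_delete = set()
    for opt in options:
        if opt["condition"](context):
            to_delete.update(opt["delete"])
    return [step for step in master if step not in to_delete]

# A small US claim: the manual review step is disabled at configuration time
variant = configure_variant(MASTER_MODEL, CHANGE_OPTIONS,
                            {"amount": 500, "region": "US"})
```

The real research problem, as noted above, is verifying that any combination of enabled options still yields a sound model, which a naive filter like this does not check.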

Late binding and late modeling approaches use a pre-specified business process with one or more placeholder activities, then the placeholder activities are replaced with a process fragment at run time either from a pre-determined set of process fragments or a process fragment assembled by the user from existing activity templates (the latter is called the “pockets of flexibility” approach, a name that I find particularly descriptive).
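A rough sketch of late binding, again using invented names rather than anything from a specific PAIS: the pre-specified process carries a placeholder activity that is resolved to a concrete fragment only when the instance runs.

```python
# Illustrative sketch of late binding: a placeholder activity in a
# pre-specified process is replaced with a process fragment at run time,
# chosen from a pre-determined fragment library.

PROCESS = ["register_patient", "<placeholder:treatment>", "discharge"]

FRAGMENT_LIBRARY = {
    "surgery": ["pre_op_check", "operate", "post_op_care"],
    "medication": ["prescribe", "administer", "observe"],
}

def bind_placeholder(process, placeholder, fragment_name, library):
    """Replace a placeholder activity with a concrete fragment at run time."""
    fragment = library[fragment_name]
    result = []
    for step in process:
        if step == placeholder:
            result.extend(fragment)   # the late-bound fragment
        else:
            result.append(step)
    return result

# At run time, this instance is bound to the medication fragment
instance = bind_placeholder(PROCESS, "<placeholder:treatment>",
                            "medication", FRAGMENT_LIBRARY)
```

In the “pockets of flexibility” variant, the fragment would instead be assembled by the user from activity templates rather than picked whole from the library.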

Up to this point, the focus has been on changes to the process model to handle variability that is part of normal business, rather than true exceptions. Next, we looked at runtime exception handling, such as the failure or unavailability of a web service that causes the normal process to halt. Exceptions that are expected (anticipated) can be handled with compensation, with the compensation events and handlers built into the process model; unexpected exceptions may be managed with ad hoc process changes to that executing instance. Ad hoc process changes can be a bit tricky: they need to be done at a high level of abstraction in order to be understandable to the user making the change, yet the correctness of the changes must be validated before continuing. This ability needs to be constrained to a subset of the users, and those users may require some assistance to make the changes correctly.
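The anticipated-exception case can be sketched as classic compensation: when a later step fails, the handlers for previously completed steps are run in reverse order. This is a minimal illustration with made-up step names, not any particular engine’s semantics.

```python
# Sketch of built-in compensation for an anticipated exception: if a step
# fails, previously completed steps are compensated in reverse order.
# Step names and the failure are invented for illustration.

def run_with_compensation(steps):
    """steps: list of (do, undo) callables; undo completed work on failure."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except RuntimeError:
        for undo in reversed(completed):   # compensate in reverse order
            undo()
        return "compensated"
    return "completed"

log = []

def book_hotel():
    log.append("book_hotel")

def cancel_hotel():
    log.append("cancel_hotel")

def book_flight():
    raise RuntimeError("flight service unavailable")   # anticipated failure

result = run_with_compensation([(book_hotel, cancel_hotel),
                                (book_flight, lambda: None)])
# result == "compensated"; log == ["book_hotel", "cancel_hotel"]
```

The unexpected-exception case has no such pre-built handler, which is exactly why it falls back to the ad hoc instance changes described above.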

This was a good tutorial, but I wanted to catch the process mining session, so I skipped out at the break and missed the last half.

Community participation in a hosted BPM system #BPM2009 #BPMS2’09

Rania Khalaf of IBM’s T.J. Watson Research Center presented a paper on enabling community participation for workflows through extensibility and sharing, specifically within a hosted BPM system.

She is focused on three areas of collaboration: extension activities (services), collaborative workflow modeling, and collaboration on executing workflow instances. There are two key aspects to this: method and technical enablement, and the business and security aspects.

This is really about the community and how they participate: developers who create extension activities, process designers who create process models and include the extension activities, and participants in the executing workflows. For extension activities, they’re leveraging the Bite language and runtime, which uses REST-based interaction, and allows developers to create extensions in their language of choice and publish them directly in a catalog that is available to process designers. Workflow designers can provide feedback on the extensions via comments. Essentially, this is a sort of collaborative SOA using REST instead of WS-*: developers create extensions and publish them, and designers can pull from a marketplace of extensions available in the hosted environment. Much lighter weight than most corporate SOA efforts, and undeniably more nimble.
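The publish-and-comment loop between developers and designers can be sketched as a tiny catalog. This is purely illustrative; it does not reproduce the Bite runtime or its actual REST API, and all names are my own.

```python
# Minimal sketch of the collaboration described above: developers publish
# REST-invocable extension activities to a shared catalog, and process
# designers browse it and leave feedback. Names are hypothetical.

class ExtensionCatalog:
    def __init__(self):
        self._extensions = {}

    def publish(self, name, endpoint, author):
        """A developer publishes an extension activity to the catalog."""
        self._extensions[name] = {"endpoint": endpoint, "author": author,
                                  "comments": []}

    def comment(self, name, designer, text):
        """A workflow designer leaves feedback on an extension."""
        self._extensions[name]["comments"].append((designer, text))

    def browse(self):
        """Process designers see the marketplace of available extensions."""
        return sorted(self._extensions)

catalog = ExtensionCatalog()
catalog.publish("credit_check", "https://example.com/credit", "dev1")
catalog.comment("credit_check", "designer1", "Works well; needs a timeout option")
```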

Process models can be shared, either for read-only or edit access, with others both within and outside your organization in order to gather feedback and improve the process. Once created, the URL for instantiating a process can be sent directly to end users to kick off their own processes as designed.

This is part of several inter-related IBM efforts, including the newly-released BPM BlueWorks and the still-internal Virtuoso Business Mashups, and seems to fall primarily under the LotusLive family. This is likely an indication of what we’ll see in BlueWorks in the future; they’ll be adding more social capabilities such as process improvement and an extensions marketplace, and addressing the business and security issues.

The Open Group’s Service Integration Maturity Model and SOA Governance Framework

I had a chance last week for a pre-release briefing from The Open Group’s Chris Harding, Forum Director for SOA and Semantic Interoperability, on two new standards that they are releasing today: the Service Integration Maturity Model (OSIMM) and the SOA Governance Framework. These are both vendor-neutral (although several large vendors were involved in their creation), and are available for free on The Open Group’s site. In their words:

OSIMM will provide an industry recognized maturity model for advancing the continuing adoption of SOA and Cloud Computing within and across businesses. The SOA Governance Framework is a free guide for organizations to apply proven governance standards that will accelerate service-oriented architecture success rates.

OSIMM is a strategic planning tool: it is used to assess where you are in your SOA initiatives relative to a standard, vendor-neutral maturity model, and help create a roadmap for how to move on to the higher levels of maturity. At the heart of it is the OSIMM matrix, with maturity levels as columns progressing from left to right, and the different organizational dimensions being measured as rows: business view, governance and organization, methods, applications, architecture, information, and infrastructure and management.

OSIMM Matrix

Within each cell of the matrix are the indicators for that dimension and maturity level: for example, if you’re using object-oriented modeling methods, that indicates that your methods are at level 2, whereas using service-oriented modeling would move you up to level 4 or 5 in the methods dimension. Behind this matrix, OSIMM includes a full set of maturity indicators and attributes, plus assessment questions that organizations can use to determine where they are in terms of maturity: each dimension can be (and likely will be) at a different level of maturity.
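The per-dimension nature of the assessment can be sketched as follows. This is a hypothetical simplification of my own (OSIMM’s actual indicators and scoring questions are richer): each dimension is scored independently, so the resulting profile can sit at different levels per dimension.

```python
# Hypothetical sketch of an OSIMM-style self-assessment: each of the seven
# dimensions is scored independently against the maturity levels. The
# scoring rule here (minimum reported indicator score) is my own
# simplification, not the standard's method.

DIMENSIONS = ["business view", "governance and organization", "methods",
              "applications", "architecture", "information",
              "infrastructure and management"]

def assess(answers):
    """Map per-dimension indicator scores (1-7) to a maturity profile."""
    profile = {}
    for dim in DIMENSIONS:
        scores = answers.get(dim, [])
        # simplified: the dimension sits at the lowest indicator it satisfies
        profile[dim] = min(scores) if scores else None
    return profile

# e.g. service-oriented modeling methods (level 4+), but a less mature
# architecture dimension; unanswered dimensions stay unassessed
profile = assess({"methods": [4, 5, 4], "architecture": [2, 3]})
```

The point of the profile shape is the one made above: a roadmap is built per dimension, not from a single overall score.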

This has the potential to be an incredibly useful self-assessment tool for organizations: rather than the very product-specific measurements that you see from vendors (“Not using our product? Oh, you’re not at all advanced in your SOA efforts…”), this is independent of whatever products you’re using: it’s more about the type of products, and the methods and governance that you’re using to apply them. You’ll be able to use it to understand services and SOA, assess the maturity of your organization, and develop a roadmap to reach your goals.

The first version of the OSIMM Technical Standard will be available here for free download, although that link was still not working at the time that I wrote this. Other industry-specific standards organizations are free to use OSIMM directly, or extend it with their own dimensions and indicators as required.

The other major announcement today is about the SOA governance framework, which helps an organization to define their governance processes and methods. This is more of a practical framework for defining policies aligned between business and IT, aiding communication and capturing vendor-neutral best practices. This includes best practices around both lifecycle management and portfolio management, for both services and service-based solutions.

Governance Processes

Lifecycle and portfolio management are quite different: for example, a service lifecycle would include the idea or motivator for the service, the service definition, service creation, putting the service into operation, modifying and maintaining the service, and eventually retiring the service from operation. Service portfolio management is more concerned with reusability, and the practice of looking in the portfolio in the early stages of the service lifecycle to see if there is an existing service that suits the requirements. The same applies to solution lifecycle and portfolio management; this differs from other types of solution governance since there may be service-specific issues, such as composition, to consider.

This generic reference model for SOA governance is provided as a standard, to be used by companies to create (and constantly monitor and update) their own specific governance model and best practices. The SOA governance framework may be used in the context of another governance framework, such as COBIT or ITIL; the SOA working group did a mapping of COBIT to this framework as part of the framework development process, and plan to do more in the future in order to help organizations preserve their investment in COBIT/ITIL training and implementation.

The SOA Governance Framework will be available here for free download.