BPM Think Tank Day 1: Paul Harmon

Phil Gilbert kicked off the morning with welcome and logistics before turning it over to Paul Harmon, who gave a keynote entitled “Does the OMG have any business getting involved in business process management?” I love a little controversy first thing in the morning.

He started out with a fairly standard view of the history of BPM and process improvement, from Rummler-Brache and TQM in the 80’s to BPR in the 90’s to BPM in the 00’s. He pointed out that BPM has become a somewhat meaningless term, since it means process improvement, the software used to automate processes, a management philosophy of organizing businesses around their processes (the most recent Gartner viewpoint) and a variety of other things.

He broke down BPM into enterprise level, process level and implementation level concerns (with a nice pyramid graphic), and gave some examples of each. For example, at the enterprise level, we have frameworks such as SCOR (for supply chain) and high-level organizational issues such as the Business Process Maturity Model (BPMM); Harmon questions whether OMG should be involved at this level since its primary focus is on technology standards. Process-level concerns are more about modelling, documenting and improving processes, and spreading that process culture throughout the organization. Implementation-level concerns include the automation of processes, including execution and monitoring, plus the training required to support these new processes.

He made an interesting distinction between stable processes, which need to be efficient and productive, and dynamic processes, which need to be flexible. Processes that are newer or need to be changed frequently are in the dynamic range; in my opinion, these tend to be the processes that are competitive differentiators for an organization. IBM has recently thrown the concept of “value nets” into the mix as an alternative to value chains, but Harmon feels that both are valid concepts: possibly using value chains for stable processes, which might even be candidates for outsourcing, and value nets for more dynamic processes.

He also made a distinction between process improvement, process redesign and process reengineering, a division that I find a bit false since it’s more of a spectrum than he shows.

There was an interesting bit on model-driven architecture (MDA) and how it moves from platform-independent models (in BPMN) to platform-specific models (also in BPMN) to implementation (e.g., J2EE); for example, there may be parts of a process modelled at the platform-independent level that are never automated, hence aren’t directly mapped to the platform-specific level.

He put forward the idea that process is where business managers and IT meet, that different organizations may have the implementation level driven by either the business side or the IT side, and that there’s often poor coordination at this level.

He then discussed BPMS and came up with yet another taxonomy: integration-centric, employee-centric, document-centric, decision-centric and monitoring-centric. Do we need another way to categorize BPMS? Are these divisions all that meaningful, since the vendors all keep jostling for space in the segment that they think that the analysts are presenting as most critical? More importantly, Harmon sees that although the BPM suite vendors (those that combine process execution/automation with modelling, BAM, rules and all the other shiny things) are leading the market now, the platform vendors (IBM, Microsoft, etc.) will grow to dominate the market in years to come. I’m not sure that I agree with that unless those platform vendors seriously improve their offerings, which are currently disjointed and much less functional than the BPM suites.

Harmon’s slides will be available under OMG-BPM on the BPTrends site. There’s definitely some good stuff in here, particularly in the standards and practices that fit into each level of the pyramid.

Good thing that I’m blogging offline in Windows Live Writer, since the T-mobile connectivity keeps dropping, and isn’t smart enough to keep a cookie to stay logged in, but requires a new login each time that its crappy service cuts out. Posting may come in chunks since it will likely require me to dash out to the lobby to get a decent signal.

Webinar Q&A

I gave a webinar last week, sponsored by TIBCO, on business process modeling; you’ll be able to find a replay of the webinar, complete with the slides, here. Here are the questions that we received during the webinar that I didn’t have time to answer on the air:

Q: Any special considerations for “long-running” processes – tasks that take weeks or months to complete?

A: For modeling long-running processes, there are a few considerations. First, you need to be sure that you’re capturing sufficient information in the process model to allow the processes to be monitored adequately, since these processes may represent risk or revenue that must be accounted for in some way. Second, you need to ensure that you’re building in the right triggers to release the processes from any hold state, and that there’s some sort of manual override if a process needs to be released from the hold state early due to unforeseen events. Third, you need to consider what happens when your process model changes while processes are in flight, and whether those processes need to be updated to the new process model or continue on their existing path; this may require some decisions within the process that are based on a process version, for example.
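To make that last point a bit more concrete, here’s a minimal sketch in Python — all names and the versioning scheme are invented for illustration, not taken from any particular BPMS — of an in-flight instance that carries the model version it was started under, along with a release from the hold state that can be forced early by a manual override:

```python
from dataclasses import dataclass

# Hypothetical sketch, not any particular BPMS: each in-flight instance records
# the model version it was started under, so a routing decision can keep old
# instances on their original path when the process model changes mid-flight.

CURRENT_MODEL_VERSION = 2

@dataclass
class ProcessInstance:
    instance_id: str
    model_version: int
    on_hold: bool = True

def release(instance: ProcessInstance, manual_override: bool = False) -> str:
    """Release an instance from its hold state, either via the normal trigger
    or because an operator forced an early release due to an unforeseen event."""
    instance.on_hold = False
    note = "released early by manual override; " if manual_override else "released by normal trigger; "
    # Version-based routing: older instances continue on the path they started on.
    if instance.model_version < CURRENT_MODEL_VERSION:
        return note + f"continue on original v{instance.model_version} path"
    return note + f"follow current v{CURRENT_MODEL_VERSION} path"

if __name__ == "__main__":
    print(release(ProcessInstance("claim-001", model_version=1), manual_override=True))
    print(release(ProcessInstance("claim-002", model_version=2)))
```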

Q: Do you have a recommendation for a requirements framework that guides analysts on these considerations, e.g. PRINCE2?

A: I find most of the existing requirements frameworks, such as use cases, to be not oriented enough towards processes to be of much use with business process modeling. PRINCE2 is a project management methodology, not a requirements framework.

Q: The main value proposition of SOA is widely believed to be service reuse. Some of the early adopters of SOA, though, have stated that they are only reusing a small number of services. Does this impact the value of the investment?

A: There’s been a lot written about the “myth” of service reuse, and it has proved to be more elusive than many people thought. There are a few different philosophies of service design that are likely impacting the level of reuse: some people believe in building all the services first, in isolation from any calling applications, whereas others believe in only building services that are required to meet a specific application’s needs. If you do the former, then there’s a chance that you will build services that no one actually needs — unlike Field of Dreams, if you build it, they may not come. If you do the latter, then your chance of service reuse is greatly reduced, since you’re effectively building single-purpose services that will be useful to another application only by chance.

The best method is more of a hybrid approach: start with a general understanding of the services required by your key applications, and apply some good old-fashioned architectural/design common sense to map out a set of services that will maximize reusability without placing an undue burden on the calling applications. By considering the requirements of more than one application during this exercise, you will at least be forcing yourself to consider some level of reusability. There’s a lot of argument about how granular is too granular for services; again, that’s mostly a matter that can be resolved with some design/development experience and some common sense. It’s not, for that matter, fundamentally different from developing libraries of functions like we used to do in code (okay, like I used to do in code) — it’s only the calling mechanism that’s different, but the principles of reusability and granularity have not changed. If you designed and built reusable function libraries in the past, then you probably have a lot of the knowledge that you need to design — at least at a conceptual level — reusable services. If you haven’t built reusable function libraries or services in the past, then find yourself a computer science major or computer engineer who has.
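As a quick illustration of that function-library analogy, here’s a Python sketch — invented names, not any real service interface — showing that a service designed against the needs of more than one application is more likely to be reusable than one shaped around a single caller’s screen:

```python
# Hypothetical illustration of the function-library analogy: the same granularity
# and reusability questions apply whether the unit is a function in a shared
# library or a service behind a web service call.

# Too narrow: built for one application's screen, so nobody else can reuse it.
def get_name_and_balance_for_statement_screen(account_id: str) -> str:
    return f"{account_id}: ACME Corp, $1,234.56"

# Designed against the needs of more than one application: each caller composes
# what it needs from a general-purpose result.
def get_account(account_id: str) -> dict:
    return {"id": account_id, "name": "ACME Corp", "balance": 1234.56}

# Application 1: statement rendering.
acct = get_account("A-42")
print(f"{acct['name']} owes ${acct['balance']:.2f}")

# Application 2: a collections threshold check, reusing the same service.
print("send to collections" if get_account("A-42")["balance"] > 1000 else "ok")
```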

Once you have your base library of services, things start getting more interesting, since you need to make sure that you’re not rewriting services that already exist for each new application. That means that the services must be properly documented so that application designers and analysts are aware of their existence and functionality; they must provide backwards compatibility so that if new functionality is added into a service, it still works for existing applications that call it (without modifying or recompiling those applications); and most important of all, the team responsible for maintaining and creating new services must be agile enough to be able to respond to the requirements of application architects/designers who need new or modified services.
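Here’s a trivial Python sketch of that backwards-compatibility point (hypothetical names; the same principle applies whether the unit is a web service or a shared function): new capability arrives as an optional parameter with a sensible default, so callers built against the old signature keep working untouched.

```python
# Hypothetical example of backward-compatible service evolution: a new capability
# is added as an optional parameter with a default, so existing callers that
# don't know about it continue to work without modification or recompilation.

def get_customer(customer_id: str, include_history: bool = False) -> dict:
    """Return basic customer data; optionally include transaction history,
    the new capability added for a later application."""
    record = {"id": customer_id, "name": "ACME Corp"}
    if include_history:
        record["history"] = ["2007-03-01 purchase", "2007-04-15 redemption"]
    return record

# Original caller, written before the history feature existed -- still works.
print(get_customer("C-1001"))

# New caller that uses the added functionality.
print(get_customer("C-1001", include_history=True))
```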

As I mentioned on the webinar, SOA is a great idea but it’s hard to justify the cost unless you have a “killer application” like BPM that makes use of the services.

Q: Can the service discovery part be completely automated… meaning no human interaction? Not just discovery, but service usage as well?

A: If services are registered in a directory (e.g., UDDI), then theoretically it’s possible to discover and use them in an automated fashion, although the difficulty lies in determining which service parameters map to which internal parameters in the calling application. It may be possible to make some of these connections based on name and parameter type, but every BPMS that I’ve seen requires that you manually hook up services to the process data fields at the point where the service is called.
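For illustration only, here’s a tiny Python sketch (made-up field names) of the kind of automatic matching I’m describing: pair up service parameters with process data fields where both name and type agree, and flag the rest for manual hookup.

```python
# Hypothetical sketch: automatically match service parameters to process data
# fields when both the name and the type agree; everything else still needs
# the manual hookup that today's BPMS tools require.

service_params = {"customer_id": str, "order_total": float, "ship_date": str}
process_fields = {"customer_id": str, "order_total": float, "due_date": str}

auto_mapped = {
    name: name
    for name, param_type in service_params.items()
    if process_fields.get(name) == param_type
}
unmapped = [name for name in service_params if name not in auto_mapped]

print("mapped automatically:", auto_mapped)  # customer_id, order_total
print("needs manual hookup:", unmapped)      # ship_date
```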

Q: I’d be interested to know if you’re aware of a solid intro or training in the use and application of BPMN. I’ve only found general intros that tend to use the examples in the standard.

A: Bruce Silver offers a comprehensive course in BPMN, which I believe is available as either an online or classroom course.

Q: Does Data Object mean adding external documentation like a Word document into the BPM flow?

A: The origin of the data object is, in part, to serve the requirements of document-centric BPM, where the data object may represent a document (electronic, scanned paper, or a physical paper document) that travels with the workflow. Data objects can be associated with a sequence flow object — the arrows that indicate the flow in a process map — to show that the data artifact moves along that path, or can be shown as inputs and outputs to a process to show that the process acts on that data object. In general, the data object would not be documentation about the process, but would be specific to each instance of the process.

Q: Where is the BPMN standard found?

A: BPMN is now maintained by OMG, although they still link through to the original BPMN website.

Q: What is the output of a BPMN process definition? Any standard file types?

A: BPMN does not specify a file type, and as I mentioned in the webinar, there are three main file formats that may be used. The most commonly used by BPA and BPM vendors, including TIBCO, is XPDL (XML Process Definition Language) from the Workflow Management Coalition. BPEL (Business Process Execution Language) from OASIS has gained popularity in the past year or so, but since it was originally designed as a web service orchestration language, it doesn’t support all of the BPMN constructs, so there may be some loss of information when mapping from BPMN into BPEL. BPDM (Business Process Definition Metamodel), a soon-to-be-released standard from OMG, promises to do everything that XPDL does and more, although it will be a while before its level of adoption nears that of XPDL.

Q: What’s the proper perspective BPM implementers should have on BPMN, XPDL, BPEL, BPEL4People, and BPDM?

A: To sum up from the previous answer: BPMN is the only real contender as a process notation standard, and should be used whenever possible; XPDL is the current de facto standard for interchange of BPMN models between tools; BPDM is an emerging standard to watch that may eventually replace XPDL; BPEL is a web service orchestration language (rarely actually used as an execution language in spite of its name); and BPEL4People is a proposed extension to BPEL that’s trying to add in the ability to handle human-facing tasks, and the only standard that universally causes laughter when I name it aloud. This is, of course, my opinion; people from the integration camp will disagree — likely quite vociferously — with my characterization of BPEL, and those behind the BPDM standard will encourage us all to cast out our XPDL and convert immediately. Realistically, however, XPDL is here to stay for a while as an interchange format, and if you’re modeling with BPMN, then your tools should support XPDL if you plan to exchange process models between tools.

I’m headed for the BPM Think Tank next week, where all of these standards will be discussed, so stay tuned for more information.

Q: How would one link the business processes to the data elements or would this be a different artifact altogether?

A: The BPMN standard allows for the modeler to define custom properties, or data elements, with the scope depending on where the properties are defined: when defined at the process level, the properties are available to the tasks, objects and subprocesses within that process; when defined at the activity level, they’re local to that activity.

Q: I’ve seen some swim lane diagrams that confuse more than illuminate – lacking specific BPMN rules, do you have any personal usage recommendations?

A: Hard to say, unless you state what in particular you find confusing. Sometimes there is a tendency to try to put everything in one process map instead of using subprocesses to simplify things — an overly-cluttered map is bound to be confusing. I’d recommend a high-level process map with a relatively small number of steps and few explicit data objects to show the overall process flow, where each of those steps might drill down into a subprocess for more detail.

Q: We’ve had problems in the past trying to model business processes at a level that’s too granular. We ended up making a distinction between workflow and screen flow. How would you determine the appropriate level of modeling in BPM?

A: This is likely asking a similar question to the previous one, that is, how to keep process maps from becoming too confusing, which is usually a result of too much detail in a single map. I have a lot of trouble with the concept of “screen flow” as it pertains to process modeling, since you should be modeling tasks, not system screens: including the screens in your process model implies that there’s not another way to do this, when in fact there may be a way to automate some steps that will completely eliminate the use of some screens. In general, I would model human tasks at a level where a task is done by a single person and represents some sort of atomic function that can’t be split between multiple people; a task may require that several screens be visited on a legacy system.

For example, in mutual funds transaction processing (a particular favorite of mine), there is usually a task “process purchase transaction” that indicates that a person enters the mutual fund purchase information into their transaction processing system. In one case, that might mean that they visit three different green screens on their legacy system. Or, if someone wrote a nice front-end to the legacy system, it might mean that they use a single graphical screen to enter all the data, which pushes it to the legacy system in the background. In both cases, the business process is the same, and should be modeled as such. The specific screens that they visit in order to complete that task — i.e., the “screen flow” — shouldn’t be modeled as explicit separate steps, but would exist as documentation for how to execute that particular step.

Q: The military loves to be able to do self-service, can you elaborate on what is possible with that?

A: Military self-service, as in “the military just helped themselves to Poland?” 🙂 Seriously, BPM can enable self-service because it allows anyone to participate in part of a process while monitoring what’s happening at any given step. That allows you to create steps that flow out to anyone in the organization or even, with appropriate network security, to external contractors or other participants. I spoke in the webinar about creating process improvement by disintermediation; this is exactly what I was referring to, since you can remove the middle-man by allowing someone to participate directly in the process.

Q: In the real world, how reliable are business process simulations in predicting actual cycle times and throughput?

A: (From Emily) It really depends on the accuracy of your information about the averages of your cycles. If they are relatively accurate, then it can be useful. Additionally, simulation can be useful in helping you to identify potential problems, e.g. breakpoints of volume that cause significant bottlenecks given your average cycle times.

I would add that one of the most difficult things to estimate is the arrival time of new process instances, since rarely do they follow those nice even distributions that you see when vendors demonstrate simulation. If you can use actual historical data for arrivals in the simulation, it will improve the accuracy considerably.
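To make that concrete, here’s a minimal Python sketch (with made-up numbers) contrasting the usual smooth exponential arrivals with arrivals bootstrapped from observed historical interarrival gaps, which preserves the bursts that actually cause the bottlenecks:

```python
import random

# Hypothetical sketch with made-up numbers: generating instance arrival times
# for a simulation run. Resampling interarrival gaps from a real history log
# preserves the bursts and lulls that smooth theoretical distributions hide.

random.seed(42)

# Interarrival gaps in minutes, as pulled from a historical audit log.
historical_gaps = [1, 1, 2, 1, 30, 2, 1, 45, 3, 2, 1, 60]

def arrivals_from_history(n):
    """Bootstrap n arrival times by resampling observed interarrival gaps."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.choice(historical_gaps)
        times.append(t)
    return times

def arrivals_exponential(n, mean_gap):
    """The 'nice even distribution' that vendor demos tend to assume."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_gap)
        times.append(t)
    return times

mean_gap = sum(historical_gaps) / len(historical_gaps)
print("from history:    ", arrivals_from_history(6))
print("from exponential:", [round(t, 1) for t in arrivals_exponential(6, mean_gap)])
```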

Q: Would you have multiple lanes for one system? For example, a legacy system that has many applications in it, and therefore many lanes in the legacy pool?

A: It depends on how granular you want to be in modeling your systems, and whether the multiple systems are relevant to the process analysis efforts. If you’re looking to replace some of those systems as part of the improvement efforts, or if you need to model the interactions between the systems, then definitely model them separately. If the applications are treated as a single monolithic system for the purposes of the analysis, then you may not need to break them out.

Q: Do you initially model the current process as-is in the modeling tool?

A: I would recommend that you at least do some high-level process modeling of your existing process. First of all, you need to establish the metrics that you’re using for your ROI, and often these aren’t evident until you map out your process. Secondly, you may want to run simulations in the modeling tool on the existing process to verify your assumptions about the bottlenecks and costs of the process, and to establish a baseline against which to compare the future-state process.

Q: Business managers: concerns – failure to achieve ROI?

A: I’m not exactly sure what this question means, but assume that it relates to the slide near the end of the webinar that discusses role changes caused by BPM. Management and executives are most concerned with risk around a project, and they may have concerns that the ROI is too ambitious (either because the new technology fails or too many “soft” ROI factors were used in the calculation) and that the BPM project will fail to meet the promises that they’ve likely made to the layers of management above them. The right choice of ROI metrics can go a long way towards calming their fears, as can educating them on the significant benefits of process governance that will result from the implementation of BPM. Management will now have an unprecedented view of the current state and performance of the end-to-end process. They’ll also have more comprehensive departmental performance statistics without manual logging or cutting and pasting from several team reports.

Q: I am a manager in a MNC and I wanted to know how this can help me in my management. How can I use it in my daily management? One example please?

A: By “MNC” I assume that you mean “multi-national corporation”. The answer is no different than for any other type of organization, except that you’re likely to be collaborating with other parts of your organization in other countries, and hence have the potential to see even greater benefits. One key area for improvement that can be identified with business process modeling, then implemented in a BPMS, is all of the functional redundancy that typically occurs in multi-nationals, particularly those that grow by acquisition. Many functional areas, both administrative/support and line-of-business, will be repeated in multiple locations, for no better reason than that it wasn’t possible to combine them before technology was brought to bear on the problem. Process modeling will allow you to identify areas that have the potential to be combined across different geographies, and BPM technology allows processes to flow seamlessly from one location to another.

Q: How much detail is allowed in a process diagram (such as the name of the supplier used in a purchase order process or if the manager should be notified via email or SMS to approve a loan)? Is process visibility preferred compared to good classic technical design, in the BPM world?

A: A placeholder for the name of a supplier would certainly be modeled using a property of the process, as would any other custom data elements. As for the channel used for notifying the manager, that might be something that the manager can select himself (optimally) rather than having that fixed by the process; I would consider that to be more of an implementation detail although it could be included in the process model.

I find your second question interesting, because it implies that there’s somehow a conflict between good design and process visibility. Good design starts with the high-level process functional design, which is the job of the analyst who’s doing the process modeling; this person needs to have analytical and design skills even though it’s unlikely that they do technical design or write code. Process visibility usually refers to the ability of people to see what’s happening within executing processes, which would definitely be the result of a good design, as opposed to something that has to be traded off against good design. I might be missing the point of your question, so feel free to add a comment to clarify.

Q: Are there any frameworks to develop a BPM solution?

A: Typically, the use of a BPMS implies (or imposes) a framework of sorts on your BPM implementation. For example, you’re using their modeling tool to draw out your process map, which creates all the underpinnings of the executable process without you writing any code to do so. Similarly, you typically use graphical mapping functionality to map the process parameters onto web services parameters, which in turn creates the technical linkages. Since you’re working in a near-zero-code environment, there’s no real technical framework involved beyond the BPMS itself. I have seen cases where misguided systems integrators create large “frameworks” — actually custom solutions that always require a great deal of additional customization — on top of a BPMS, which tends to demote the BPMS to a simple queuing system. Not recommended.

There were also a few questions specifically about TIBCO, for which Emily Burns (TIBCO’s marketing manager, who moderated the webinar) provided answers:

Q: Is TIBCO Studio compatible with Windows Vista?

A: No, Vista is not yet supported.

Q: Are there some examples of ROI from the industry verticals?

A: On TIBCO’s web site, there are a variety of case studies that discuss ROI here: http://www.tibco.com/solutions/bpm/customers.jsp. Additionally, these are broken down into some of the major verticals here: http://www.tibco.com/solutions/bpm/bpm_your_industry.jsp

Q: Is there any kind of repository or library of “typical” processes? I’m particularly interested in clinical trials.

A: TIBCO’s modeling product ships with a large variety of sample processes aggregated by industry.

And lastly, my own personal favorite question and answer, answered by Emily:

Q: What’s the TLA for BPM+SOA?

A: RAD 🙂

Survey on BPMN

You may have seen this announced on other BPM blogs, but there’s currently a survey out on the use of, and satisfaction with, BPMN by process modellers. This is part of a PhD research project by Jan Recker at the Queensland University of Technology in Brisbane, Australia (a city that I remember fondly, in spite of the fact that it was pouring rain last time that I was there).

As a perk for completing the survey, you’ll get the summarized results of the survey, plus access to recent studies on BPMN, so it’s worth doing if you’re using BPMN. The details, from Jan’s request to me:

BPMN is gaining huge momentum in practitioner communities, to the point that even those vendors who were initially reluctant to adopt it can no longer completely ignore it. But what exactly are the factors that drive this acceptance? How satisfied are end users of BPMN with the notation? Do user experiences of BPMN match those reported by BPA tool vendors?

Jan Recker from the BPM Research Group at Queensland University of Technology is undertaking a worldwide survey on the use of BPMN by process modellers to shed light into this question. You can help Jan by completing the survey available here:

http://www.bpm.fit.qut.edu.au/projects/acceptance/survey/BPMN/.

The best way to contact Jan is via email: [email protected]

I’m hoping that if I publish his request, maybe they’ll sponsor me to come down and speak at their BPM conference in September 🙂

TUCON: The Face of BPM

Thursday morning, and it seems like a few of us survived last night’s baseball game (and the after-parties) to make it here for the first session of the day. This will be my last session of the conference, since I have a noon flight in order to get back to Toronto tonight.

Tim Stephenson and Mark Elder from TIBCO talked about Business Studio, carrying on from Tim’s somewhat shortened bit on Business Studio on Tuesday, when I took up too much of our joint presentation time. The vision for the new release coming this quarter is that one tool can be used by business analysts, graphical tools developers and operational administrators by allowing for different perspectives, or personas. There are nine key functions, ranging from business process analysis and modelling to WYSIWYG forms design to service implementation.

The idea of the personas within the product is similar to what I’ve seen in the modelling tools of other BPMS vendors: each has a different set of functions available and has some different views onto the process being modelled. Tim gave some great insight into how they considered the motivations and requirements of each of the types of people that might use the product in order to develop the personas, and showed how they mapped out the user experience flow with the personas overlaid to show the interfaces and overlaps in functionality. This shows very clearly the overlap between the business analyst and developer functionality, which is intentional: who does what in the overlap depends on the skills of the particular people involved.

As we heard in prior sessions, Business Studio provides process modelling using BPMN, plus concept modelling (business domain data modelling) using UML to complement the process model. There’s a strong focus on how BPM can consume web services and BusinessWorks services, because much of the audience is likely developers who use TIBCO’s other products like BusinessWorks to create service wrappers around legacy applications. At one point between sessions yesterday, I had an attendee approach me and thank me for the point that I made in my presentation on Tuesday about how BPM is the killer app for SOA (a point that I stole outright from Ismael Ghalimi — thanks, Ismael!), because it helped him to understand how BPM creates the ROI for SOA: without a consumer of services, the services themselves are difficult to justify.

We saw a (canned) demo of how to create a simple process flow that called a number of services that included a human-facing step, a database call to a stored procedure, a web service call based on introspecting the WSDL and performing some data mapping/transformation, a script task that uses JavaScript to perform some parameter manipulation, and an email task that allows the runtime process instance parameters to be mapped to the email fields. Then, the process definition is exported to XPDL, and imported into the iProcess Modeler in order to get it into the repository that’s shared with the execution engine. Once that’s done, the process is executable: it can be started using the standard interface (which is built in General Interface), and the human-facing steps have a basic form UI auto-generated.

It is possible to generate an HTML document that describes a process definition, including a graphical view of the process map and tabular representations of the process description.

As I mentioned in other posts, and in many posts that I’ve made about BPA tools, there’s no shared model between the process modeller and the execution environment, which is a serious issue for process agility and round-tripping unless you do absolutely nothing to the process in the iProcess Modeler except use it as a portal to the execution repository. TIBCO has brought a lot (although not all) of the functionality of the Modeler into Studio, and are working towards a shared model between analysts and developers; they believe that they can remove the need for Modeler altogether over time. There’s no support at this time, however, for deploying directly from Studio, that is, Studio won’t plug directly into the execution engine environment. Other vendors who have gone the route of a downloadable disconnected process modeller or a separate process discovery tool are dealing with the same issue; ultimately, they all need to make this new generation of modelling tools as integrated with the execution environment as those that they’re replacing in order to eliminate the requirement for round-tripping.

TUCON: BPM Evolution and Roadmap

At this point, it makes more sense to start labelling the posts by session title rather than presenter, since we’re getting into some pretty detailed breakout topics. This one was presented by Roger King, Director of BPM Product Strategy & Management at TIBCO, and Justin Brunt, product manager for iProcess.

Most of the technical people working in TIBCO’s BPM group seem to be vestiges of the Staffware acquisition; many of them are still based in the UK, where the development is still done.

They started out with a review of what’s happened in the products in the past 12 months:

  • Business Studio 1.x, a standalone modelling and simulation product aimed at business analysts; the free downloadable version released in November already has more than 10,000 downloads. Modelling is done in BPMN, and XPDL is supported for import and export — necessary for even getting the models into the iProcess Suite for execution, since there is no shared model with the process execution environment. It also supports imports from Visio and ARIS. There are some more advanced features as well: a hierarchical organization of business processes and associated assets, and process simulation with SLA indicators and reports.
  • iProcess Suite 10.5, with improved work queue performance and scalability to support more concurrent users, better performance for sorting and filtering (always slow with most BPM products) and faster startup time. It also included an enhanced web client based on General Interface, with GI or custom forms support and a number of other new functions.
  • iProcess Insight 2.0, the BAM product, which I reviewed in a post yesterday.

What’s coming in the near future:

  • Business Studio 2.0, with support for the full BPMN 1.0 specification and XPDL 2.0. I keep meaning to download Business Studio and do some comparative analysis with some of the other downloadable modelling products, but I may wait until version 2.0. I wrote about a few of the new features from Tim Stephenson’s talk yesterday, but here’s a recap. In the process analyst perspective: design patterns/fragments to speed design, refactoring, concept modelling with UML support, import/export of EPC/FAD from ARIS, and custom XSLT translations to XPDL. In the process architect perspective: service registry, native services such as email and database connectors, direct server deployment and version control.
  • iProcess Suite core component support for some new platforms, including 64-bit Windows Server and Red Hat Linux; direct deployment from Studio to Engine (although it’s not clear if this is via a shared model or just automates the import/export process); and new audit trail entries. They’ve also simplified installation.
  • Web services capability, with support for WS security at the transport and SOAP layer, and support for withdraw actions and delayed release.

They went on to discuss a number of key themes in product development for this year and beyond.

They’re gradually migrating to a single modelling/design environment — Business Studio — although they’re still not quite there yet; this will provide a more consistent experience for both business and IT users of the design tools. This supports the move to full model-driven development by allowing for the easy integration of forms design into the Eclipse-based environment, which can in turn generate GI, JSP or other runtime forms for the updated iProcess web client. Business rules definition will also be in the Eclipse-based design environment, although it’s not clear if they’re using a third-party BRE or have their own rules technology. The old modelling environment, Business Modeler, isn’t going away any time soon, but new feature development will focus on Business Studio, which will encourage migration. Like most vendors using this tactic to get existing customers off an old product, I expect that they’ll hear grumbling about this for years.

The out-of-the-box web client will be simplified and made to look more like the familiar Outlook client, with improved performance. The UI will also be exposed as components and services to allow them to be included in custom applications or portals, and they’ll ship an out-of-the-box BPM portal using TIBCO’s portal platform to show how this can be used. There will be better MS-Office integration and an Eclipse-based desktop application.

They’re also going to provide a project collaboration portal for BPM projects, to allow people developing TIBCO BPM applications to work together, and they’re adding in some governance capabilities to help handle the lifecycle of BPM projects and assets.

King mentioned my presentation from yesterday directly, and commented that they’re going to be supporting more of the BPA tools for import soon, including Proforma. They’ve obviously identified that it’s important to be extremely open from both a standards and BPA support standpoint.

Next on the list is goal-driven BPM, or virtual processes, where there may be too many process alternatives to model explicitly and the optimal runtime process has to be generated based on process parameters and environmental factors. This sounds like fuzzy future stuff, but would be great if they can pull it off.

They’re also developing workforce management and more complex resource modelling for the purposes of business optimization.

There was a brief point at the end about preparing for the next generation of SOA, although no time to talk about what this means; I would have loved it if this session had been a bit longer.

TUCON: Simon Hayward

I’m in my first breakout session of the day, State of BPM – Trends and Drivers for Success: A Leading Analyst Perspective by Gartner, and although the schedule shifted slightly to accommodate overtime speakers in the breakout session, the speaker decided to just go ahead and start anyway so I have no idea who I’m listening to. He’s certainly familiar, I’m sure that I’ve seen him at a Gartner event before, but with the recent departure of Jim Sinur (and, I have heard, Michael Melenovsky), I’m not sure who’s pushing BPM at Gartner these days besides Janelle Hill, and this guy at the front of the room is definitely not her. If I can get some wifi in here, I’ll look up my coverage of the Gartner events and that will likely jog my memory. Oh, wait, I think it’s Simon Hayward, who I referred to previously as the Energizer Bunny of BPM for his high-energy flying tour of BPM at Gartner. Given that Hayward usually does high-profile keynotes, it’s interesting that he’s here doing one of five simultaneous breakout sessions — Gartner’s obviously a little thin on BPM resources these days.

Unfortunately, I’ve seen so many Gartner presentations now that this sort of state of the union address looks pretty rehashed to me. Gartner’s business process maturity model takes a starring role, as it has for the last several months; I first saw it in a webinar that I hosted with Appian and Jim Sinur last October, when it was still labelled “the road to BPM” instead of BPMM. He went on to talk about the value of BPM to enterprises, and moving from a functionally-driven to a process-driven organization, also seen in that October webinar and many other places.

His six critical success factors for a BPM project (or for that matter, any IT project):

  • Strategic alignment
  • Culture and leadership
  • People
  • Governance
  • Methods
  • Information Technology

In moving from implicit processes within applications to explicit processes in a cloud above the infrastructure, he sees three paths: BPM suites, process-aware middleware (he puts TIBCO in this category), and process orchestration in composite applications.

Then, the now-ubiquitous gear diagram of BPMS, with the process orchestration engine and business services repository in the middle, surrounded by the 10 necessary features and functions required to play in this market. He moved quickly through a number of other subjects, such as how BPM and SOA are orthogonal dimensions when implementing processes (nice characterization), and the complementary relationship between BPM, BI and BAM. He finished up with a slide that I’ve seen many times about assigning responsibilities between IT and business, still valid although I think that some of the responsibilities are shifting more than is indicated here.

I realize that Gartner is a draw at a conference like this, but I’m hoping to see a little more innovative material out of them soon.

TUCON: Tom Laffey and Matt Quinn

Last in the morning’s general session were Tom Laffey, TIBCO’s EVP of products and technologies, and Matt Quinn, VP of product management and strategy. Like Ranadivé earlier, they talked about enterprise virtualization: positioning messaging, for example, as virtualizing the network layer, and BPM as enterprise process virtualization. I’m not completely clear whether virtualization is just the current analyst-created buzzword in this context.

Laffey and Quinn tag-teamed quite a bit during the talk, so I won’t attribute specific comments to either. TIBCO products cover a much broader spectrum than I do, so I’ll focus just on the comments about BPM and SOA.

TIBCO’s been doing messaging and ESB for a long time, and some amount of the SOA talk is about incremental feature improvements such as easier use of adapters. Apparently, Quinn made a prediction some months ago that SOA would grow so fast that it would swallow up BPM, so that BPM would just be a subset of SOA. Now, he believes (and most of us from the BPM side agree 🙂 ) that BPM and SOA are separate but extremely synergistic practices/technologies, and both need to be developed to a position of strength. To quote Ismael Ghalimi, BPM is SOA’s killer application, and SOA is BPM’s enabling infrastructure, a phrase that I’ve included in my presentation later today; like Ismael, I see BPM as a key consumer of what’s produced via SOA, but they’re not the same thing.

They touched on the new release of Business Studio, with its support for BPMN, XPDL and BPEL as well as UML for some types of data modelling. There’s some new intelligent workforce management features, and some advanced user interface creation functionality using intelligent forms, which I think ties in with their General Interface AJAX toolkit.

Laffey just defined “mashup” as a browser-based event bus, which is an interesting viewpoint, and likely one that resonates better with this audience than the trendier descriptions.

They discussed other functionality, including business rules management, dynamic virtual information spaces (the ability to tap into a real-time event message stream and extract just what you want), and the analytics that will be added with the acquisition of Spotfire. By the way, we now appear to be calling analytics “business insight”, which lets us keep the old BI acronym without the stigma of the business intelligence latency legacy. 🙂

They finished up with a 2-year roadmap of product releases, which I won’t reproduce here because I’d hate to have to embarrass them later, and some discussion of changes to their engineering and product development processes.

BrainStorm BPM Day 1: Bruce Silver track keynote

There’s an awful lot of keynotes in this conference: a couple of overall sessions this morning, now “track keynotes” for each of the four tracks within the BPM conference. I’m in Bruce Silver’s New Directions in BPM Tools and Technology session, where he started by taking a gentle poke at Gartner, saying that BPM is more than a management discipline (Gartner’s most recent definition of BPM).

He started out discussing process modelling, and how it’s inherently a business activity, not an IT activity, which speaks directly to the issue of the tools used for modelling: is there a handoff from a modelling-only tool to an execution environment at the point of business to IT handoff, or is the model actually just a business view of the actual implementation? With all of the vendor demos that I’ve done lately (I know, I have yet to document many of them here, but I’m getting to it), I’ve had quite a focus on the distinction between having a model shared between business and IT, and having a separate BPA tool that models much more than just the processes that will be implemented in a BPMS. Bruce positions this as “BPA versus BPMN” views towards describing process modelling, and doesn’t see them in conflict; in fact, he thinks that they’re ignoring each other, a viewpoint that I’d have to agree with given that BPA initiatives rarely result in any processes being transferred to some sort of execution engine.

Bruce, who often accuses me of being too nice, takes a stab at the vendors in a couple of areas. First is with their BPMN implementations, specifically that of events: he states that many of the execution engines just don’t support intermediate events, so the vendors conveniently forget to include those events in their BPMN modelling tool. Second is with simulation, and looking at whether a vendor’s implementation is actually a useful tool, or a “fake” feature that’s there to enable it to be checked off on an RFP, but not functional enough to make it worth using.

He has a nice way of categorizing BPMS products: by vendor speciality (e.g., integration, human-centric), by process type/use case (e.g., production workflow) and by business/IT interaction method (collaborative shared model versus handoff). This was interesting, because I wrote almost identical words two days ago in my presentation for the Shared Insights Portals and Collaboration conference that I’ll be speaking at next month; great minds must think alike. 🙂  His point, like the one that I was making in my presentation, is that most BPM products have some strengths and some weaknesses that can make or break some process automation; for example, a product focussed on human-centric workflow probably doesn’t do some nice integration tricks like mapping and transformation, or complex data objects.

He also makes a good distinction between business rules (implemented in a BRE) and routing rules (implemented in a BPMS): business rules represent corporate or departmental policies that may need to be shared across business processes, whereas routing rules are the internal logic within a process that’s just required to get through the process but don’t represent policy in any way.

Bruce thinks that BPM and SOA together is still vapour-ware for the most part: it’s what the vendors are selling but not typically what they’re delivering. In particular, he thinks that if the BPMS and the ESB are not from the same vendor, then “all bets are off” in terms of whether a BPMS will work with any particular ESB or other services environment.

The session turned out to be too short and Bruce couldn’t even finish his materials, much less take questions: it was only 45 minutes to begin with, and shortened at the beginning while Bruce waited for stragglers from the previous session to make their way upstairs.