PegaWORLD Breakout: The Process Of Everything

Setrag Khoshafian and Bruce Williams of Pega led a breakout session discussing the crossover between the internet of things (IoT) — also known as the internet of everything (IoE) or the industrial internet — and BPM as we know it. The “things” in IoT can be any physical device, from the Fitbit on my wrist to my RFID-enabled conference badge to the plane that flew me here, none of which you would think of primarily as a computing device. If you check out my coverage of the Bosch Connected World conference from earlier this year, you’ll see that there’s a lot being done in this area, and these devices are becoming full participants in our business processes. Connected devices are now pervasive in several sectors, from consumer to manufacturing to logistics; many of the interactions are between machines rather than between people and machines, enabled by automating processes and decisions over standard communication networks. There’s an explosion of products and players, and also an explosion of interest, putting us in the middle of the tipping point for IoT. There are still a number of challenges here, such as standardization of platforms and protocols: I expect to see massive adoption of dead-end technologies, but hopefully they’re so inexpensive that swapping them out for standardized platforms won’t be too painful in a couple of years.

Getting everything instrumented is the first step, but devices on their own don’t have a lot of value; as Khoshafian pointed out, we need to turn the internet of things into the process of everything. A sea of events needs to feed into a sense/respond engine that drives towards outcomes, whether a simple status outcome, a repair request, or automation and control. BPM, or at least the broad definition of intelligent BPM that includes decisions and analytics, is the perfect match for that sense and respond capability. There are widespread IoT applications for energy saving through smart homes and offices adjusting their energy consumption based on demand and environmental conditions; in my house, we have a Nest smoke/CO detector and some WeMo smart metered electrical outlets, both of which can be monitored and controlled remotely (which is what happens when a systems engineer and a controls engineer get together). I’ve seen a number of interesting applications in healthcare recently as well; Williams described nanobots being used in surgery and Google Glass used by healthcare workers, as well as the many personal health sensors available for everyday home use. Cool stuff, although many people will be freaked out by the level of monitoring and surveillance that is now possible from the devices in their home, office and public environments.
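As a concrete (if toy) illustration of that sense/respond pattern, a minimal event-to-outcome engine might look like the sketch below; all names and thresholds are my own invention, not anything from Pega.

```typescript
// Minimal sense/respond sketch: device events stream in, rules map them to
// outcomes (a status, a repair request, or a control action). All names and
// thresholds are illustrative.

interface DeviceEvent {
  deviceId: string;
  metric: string;   // e.g. "smokeLevel", "powerDraw"
  value: number;
  timestamp: Date;
}

type Outcome =
  | { kind: "status"; message: string }
  | { kind: "repairRequest"; deviceId: string }
  | { kind: "control"; deviceId: string; command: string };

// A rule senses a condition and responds with an outcome.
interface SenseRespondRule {
  matches(e: DeviceEvent): boolean;
  respond(e: DeviceEvent): Outcome;
}

const rules: SenseRespondRule[] = [
  {
    // Smoke reading above threshold: trigger a control action.
    matches: (e) => e.metric === "smokeLevel" && e.value > 0.8,
    respond: (e) => ({ kind: "control", deviceId: e.deviceId, command: "sound-alarm" }),
  },
  {
    // A metered outlet reporting zero draw when it should be on: ask for a repair.
    matches: (e) => e.metric === "powerDraw" && e.value === 0,
    respond: (e) => ({ kind: "repairRequest", deviceId: e.deviceId }),
  },
];

function handle(event: DeviceEvent): Outcome {
  const rule = rules.find((r) => r.matches(event));
  return rule
    ? rule.respond(event)
    : { kind: "status", message: `No action for ${event.deviceId}` };
}
```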

This was more of a visionary session than a practical look at using Pega products for IoT applications, although we did hear a bit about the technological ramifications in terms of authentication, integration, open standards, and managing and detecting patterns in the sheer volume of device data. Definitely some technical challenges ahead.

We’re headed off to lunch and the technology pavilion, but first I’m going to use the WeMo app on my phone to turn on the desk lamp in my home office so that my cat can snooze under it for the afternoon: a small-scale practical application of IoT.

A Vision Of Business Transformation At PegaWORLD

The second half of today’s keynote started with a customer panel of C-level executives: Bruce Mitchell, CTO at Lloyds Banking Group, Jessica Kral, CIO for Medicare & Retirement at UnitedHealthcare, and Richard Haley, CFO at the FBI, moderated by Rafe Brown, CFO at Pega. Some interesting comments there about how their organizations are transforming: a shift to customer focus while improving efficiency by reducing handoffs on inbound calls; how incremental development and faster release cycles reduce risk and improve business-IT alignment; and how big data can be used to improve context for everything from customer journeys to police investigations.

We finished the morning with new product highlights from Kerim Akgonul, Pega’s SVP of product management. Their case interface is the cornerstone of the new look of Pega: business processes are described at a high level by a simple linear stage view, with the processes that might happen at each stage listed below: very reminiscent of the simplified phase views that I’ve seen in a number of other products, both design-time and runtime. I still maintain that there are many processes that don’t lend themselves to a simple stage/phase representation, since activities from multiple phases may be happening simultaneously, but this seems to be a popular representation.

According to their customers and partners, it’s 6.4 times faster to deliver on Pega 7 than with direct Java development (assuming, of course, that Pega becomes your captive application development environment, which is not an option for many organizations), and there are definitely many capabilities in the platform and in the solutions built on that platform, such as next best action marketing, sales force automation and customer process manager. Predictive analytics is definitely assuming a higher profile as a competitive differentiator in sales, marketing, CRM and other customer-facing applications, since it can help provide better customer service as well as help meet sales goals. A recent acquisition is also giving them robust mobile support, allowing mobile and remote workers to participate fully in case management activities, while other acquisitions are providing interactive customer support and social media engagement.

Don Schuerman, CTO at Pega, joined Kerim on stage to show how all of these things can come together, with a (fictional) insurance company responding to a tweet about motorcycles with an offer for motorcycle insurance, tied directly into their back office systems for quotes as well as their call center and CRM system. They demonstrated a seamless integration between the insurance app and the call center agent’s screen, allowing the CSR to push application documents to the customer’s phone in real time. Fun demo of omni channel for integrated communication, next best action with product recommendations, and business processes for fulfillment, complete with drone delivery and helmet-mounted crash detectors.

That’s it for the day 1 keynotes at PegaWORLD 2014; we’re off to a breakout session before lunch and a tour around the technology pavilion, then an afternoon of breakouts and some roundtables with executives.

PegaWORLD Gets Big

My attendance at PegaWORLD has been spotty the past few years because of conflicts with other conferences during June, so it was a bit of a surprise to show up in DC this week to a crowd of more than 3,000 attendees — definitely now the biggest BPM conference around. The opening keynote started with Alan Trefler (Pega’s founder and CEO) talking about change, and how organizations need to become digital enterprises with the power to engage, the power to simplify and the power to change. Interestingly, SAP used the same “simplicity” message at SAPPHIRE last week: typically, this translates to a combination of hiding complexity from the business (which is not really simplification, just better window dressing) and platform rationalization (which is actually technological simplification).

As Trefler described it, Pega sees three major contributors to becoming digital enterprises: case lifecycle management as an alternative to a pure process view for the complexity of real-world business operations; next best action to predict what a customer might do based on their engagement history; and omni channel to provide a consistent customer experience on multiple channels simultaneously in an integrated fashion. These three capabilities provide a digital context based on a unified architecture, bridging (internal) work and (external) customer.

Pega has reached a size now — 3,000 employees and over a half billion in revenue — where they are fueling some of their growth through acquisitions; this is likely to challenge their ability to avoid a “Frankenstack” of technologies weirdly bolted together. They’re hitting all the buzzwords: social, mobile, analytics, cloud and internet of things, with a story of how they’re addressing each. Incidentally, I found it interesting that they still have less than 100 cloud-based production customers, although many times more are using it for development and test systems; that’s going to have to step up if they’re going to really engage with increasingly diverse organizations.

Anette Bronder from Vodafone’s enterprise delivery group took the stage to talk about their ongoing business transformation program: working to achieve simplification, standardization, digitization and globalization. They are improving their enterprise operations and infrastructure, with the goal of a set of standard products that can be delivered across all segments. Enterprise customers, making up almost 30% of their business, include big names such as Amazon and Bosch; these accounts include the communications required for logistics, manufacturing, fulfillment, the internet of things and much more, with the ultimate goal of putting a SIM card in pretty much everything. Transformation of their enterprise delivery processes is based on several factors: sourcing the right people both internally and externally; standardized processes with a common methodology leveraging best practices; governance with a single operating and delivery model across all markets and a consistent set of metrics; and common technology for order management, project management and product catalog. They are moving from manual to automated operations, and from local siloed approaches to globally standardized products and processes. They want to improve customer engagement through a case management approach, where all customer information is available for decision-making and proactive problem resolution, while improving operational efficiency and business agility. Pega is one of their technology partners, but obviously there’s a lot more involved here, including significant change management. They’re two years into their journey; it will be interesting to see this again in a year or two when they’re starting to see some real results.

Webinar On Developer-Friendly BPM And The Zero-Code Myth

I’m giving a webinar on Wednesday this week (June 11) on developer-friendly BPM and the myth of zero-code BPM when it comes to many complex integrated core business processes. It’s sponsored by camunda, who are also sponsoring a white paper that will be available following the webinar, and co-hosted by BPM.com.

As of last week, about 300 people had already registered for the webinar, so it should be a good turnout. It starts at 2pm Eastern time, and you can sign up here.

June BPM Conferences

After a month at home, I’m hitting the road for a few vendor events. First, a couple where I’m attending, but not presenting:

  • IBM Content 2014 in Toronto (so technically not hitting much of the road to get there), May 30 – this will travel to Austin, Minneapolis and Chicago in early June (but not with me)
  • SAP’s SAPPHIRENOW in Orlando, June 2-6
  • PegaWORLD in Washington DC, June 8-10

This gives me a chance to catch up on what’s been happening with their products since my last briefing, and talk to the internal teams as well as customers. For the latter two events, the vendors are covering my travel expenses but not compensating me for my time, so anything that I blog here (as usual) is my own opinion and not influenced by them.

After that, I have a couple of speaking gigs:

  • Two seminars hosted by IBM in Boston and Seattle, June 17 and 19 respectively, on new business operations imperatives (cloud, mobile, social and analytics with BPM)
  • DST’s ADVANCE Forum Europe in London, June 25, where I’ll be presenting “Designing Process-Based Applications: The Dos and Don’ts”, an updated version of the presentation that I gave at their North American conference in March

I likely won’t be blogging much from these ones since I’ll be busy presenting, but may post my slides online. Obviously, the vendors are paying my expenses as well as a speaking fee, but not for any specific coverage on my blog.

The Case For Smarter Process At IBM Impact 2014

At analyst events, I tend to not blog every presentation; rather, I listen, absorb and take some time to reflect on the themes. Since I spent the first part of last week at the analyst event at IBM Impact, then the second half across the country at Appian World, I waited until I had a chance to pull all the threads together here. IBM keeps the analysts busy at Impact, although I did get to the general session and a couple of keynotes, which were useful to provide context for the announcements that they made in pre-conference briefings and the analyst event itself.

A key theme at Impact this year was that of “composable business” (I have to check carefully every time I type that to make sure I don’t write “compostable business”, but someone did point out that it is about reuse…). I’m not sure that’s a very new concept: it seems to be about assembling the building blocks of business capabilities, processes and technologies in order to meet the current needs without completely retooling, which is sort of what we’ve all been saying that BPM, ODM and SOA can do for some years now.

Smarter Process is positioned as an enabler of composable business, and is IBM’s approach for “reinventing business operations” by enabling the development of customer-centric applications that push top-line growth, while still providing the efficiency and optimization table stakes. Supporting knowledge workers has become a big part of this, which leads to IBM’s biggest new feature in BPM: the inclusion of “basic” case management within BPM. The idea is that you will be able to support a broader range of work types on a single platform: pre-defined “structured” processes, structured processes with some ad hoc activities, ad hoc (case) work that can invoke structured process segments, and fully ad hoc work. I’ve been talking about this range of work types for quite a while, and how we need products that can range across them, because I see so few real-world processes that fit into the purely structured or the purely unstructured ends of the spectrum: almost everything lies somewhere in the middle, where there is a mix of both. In fact, what IBM is providing is “production case management”, where a designer (probably not technical, or not very technical) creates a case template that pre-defines all of the possible ad hoc activities and structured process fragments; the end user can choose which activities to run in which order, although some may be required or have dependencies. This isn’t the “adaptive case management” extreme end of the spectrum, where the end user has complete control and can create their own activities on the fly, but PCM covers a huge range of use cases in business today.

“But wait,” you say, “IBM already has case management with IBM Case Manager. What’s the difference?” Well, IBM BPM (Lombardi heritage) provides full BPM capabilities including process analytics and governance, plus basic case capabilities, on the IBM BPM platform; IBM Case Manager (FileNet heritage) provides full content and case capabilities including content analytics and governance, plus basic workflow capabilities, on the IBM ECM platform. Hmmm, sounds like something that Marketing would say. The Smarter Process portfolio graphic includes the three primary components of Operational Decision Management, Business Process Management and Case Management, but doesn’t actually specify which product provides which functionality, leaving it open for case management to come from either BPM or ICM. Are we finally seeing the beginning of the end of the split between process management in BPM and ICM? The answer to that is likely more political than technical – these products report up through different parts of IBM, turning the merging/refactoring of them into a turf war – and I don’t have a crystal ball, but I’m guessing that we’ll gradually see more case capabilities in BPM and a more complete integration with ECM, such that the current ICM capabilities become redundant, and IBM BPM will expand to manage the full spectrum of work types. The 1,000th cut may finally be approaching. Unfortunately for ICM users, there’s no tooling or migration path to move from ICM to BPM (presumably, no one is even talking about going the other way) since they are built on different infrastructure. There wasn’t really a big fuss made about this new functionality, or how it might overlap with ICM, outside the BPM analyst group; in fact, Bruce Silver quipped “IBM Merges Case into BPM but forgets to announce it”.

The new case management functions are embedded within the BPM environment, and appear fairly well integrated: although a simple web-based case design tool is used instead of the BPM Eclipse authoring environment, the runtime appears within the BPM process portal. The case detail view shows the case data, case documents and subfolders, running tasks, activities that can be started manually (including processes), and an overall status – similar enough to what you would see with any work item that it won’t be completely foreign, but with the information and controls required for case management. Under the covers, the ad hoc activities execute in the BPM (not ICM) process engine, and a copy of ECM is embedded within BPM to support the case folder and document artifacts.

The design environment is pretty simple, and very similar to some parts of the ICM design environment: required and optional ad hoc activities are defined, and the start trigger of each activity (manual, or automatic based on declarative logic or an event) is specified. Preconditions can be set, so that an activity can’t be started (or won’t automatically start) until certain conditions are met. If you need ad hoc activities in the context of a structured process, these can be defined in the usual BPM design environment – there’s no real magic about this, since ad hoc (that is, not connected by flow lines) activities are part of the BPMN standard and have been available for some time in IBM BPM. The case design environment is integrated with Process Designer and Process Center for repository and versioning, and case management is being sold as an add-on to IBM BPM Advanced.
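Pulling the pieces of that description together, here’s a hypothetical sketch (my own invented shapes, not IBM’s actual case model) of what a production case management template captures: every possible activity, whether it’s required, how it starts, and what gates it.

```typescript
// Hypothetical shape of a production case management template: the designer
// pre-defines every possible activity, whether it's required, and what
// preconditions gate it. Names are illustrative only.

type StartTrigger =
  | { mode: "manual" }                      // user starts it from the case UI
  | { mode: "auto"; when: string }          // declarative condition, e.g. "riskScore > 700"
  | { mode: "event"; eventName: string };   // started by an incoming event

interface CaseActivity {
  id: string;
  label: string;
  required: boolean;
  trigger: StartTrigger;
  preconditions: string[];     // ids of activities that must complete first
  processFragment?: string;    // a structured process fragment it invokes, if any
}

const loanReviewCase: CaseActivity[] = [
  { id: "intake", label: "Document intake", required: true,
    trigger: { mode: "manual" }, preconditions: [] },
  { id: "fraudCheck", label: "Fraud review", required: false,
    trigger: { mode: "auto", when: "riskScore > 700" }, preconditions: ["intake"] },
  { id: "approve", label: "Approval", required: true,
    trigger: { mode: "manual" }, preconditions: ["intake"],
    processFragment: "standardApprovalProcess" },
];

// An activity is startable once all of its preconditions have completed.
function startable(activity: CaseActivity, completed: Set<string>): boolean {
  return activity.preconditions.every((p) => completed.has(p));
}
```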

Aside from the case management announcement, there are some new mobile capabilities in IBM BPM: the ability to design and playback responsive Coaches (UI) for multiple form factors, and pushing some services down to the browser. These will make the UI look better and work faster, so all good there. IBM also gave a shout out to BP3’s mobile portal product, Brazos, for developing iOS and Android apps for IBM BPM; depending on whether you want to go with responsive browser or native apps as a front-end for BPM, you’re covered.

They also announced some enhancements to Business Monitor: a more efficient, high-performance pub-sub style of event handling, and the ability to collect events from any source, although the integration into case management (either in BPM or ICM) at design time still seems a bit rudimentary. They’ve also upgraded to Cognos BI 10.2.1 as the underlying platform, which brings more powerful visualizations via the RAVE engine. I have the impression that Business Monitor isn’t as popular as expected as a BPM add-on: possibly by the time that organizations get their processes up and running, they don’t have the time, energy or funds for a full-on monitoring and analytics solution. That’s too bad, since it can result in a lot of process improvement benefits; it might make sense to bundle in some of this capability to at least give a teaser to BPM customers.

In BPM cloud news, there are some security enhancements to the Softlayer-based BPM implementations, including 2-factor authentication and SAML for identity management, plus new pricing at $199/user/month with concurrent user pricing scenarios for infrequent users. What was more interesting is what was not announced: the new Bluemix cloud development platform offers decision services, but no process services.

Blueworks Live seemed to have the fewest announcements, although it now has review and approval processes for models, which is a nice governance addition. IBM can also now provide Blueworks Live in a private cloud – still hosted but isolated as a single tenant – for those who are really paranoid about their process models.

bpmNEXT 2014 Wrapup And Best In Show

I couldn’t force myself to write about the last two sessions of bpmNEXT: the first was a completely incomprehensible (to me) demo, and the second spent half of the time on slides and half on a demo that didn’t inspire me enough to actually put my hands on the keyboard. Maybe it’s just conference fatigue after two full days of this.

However, we did get a link to the Google Hangout recording of the BPMN model interchange demo from yesterday (be sure to set it to HD or you’ll miss a lot of the screen detail).

We had a final wrapup address from Bruce Silver, and he announced our vote for the best in show: Stefan Andreasen of Kapow – congrats!

I’m headed home soon to finish my month of travel; I’ll be Toronto-based until the end of April when IBM Impact rolls around.

bpmNEXT 2014 Thursday Session 2: Decisions And Flexibility

In the second half of the morning, we started with James Taylor of Decision Management Solutions showing how to use decision modeling for simpler, smarter, more agile processes. He showed what a process model looks like in the absence of externalized decisions and rules: it’s a mess of gateways and branches that basically creates a decision tree in BPMN. A cleaner solution is to externalize the decisions so that they are called as a business rules activity from the process model, but the usual challenge is that the decision logic is opaque from the viewpoint of the process modeler. James demonstrated how the DecisionsFirst modeler can be used to model decisions using the Decision Model and Notation standard, then link a read-only view of that to a process model (which he created in Signavio) so that the process modeler can see the logic behind the decision as if it were a callable subprocess. He stepped through the notation within a decision called from a loan origination process, then took us into the full DecisionsFirst modeler to add another decision to the diagram. The interesting thing about decision modeling, which is exploited in the tool, is that it is based on firmer notions of reusability of data sources, decisions and other objects than we see in process models: although reusability can definitely exist in process models, the modeling tools often don’t support it well. DecisionsFirst isn’t a rules/decision engine itself: it’s a modeling environment where decisions are assembled from the rules and decisions in other environments, including external engines, spreadsheet-based decision tables, or knowledge sources describing the decision. It also allows linking to the processes from which it is invoked, objectives and organizational context; since this is a collaborative authoring environment, it can also include comments from other designers.
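To make the contrast concrete, here’s a minimal sketch (mine, not from the demo, with made-up thresholds) of what externalizing that logic looks like: the process model invokes a single decision, and the decision table carries the branching that would otherwise be a mess of BPMN gateways.

```typescript
// Sketch of an externalized decision: instead of encoding loan-origination
// logic as a tangle of gateways, the process calls one decision whose rules
// live in a table. Thresholds and names are invented for illustration.

interface LoanApplication {
  creditScore: number;
  amount: number;
  income: number;
}

interface DecisionRule {
  applies: (a: LoanApplication) => boolean;
  outcome: "approve" | "refer" | "decline";
}

// First-hit decision table, evaluated top to bottom.
const loanDecisionTable: DecisionRule[] = [
  { applies: (a) => a.creditScore < 550, outcome: "decline" },
  { applies: (a) => a.amount > a.income * 5, outcome: "refer" },
  { applies: (a) => a.creditScore >= 700, outcome: "approve" },
  { applies: () => true, outcome: "refer" },  // default row
];

// The process model sees only this single callable decision.
function decideLoan(app: LoanApplication): "approve" | "refer" | "decline" {
  return loanDecisionTable.find((r) => r.applies(app))!.outcome;
}
```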

François Chevresson-Aubain and Aurélien Pupier of Bonitasoft were up next to show how to build flexibility into deployed processes through a few simple but powerful features. First, collaboration tasks can be added at runtime, so that a user at a pre-defined step who needs to include other users can do so even if collaboration wasn’t built in at design time. Second, process model parameters can be changed (by an administrator) at runtime, which will impact all running processes based on that model: the situation demonstrated was changing an external service connector when the service call failed, then replaying the tasks that had failed on that service call. Both of these features are intended to address dynamic environments where the situation at runtime may be different from that at design time, and how to adjust both manual and automated tasks to accommodate those differences.
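A hypothetical sketch of that second feature (this is not Bonita’s actual API, just the shape of the idea): an administrator fixes a connector parameter on the deployed model, then replays the failed tasks against the corrected value.

```typescript
// Hypothetical sketch of the runtime fix demonstrated: a process parameter
// (here, a service endpoint) is changed on the deployed model, then failed
// tasks are replayed against the new value. Not Bonita's actual API.

interface ProcessDefinition {
  id: string;
  parameters: Map<string, string>;
}

interface TaskInstance {
  id: string;
  processDefinitionId: string;
  state: "completed" | "failed" | "running";
  execute: (endpoint: string) => Promise<void>;
}

// Administrator updates the parameter once; all running instances of the
// model pick it up on their next service call.
function setParameter(def: ProcessDefinition, key: string, value: string): void {
  def.parameters.set(key, value);
}

// Replay every failed task using the corrected connector endpoint.
async function replayFailed(def: ProcessDefinition, tasks: TaskInstance[]): Promise<void> {
  const endpoint = def.parameters.get("serviceEndpoint")!;
  for (const task of tasks.filter(
    (t) => t.processDefinitionId === def.id && t.state === "failed"
  )) {
    await task.execute(endpoint);
    task.state = "completed";
  }
}
```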

We finished the morning with Robert Shapiro of Process Analytica on improving resource utilization and productivity using his Optima workbench. Optima is a tool for a serious analyst – likely with some amount of statistical or data science background – to import a process model and runtime data, set optimization parameters (e.g., reduce resource idleness without unduly impacting cycle time), simulate the process, analyze the results, and determine how to best allocate resources in order to optimize relative to the parameters. Although a complex environment, it provides a lot of visualization of the analytics and optimization; Robert actually encourages “eyeballing” the charts and playing around with parameters to fine-tune the process, although he has a great deal more experience at that than the average user. There are a number of analytical tools that can be applied to the data, such as critical path modeling, and financial parameters to optimize revenues and costs. It can also do quite a bit of process mining based on event log inputs in XES format, including deriving a BPMN process model and data correlation based on the event logs; this type of detailed offline analysis could be applied with the data captured and visualized through an intelligent business operations dashboard for advanced process optimization.
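The core loop in this kind of tool is easy to sketch, even though the real simulation and statistics are far more sophisticated; everything below (the toy simulator, the weights) is invented for illustration.

```typescript
// Sketch of the optimization loop an Optima-style tool runs: try candidate
// resource allocations, simulate the process, and score the trade-off between
// resource idleness and cycle time. The simulator and weights are toy stand-ins.

interface Allocation { role: string; headcount: number; }
interface SimulationResult { avgCycleTimeHours: number; idleFraction: number; }

// Toy simulator: more headcount shortens cycle time but raises idleness.
// A real tool replays the process model against historical arrival data.
function simulate(allocations: Allocation[]): SimulationResult {
  const total = allocations.reduce((n, a) => n + a.headcount, 0);
  return {
    avgCycleTimeHours: 40 / Math.max(total, 1),
    idleFraction: Math.min(1, total / 20),
  };
}

// Lower is better: penalize idleness without unduly impacting cycle time.
function score(r: SimulationResult, idleWeight = 0.5): number {
  return r.avgCycleTimeHours + idleWeight * r.idleFraction * 100;
}

function bestAllocation(candidates: Allocation[][]): Allocation[] {
  let best = candidates[0];
  let bestScore = Infinity;
  for (const candidate of candidates) {
    const s = score(simulate(candidate));
    if (s < bestScore) { bestScore = s; best = candidate; }
  }
  return best;
}
```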

We have one more short session after lunch, then best in show voting before bpmNEXT wraps up for another year.

bpmNEXT 2014 Thursday Session 1: Intelligence And A Bit More BPMN

Harsh Jegadeesan of SAP set the dress code bar high by kicking off the Thursday demos in a suit jacket, although I did see Thomas Volmering and Patrick Schmidt straightening his collar before the start. He also set a high bar for the day’s demos by showing how to illuminate business operations with intelligent process intelligence. He discussed a scenario of a logistics hub (such as Amazon’s), and the specific challenges of the hub operations manager who has to deal with inbound and outbound flights, and sorting all of the shipments between them throughout the day. Better visibility into the operations across multiple systems allows problems to be detected and resolved while they are still developing by reallocating the workforce. Harsh showed a HANA-based hub operations dashboard, where the milestones for shipments demarcate the phases of the value chain: from arrival to ground handling to warehouse to outbound buffer to loading and takeoff. Real-time information is pulled from each of the systems involved, and KPIs are displayed; drill-downs can show the lower-level aggregate or even individual instance data to determine what is causing missed KPIs – in the demo, shipments from certain other hubs were not being unloaded quickly enough. But more than just a dashboard, this allows the hub operations manager to add a task directly in the context of the problem and assign it (via an @mention) to someone else, for example, to direct more trucks to unload the shipments. The dashboard can also make recommendations, such as changing the flights for specific shipments to improve the overall flow and KPIs. He showed a flight map view of all inbound and outbound flights, where the hub operations manager can click on a specific flight and see the related data. He showed the design environment for creating the intelligent business operations process by assembling SAP and non-SAP systems using BPMN, mapping events from those systems onto the value chain phases (using BPAF where available), thereby providing visibility into those systems from the dashboard; this builds a semantic data mart inside HANA for the different scenarios to support the dashboard, but also for more in-depth analytics and optimization. They’ve also created a specification for Process Façade, an interface for unifying process platforms by integrating using BPMN, BPAF and other standards, plus their own process-based systems; at some point, expect this to open up for broader vendor use. Some nice examples of process visualization in large-scale enterprises.
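The event-to-milestone mapping is the interesting architectural bit, and easy to sketch; the phase names follow the hub scenario above, while the source systems and event types are my own invention.

```typescript
// Sketch of the event-to-milestone mapping described: raw events from
// heterogeneous systems are normalized onto value chain phases so a
// dashboard can aggregate KPIs per phase. Source systems and event types
// are illustrative.

type Phase = "arrival" | "groundHandling" | "warehouse" | "outboundBuffer" | "loading";

interface SystemEvent { source: string; type: string; shipmentId: string; at: Date; }

// Map each source system's event types onto the shared milestone model.
const milestoneMap: Record<string, Phase> = {
  "flight-ops:landed": "arrival",
  "ramp:unloaded": "groundHandling",
  "wms:stored": "warehouse",
  "wms:staged": "outboundBuffer",
  "flight-ops:loaded": "loading",
};

function toPhase(e: SystemEvent): Phase | undefined {
  return milestoneMap[`${e.source}:${e.type}`];
}

// Per-phase KPI: how many shipments have reached each milestone.
function phaseCounts(events: SystemEvent[]): Map<Phase, number> {
  const counts = new Map<Phase, number>();
  for (const e of events) {
    const phase = toPhase(e);
    if (phase) counts.set(phase, (counts.get(phase) ?? 0) + 1);
  }
  return counts;
}
```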

Dominic Greenwood of Whitestein spoke on intelligent process execution, starting by defining an intelligent process: it has experiences (acquired data), knowledge (actionable information, or analytical interpretation of acquired data), goals (adoptable intentions, or operationally-relevant behavioral directives), plans (ways to achieve goals through reusable action sequences, such as BPMN processes) and actions (the results of executing plans). He sees intelligent process execution as an imperative because of the complexity of real-world processes; processes need to dynamically adapt, and process governance needs to actively apply constraints in this shifting environment. An intelligent process controller, or reflective agent, passes through a continuous cycle of observe, comprehend, deliberate, decide, act and learn; it can also collaborate with other intelligent process controllers. He discussed a case study in transportation logistics – a massively complex version of the travelling salesman problem – where a network of multi-modal vehicles has to be optimized for delivery of goods that are moved through multiple legs to reach their destinations. This involves knowledge of the goods and their specific requirements, vehicle sensors of various types, fleet management, hub/port systems, traffic and weather, and personnel assignments. DHL in Europe is using this to manage 60,000 orders per day, allocated between 17,500 vehicles that are constantly in motion, managed by 300 dispatchers across 24 countries, with every order changing at least once while en route. The intelligent process controllers are automating many of the dispatching decisions, providing a 25-30% operational efficiency boost and a 12% reduction in transportation costs. A too-short demo that just walked through their process model to show how some of these things are assigned, but an interesting look into intelligent processes, and a nice tie-in to Harsh’s demonstration immediately preceding.
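Here’s a minimal sketch of that reflective-agent cycle in code, using an invented dispatching example; Whitestein’s actual controller is obviously far richer.

```typescript
// Sketch of the reflective-agent cycle described: observe, comprehend,
// deliberate/decide, act, learn. The stage signatures and the dispatching
// example are invented for illustration.

interface Observation { vehicleId: string; delayMinutes: number; }
interface Assessment { vehicleId: string; atRisk: boolean; }
type Decision = { action: "reassign" | "hold"; vehicleId: string };

class ReflectiveAgent {
  private riskThreshold = 30;  // adjusted over time by the learn step

  observe(raw: Observation[]): Observation[] { return raw; }

  comprehend(obs: Observation[]): Assessment[] {
    return obs.map((o) => ({
      vehicleId: o.vehicleId,
      atRisk: o.delayMinutes > this.riskThreshold,
    }));
  }

  // Deliberate over goals, then decide on actions for at-risk legs.
  decide(assessments: Assessment[]): Decision[] {
    return assessments
      .filter((a) => a.atRisk)
      .map((a): Decision => ({ action: "reassign", vehicleId: a.vehicleId }));
  }

  act(decisions: Decision[]): void {
    for (const d of decisions) console.log(`${d.action}: ${d.vehicleId}`);
  }

  // Learn: nudge the threshold based on how many interventions were needed.
  learn(decisions: Decision[]): void {
    if (decisions.length > 10) this.riskThreshold -= 1;
  }

  cycle(raw: Observation[]): void {
    const decisions = this.decide(this.comprehend(this.observe(raw)));
    this.act(decisions);
    this.learn(decisions);
  }
}
```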

Next up was Jakob Freund of camunda on BPMN everywhere; camunda provides an open-source BPM framework intended to be used by Java developers to incorporate process automation into their applications, but he’s here today to talk about bpmn.io: an open-source toolkit in JavaScript that provides a framework for developers and a BPMN web modeler, all published on GitHub. The first iteration is kicking off next week, and the web modeler will be available later this year. Unlike yesterday’s demonstrators who firmly expressed the value of no-code BPM implementations, Jakob jumped straight into code to show how to use the JavaScript classes to render BPMN XML as a graphical diagram and add annotations around the display of elements. He showed how these concepts are being used in their Cockpit process monitoring product; it could also be used to demonstrate or teach BPMN, making use of functions such as process animation. He demonstrated uploading a BPMN diagram (as XML) to the camunda community site; the site uses the JavaScript libraries to render the diagram, and allows selecting specific elements in the diagram and adding comments, which are then seen via a numeric indicator (indicating the number of comments) attached to the elements with comments. He demonstrated some of the starting functionality of the web modeler, but there’s a lot of work to do there still; once it’s released, any developer can download the code and embed the web modeler into their own applications.
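For a flavor of the developer experience, here’s a sketch of the render-and-annotate use case, assuming a viewer API along the lines of what bpmn-js eventually shipped (importXML plus an overlays module); the early releases may have differed, so treat the details as illustrative.

```typescript
// Sketch of the diagram-rendering use case, assuming an API along the lines
// of what bpmn-js later provided (importXML plus an overlays module); treat
// the details as illustrative rather than the exact 2014 API.
import BpmnViewer from "bpmn-js";

const viewer = new BpmnViewer({ container: "#canvas" });

async function showDiagram(bpmnXml: string, comments: Map<string, string[]>) {
  await viewer.importXML(bpmnXml);  // render the BPMN 2.0 XML as a diagram

  // Attach a numeric comment-count badge to each annotated element,
  // as on the camunda community site described above.
  const overlays = viewer.get("overlays");
  for (const [elementId, elementComments] of comments) {
    overlays.add(elementId, {
      position: { top: -10, right: 10 },
      html: `<span class="comment-badge">${elementComments.length}</span>`,
    });
  }
}
```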

We finished the first morning session with Keith Swenson of Fujitsu on letting go of control: we’re back on the topic of agents, which Keith initially defined as autonomous, goal-directed software that does something for you, before pointing out that that describes a lot of software today. He expanded that definition to mean something more…human-like: a personal assistant that can coordinate your communications with those of other domains. These types of agents do a lot of communication amongst themselves in a rules-based dynamic fashion, simplifying and reducing the communication that the people need to do in order to achieve their goals. The key to determining what the personal assistants should be doing is to observe emergent behavior through analytics. Keith demonstrated a healthcare scenario using Cognoscenti, an open-source adaptive case management project; a patient and several different clinicians could set goals, be assigned tasks, review documents and perform other activities centered around the patient’s care. It also allows the definition of personal assistants to do specific rules-based actions, such as cloning cases and synchronizing documents between federated environments (since both local and cloud environments may be used by different participants in the same case), accepting tasks, and more; copying between environments is essential so that each participant can have their information within their own domain of control, but with the ability to synchronize content and tasks. The personal assistants are pretty simple at this point, but the concept is that they are helping to coordinate communications, and the communications and documents are all distributed via encrypted channels, making them safer than email. A lot of similarities with Dominic’s intelligent process controllers, but on a more human scale. As many thousands of these personal assistant interactions occur, patterns will begin to emerge of the process flows between the people involved, which can then be used to build more intelligence into the agents and the flows.
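A sketch of what one of those rules-based personal assistants might do, using an invented data model rather than Cognoscenti’s actual one: watch a case in one environment and mirror newer document versions into a federated copy.

```typescript
// Sketch of a rules-based personal assistant like those in the demo: it
// mirrors new or updated documents from one environment's copy of a case
// into a federated copy, so each participant keeps data in their own
// domain of control. Names and shapes are invented.

interface Doc { name: string; version: number; content: string; }
interface CaseCopy { env: "local" | "cloud"; docs: Map<string, Doc>; }

// Synchronize documents from source to target when missing or out of date.
function syncDocuments(source: CaseCopy, target: CaseCopy): void {
  for (const [name, doc] of source.docs) {
    const existing = target.docs.get(name);
    if (!existing || existing.version < doc.version) {
      target.docs.set(name, { ...doc });  // copy into the participant's own domain
    }
  }
}
```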

bpmNEXT 2014 Wednesday Afternoon 2: Unstructured Processes

We’re in the Wednesday home stretch; this session didn’t have a specific theme but it seemed to mostly deal with unstructured processes and event-driven systems.

The session started with John Reynolds and Amy Dickson of IBM on blending structured flow and event-condition-action patterns within process models. John showed how they are modeling ad hoc activities using BPMN (rather than CMMN): basically, disconnected activities can have precondition events and expressions specified as to when and how they are triggered, be identified as optional or mandatory, and have their behavior defined. It’s not completely standard BPMN, but uses a relatively small number of extensions to indicate how the activity is started and whether it is optional or required. The user sees activities with different visual indicators to show which are required or optional, and whether an activity is still waiting for a precondition. This exposes the natural capabilities of the execution engine as an event handling engine; BPMN just provides a model for what happens next after an action occurs, as well as handling the flow model portions of the process. They’re looking at adding milestones and other constructs; this is an early pre-release version, and I expect that we’ll see some of these ideas rolling into their products over the months to come. An interesting way to combine process flows and ad hoc activities in the same (pre-defined) process while hiding some of the complexity of events from the users; also interesting in that this indicates some of IBM’s direction for handling ad hoc cases in BPM.
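In event-condition-action terms, the runtime behavior they showed boils down to something like this sketch (invented shapes, not IBM BPM’s internal model): each incoming event re-evaluates the preconditions of waiting ad hoc activities.

```typescript
// Minimal event-condition-action sketch of the behavior shown: when an event
// arrives, every waiting ad hoc activity's precondition is re-evaluated, and
// those that are satisfied either auto-start or become startable by the user.
// Illustrative only.

interface AdHocActivity {
  id: string;
  mandatory: boolean;
  autoStart: boolean;
  precondition: (caseData: Record<string, unknown>) => boolean;
  state: "waiting" | "ready" | "running";
}

function onEvent(activities: AdHocActivity[], caseData: Record<string, unknown>): void {
  for (const a of activities) {
    if (a.state !== "waiting" || !a.precondition(caseData)) continue;
    // Precondition satisfied: auto-start, or surface to the user as startable.
    a.state = a.autoStart ? "running" : "ready";
  }
}
```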

Ashok Anand and R.V.S. Mani of Inswit presented their beta appiyo “business response platform”, an application development platform for small, simple BPM apps that can interconnect with social media such as Facebook; an overly-short demo followed an overly-long presentation, so it was difficult to grasp much of the capability.

We finished the day with Jason Bloomberg of EnterpriseWeb discussing agent-oriented architecture for cross-process governance: a “style of EA that drives business agility by leveraging policy-based, data-driven intelligent agents”. They call their intelligent agent SmartAlex; it’s like Siri for the enterprise, dynamically connecting people and content at the right time in a goal-driven manner rather than with pre-defined processes. Every step is just an event that calls SmartAlex; SmartAlex interprets models, evaluates and applies policies and rules, then delivers results or makes recommendations using a custom interface and payload depending on the context. Agents can not only coordinate local processes, but also track what’s happening in all related processes across an enterprise to provide overall governance and support integrated functions. EnterpriseWeb isn’t a BPM tool; it’s a tool for building tools, including workflows. Bill Malyk joined remotely to do the demo, based on detecting conflicts of interest declaratively: he showed creating an application related to cases in the system, and declaring that potential conflict-of-interest cases are those with relationships between the people involved in a case. This immediately identified existing cases where there is a potential conflict of interest, and allowed navigation through the graph that links the cases and the criteria. He then demonstrated creating a process related to the application, which can then run flow-oriented processes based on potential conflicts of interest found using the declarative logic specified earlier. Some powerful capabilities for declarative, agent-based applications that take advantage of a variety of data sources and fact models, with greater flexibility and ease of use than complex event processing platforms.
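The declarative rule from the demo is easy to sketch with an invented data model: a case is flagged when any two of its participants are related, and because the rule is a query rather than a process step, it applies to all existing cases the moment it’s declared.

```typescript
// Sketch of the declarative conflict-of-interest rule from the demo: a case
// is flagged when two people involved in it are related to each other.
// The data model is invented for illustration.

interface Case { id: string; participants: string[]; }
interface Relationship { a: string; b: string; kind: string; }  // e.g. "family", "business"

function related(x: string, y: string, rels: Relationship[]): boolean {
  return rels.some((r) => (r.a === x && r.b === y) || (r.a === y && r.b === x));
}

// Declarative query: evaluated over all existing cases at once, so adding
// the rule immediately surfaces current conflicts, as in the demo.
function conflictsOfInterest(cases: Case[], rels: Relationship[]): Case[] {
  return cases.filter((c) =>
    c.participants.some((p, i) =>
      c.participants.slice(i + 1).some((q) => related(p, q, rels))
    )
  );
}
```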

My brain is full, so it must be time for dinner and another evening of drinks and conversation; I’ll be back tomorrow with another full morning and half afternoon of sessions.