bpmNEXT 2014 Wrapup And Best In Show

I couldn’t force myself to write about the last two sessions of bpmNEXT: the first was a completely incomprehensible (to me) demo, and the second spent half of the time on slides and half on a demo that didn’t inspire me enough to actually put my hands on the keyboard. Maybe it’s just conference fatigue after two full days of this.

However, we did get a link to the Google Hangout recording of the BPMN model interchange demo from yesterday (be sure to set it to HD or you’ll miss a lot of the screen detail).

We had a final wrapup address from Bruce Silver, and he announced our vote for the best in show: Stefan Andreasen of Kapow – congrats!

I’m headed home soon to finish my month of travel; I’ll be Toronto-based until the end of April when IBM Impact rolls around.

bpmNEXT 2014 Thursday Session 2: Decisions And Flexibility

In the second half of the morning, we started with James Taylor of Decision Management Solutions showing how to use decision modeling for simpler, smarter, more agile processes. He showed what a process model looks like in the absence of externalized decisions and rules: it’s a mess of gateways and branches that basically creates a decision tree in BPMN. A cleaner solution is to externalize the decisions so that they are called as a business rules activity from the process model, but the usual challenge is that the decision logic is opaque from the viewpoint of the process modeler. James demonstrated how the DecisionsFirst modeler can be used to model decisions using the Decision Model and Notation standard, then link a read-only view of that to a process model (which he created in Signavio) so that the process modeler can see the logic behind the decision as if it were a callable subprocess. He stepped through the notation within a decision called from a loan origination process, then took us into the full DecisionsFirst modeler to add another decision to the diagram. The interesting thing about decision modeling, which is exploited in the tool, is that it is based on firmer notions of reusability of data sources, decisions and other objects than we see in process models: although reusability can definitely exist in process models, the modeling tools often don’t support it well. DecisionsFirst isn’t a rules/decision engine itself: it’s a modeling environment where decisions are assembled from the rules and decisions in other environments, including external engines, spreadsheet-based decision tables, or knowledge sources describing the decision. It also allows linking decisions to the processes that invoke them, as well as to objectives and organizational context; since this is a collaborative authoring environment, it can also include comments from other designers.
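
To make the externalization concrete, here’s a minimal sketch – plain JavaScript, not DecisionsFirst or an actual DMN engine – of a loan eligibility decision pulled out of the gateway spaghetti into a DMN-style decision table; all rule values and names are invented for illustration.

```javascript
// Hypothetical decision table for a loan origination decision: each rule maps
// input conditions to an outcome, in the spirit of a DMN decision table.
const eligibilityRules = [
  { when: (a) => a.creditScore < 580, result: "DECLINE" },
  { when: (a) => a.creditScore >= 580 && a.debtRatio > 0.43, result: "REFER" },
  { when: (a) => a.creditScore >= 580 && a.debtRatio <= 0.43, result: "APPROVE" },
];

// The "business rules activity" in the process model reduces to a single call,
// so the diagram stays clean while the logic remains inspectable.
function decideEligibility(applicant) {
  const rule = eligibilityRules.find((r) => r.when(applicant));
  return rule ? rule.result : "REFER"; // default outcome if no rule fires
}

console.log(decideEligibility({ creditScore: 640, debtRatio: 0.3 })); // APPROVE
```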

François Chevresson-Aubain and Aurélien Pupier of Bonitasoft were up next to show how to build flexibility into deployed processes through a few simple but powerful features. First, adding collaboration tasks at runtime, so that a user working on a pre-defined step can pull in other users on the spot even if collaboration wasn’t built into the process model. Second, process model parameters can be changed (by an administrator) at runtime, which will impact all running processes based on that model: the situation demonstrated was changing an external service connector when the service call failed, then replaying the tasks that failed on that service call. Both of these features are intended to address dynamic environments where the situation at runtime may differ from that at design time, and to show how to adjust both manual and automated tasks to accommodate those differences.

We finished the morning with Robert Shapiro of Process Analytica on improving resource utilization and productivity using his Optima workbench. Optima is a tool for a serious analyst – likely with some amount of statistical or data science background – to import a process model and runtime data, set optimization parameters (e.g., reduce resource idleness without unduly impacting cycle time), simulate the process, analyze the results, and determine how to best allocate resources in order to optimize relative to the parameters. Although it’s a complex environment, it provides a lot of visualization of the analytics and optimization; Robert actually encourages “eyeballing” the charts and playing around with parameters to fine-tune the process, although he has a great deal more experience at that than the average user. There are a number of analytical tools that can be applied to the data, such as critical path modeling, and financial parameters to optimize revenues and costs. It can also do quite a bit of process mining based on event log inputs in XES format, including deriving a BPMN process model and data correlation based on the event logs; this type of detailed offline analysis could be applied with the data captured and visualized through an intelligent business operations dashboard for advanced process optimization.
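
Since Optima ingests event logs in XES format, a toy sketch of that input and how step durations fall out of it may help; the log snippet is invented, the regex-based parsing is purely illustrative, and a real process mining tool does far more (correlation, model discovery, statistics).

```javascript
// Minimal XES-like event log: traces contain events, each with an activity
// name (concept:name) and a timestamp (time:timestamp).
const xes = `
<log><trace>
  <event><string key="concept:name" value="Review"/>
         <date key="time:timestamp" value="2014-03-26T09:00:00Z"/></event>
  <event><string key="concept:name" value="Approve"/>
         <date key="time:timestamp" value="2014-03-26T11:30:00Z"/></event>
</trace></log>`;

// Pull (activity, timestamp) pairs in document order.
const events = [...xes.matchAll(
  /key="concept:name" value="([^"]+)"[\s\S]*?key="time:timestamp" value="([^"]+)"/g,
)].map(([, name, ts]) => ({ name, ts: new Date(ts) }));

// Elapsed time between consecutive events approximates each step's duration --
// the raw material for idleness and cycle time analysis.
for (let i = 1; i < events.length; i++) {
  const hours = (events[i].ts - events[i - 1].ts) / 3.6e6;
  console.log(`${events[i - 1].name} -> ${events[i].name}: ${hours}h`);
}
```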

We have one more short session after lunch, then best in show voting before bpmNEXT wraps up for another year.

bpmNEXT 2014 Thursday Session 1: Intelligence And A Bit More BPMN

Harsh Jegadeesan of SAP set the dress code bar high by kicking off the Thursday demos in a suit jacket, although I did see Thomas Volmering and Patrick Schmidt straightening his collar before the start. He also set a high bar for the day’s demos by showing how to illuminate business operations with intelligent process intelligence. He discussed a scenario of a logistics hub (such as Amazon’s), and the specific challenges of the hub operations manager who has to deal with inbound and outbound flights, and sorting all of the shipments between them throughout the day. Better visibility into the operations across multiple systems allows problems to be detected and resolved while they are still developing by reallocating the workforce. Harsh showed a HANA-based hub operations dashboard, where the milestones for shipments demarcate the phases of the value chain: from arrival to ground handling to warehouse to outbound buffer to loading and takeoff. Real-time information is pulled from each of the systems involved, and KPIs are displayed; drill-downs show lower-level aggregate or even individual instance data to determine what is causing missed KPIs – in the demo, shipments from certain other hubs are not being unloaded quickly enough. But more than just a dashboard, this allows the hub operations manager to add a task directly in the context of the problem and assign it (via an @mention) to someone else, for example, to direct more trucks to unload the shipments. The dashboard can also make recommendations, such as changing the flights for specific shipments to improve the overall flow and KPIs. He showed a flight map view of all inbound and outbound flights, where the hub operations manager can click on a specific flight and see the related data. He showed the design environment for creating the intelligent business operations process by assembling SAP and non-SAP systems using BPMN, mapping events from those systems onto the value chain phases (using BPAF where available), thereby providing visibility into those systems from the dashboard; this builds a semantic data mart inside HANA for the different scenarios, supporting not only the dashboard but also more in-depth analytics and optimization. They’ve also created a specification for Process Façade, an interface for unifying process platforms by integrating using BPMN, BPAF and other standards, plus their own process-based systems; at some point, expect this to open up for broader vendor use. He closed with some nice case studies of process visualization in large-scale enterprises.
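
As a rough sketch of the event-to-milestone mapping that drives this kind of dashboard – all system names, event types and phases invented, with no claim to represent SAP’s actual implementation – normalized events (BPAF-style where available) fold into a shipment’s current value chain phase:

```javascript
// Map normalized event types from the various hub systems onto phases.
const phaseForEvent = {
  "flight-ops:arrival":     "Arrival",
  "ground:unload-complete": "Ground Handling",
  "wms:putaway":            "Warehouse",
  "wms:staged":             "Outbound Buffer",
  "flight-ops:departure":   "Loading & Takeoff",
};

// Fold a shipment's raw event stream into its latest known milestone.
function currentPhase(events) {
  const known = events.filter((e) => phaseForEvent[e.type]);
  return known.length ? phaseForEvent[known[known.length - 1].type] : "Unknown";
}

console.log(currentPhase([
  { type: "flight-ops:arrival", shipment: "S-100", at: "2014-03-26T08:00Z" },
  { type: "ground:unload-complete", shipment: "S-100", at: "2014-03-26T09:10Z" },
])); // -> "Ground Handling"
```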

Dominic Greenwood of Whitestein was next, on intelligent process execution; he started by defining an intelligent process: it has experiences (acquired data), knowledge (actionable information, or analytical interpretation of acquired data), goals (adoptable intentions, or operationally-relevant behavioral directives), plans (ways to achieve goals through reusable action sequences, such as BPMN processes) and actions (the result of executing plans). He sees intelligent process execution as an imperative because of the complexity of real-world processes; processes need to dynamically adapt, and process governance needs to actively apply constraints in this shifting environment. An intelligent process controller, or reflective agent, passes through a continuous cycle of observe, comprehend, deliberate, decide, act and learn; it can also collaborate with other intelligent process controllers. He discussed a case study in transportation logistics – a massively complex version of the travelling salesman problem – where a network of multi-modal vehicles has to be optimized for delivery of goods that are moved through multiple legs to reach their destinations. This involves knowledge of the goods and their specific requirements, vehicle sensors of various types, fleet management, hub/port systems, traffic and weather, and personnel assignments. DHL in Europe is using this to manage 60,000 orders per day, allocated between 17,500 vehicles that are constantly in motion, managed by 300 dispatchers across 24 countries, with every order changing at least once while en route. The intelligent process controllers are automating many of the dispatching decisions, providing a 25-30% operational efficiency boost and a 12% reduction in transportation costs. The demo was too short – just a walk through their process model to show how some of these things are assigned – but this was an interesting look into intelligent processes, and a nice tie-in to Harsh’s demonstration immediately preceding.
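
The controller cycle is easy to caricature in code; here’s a minimal sketch of the observe-comprehend-deliberate-decide-act-learn loop reduced to a toy dispatching decision, with domain logic that is invented and vastly simpler than Whitestein’s controllers:

```javascript
// A reflective agent: each step observes sensor data, interprets it against
// a goal, weighs candidate plans, records the experience, and acts.
class ProcessController {
  constructor(goal) { this.goal = goal; this.experience = []; }

  step(sensorData) {
    const facts = this.comprehend(sensorData);   // observe -> knowledge
    const options = this.deliberate(facts);      // candidate plans
    const plan = this.decide(options);           // choose against the goal
    this.experience.push({ facts, plan });       // learn for later cycles
    return plan;                                 // act
  }

  comprehend(data) { return { late: data.etaMinutes > this.goal.maxEtaMinutes }; }
  deliberate(facts) { return facts.late ? ["reroute", "reassign"] : ["continue"]; }
  decide(options) { return options[0]; } // trivially pick the first plan
}

const agent = new ProcessController({ maxEtaMinutes: 90 });
console.log(agent.step({ etaMinutes: 120 })); // -> "reroute"
```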

Next up was Jakob Freund of camunda on BPMN everywhere; camunda provides an open-source BPM framework intended to be used by Java developers to incorporate process automation into their applications, but he’s here today to talk about bpmn.io: an open-source toolkit in JavaScript that provides a framework for developers and a BPMN web modeler, all published on GitHub. The first iteration is kicking off next week, and the web modeler will be available later this year. Unlike yesterday’s demonstrators who firmly expressed the value of no-code BPM implementations, Jakob jumped straight into code to show how to use the JavaScript classes to render BPMN XML as a graphical diagram and add annotations around the display of elements. He showed how these concepts are being used in their Cockpit process monitoring product; it could also be used to demonstrate or teach BPMN, making use of functions such as process animation. He demonstrated uploading a BPMN diagram (as XML) to their camunda community site; the site uses the JavaScript libraries to render the diagram, and allows selecting specific elements in the diagram and adding comments, which then appear as a numeric indicator of the comment count attached to each commented element. He demonstrated some of the starting functionality of the web modeler, but there’s a lot of work to do there still; once it’s released, any developer can download the code and embed that web modeler into their own applications.
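
The rendering pattern Jakob showed looks roughly like this; since the toolkit was only just launching at the time, this sketch follows the API as it later stabilized in the bpmn-js library, and the container and element ids are hypothetical:

```javascript
import BpmnViewer from "bpmn-js";

// Render BPMN 2.0 XML into an SVG diagram inside a page element.
const viewer = new BpmnViewer({ container: "#canvas" });

async function render(bpmnXML) {
  await viewer.importXML(bpmnXML); // parse the XML and draw the diagram

  // Hang an annotation off a diagram element -- e.g., a comment-count badge
  // like the camunda community site's; "Task_1" must be an id in the XML.
  viewer.get("overlays").add("Task_1", {
    position: { top: -10, right: 10 },
    html: '<div class="comment-badge">3</div>',
  });
}
```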

We finished the first morning session with Keith Swenson of Fujitsu on letting go of control: we’re back on the topic of agents, which Keith initially defined as autonomous, goal-directed software that does something for you, before pointing out that that describes a lot of software today. He expanded the definition to mean something more… human-like: a personal assistant that can coordinate your communications with those of other domains. These types of agents do a lot of communication amongst themselves in a rules-based dynamic fashion, simplifying and reducing the communication that people need to do in order to achieve their goals. The key to determining what the personal assistants should be doing is to observe emergent behavior through analytics. Keith demonstrated a healthcare scenario using Cognoscenti, an open-source adaptive case management project; a patient and several different clinicians could set goals, be assigned tasks, review documents and perform other activities centered around the patient’s care. It also allows the definition of personal assistants to do specific rules-based actions, such as cloning cases and synchronizing documents between federated environments (since both local and cloud environments may be used by different participants in the same case), accepting tasks, and more; copying between environments is essential so that each participant can have their information within their own domain of control, but with the ability to synchronize content and tasks. The personal assistants are pretty simple at this point, but the concept is that they are helping to coordinate communications, and the communications and documents are all distributed via encrypted channels, so are safer than email. There are a lot of similarities with Dominic’s intelligent process controllers, but on a more human scale. As many thousands of these personal assistant interactions occur, patterns will begin to emerge of the process flows between the people involved, which can then be used to build more intelligence into the agents and the flows.

bpmNEXT 2014 Wednesday Afternoon 2: Unstructured Processes

We’re in the Wednesday home stretch; this session didn’t have a specific theme but it seemed to mostly deal with unstructured processes and event-driven systems.

The session started with John Reynolds and Amy Dickson of IBM on blending structured flow and event-condition-action patterns within process models. John showed how they are modeling ad hoc activities using BPMN (rather than CMMN): basically, disconnected activities can have precondition events and expressions that specify when and how they are triggered, and can be identified as optional or mandatory. It’s not completely standard BPMN, but uses a relatively small number of extensions to indicate how the activity is started and whether it is optional or required. The user sees activities with different visual indicators to show which are required or optional, and whether an activity is still waiting for a precondition. This exposes the natural capabilities of the execution engine as an event handling engine; BPMN just provides a model for what happens next after an action occurs, as well as handling the flow model portions of the process. They’re looking at adding milestones and other constructs; this is an early pre-release version and I expect that we’ll see some of these ideas rolling into their products over the months to come. An interesting way to combine process flows and ad hoc activities in the same (pre-defined) process while hiding some of the complexity of events from the users; also interesting in that this indicates some of IBM’s direction for handling ad hoc cases in BPM.
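
Since IBM’s BPMN extensions weren’t public, here is a speculative sketch of the underlying event-condition-action pattern: each ad hoc activity carries a precondition and a required/optional flag, and the engine enables it once the precondition holds.

```javascript
// Disconnected (ad hoc) activities with invented preconditions over the
// case state; the engine re-evaluates these as events update the state.
const adHocActivities = [
  { id: "requestAppraisal", required: false, precondition: (s) => s.docsReceived },
  { id: "finalReview",      required: true,  precondition: (s) => s.appraisalDone },
];

// The activities currently offered to the user; the rest would display a
// "waiting for precondition" indicator.
function enabledActivities(caseState) {
  return adHocActivities.filter((a) => a.precondition(caseState));
}

console.log(enabledActivities({ docsReceived: true, appraisalDone: false }));
// -> [{ id: "requestAppraisal", ... }]
```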

Ashok Anand and R.V.S. Mani of Inswit presented their beta appiyo “business response platform”, which is an application development platform for small, simple BPM apps that can interconnect with social media such as Facebook; unfortunately, an overly-short demo followed an overly-long presentation, making it difficult to grasp much of the capability.

We finished the day with Jason Bloomberg of EnterpriseWeb discussing agent-oriented architecture for cross-process governance: a “style of EA that drives business agility by leveraging policy-based, data-driven intelligent agents”. They call their intelligent agent SmartAlex; it’s like Siri for the enterprise, dynamically connecting people and content at the right time in a goal-driven manner rather than with pre-defined processes. Every step is just an event that calls SmartAlex; SmartAlex interprets models, evaluates and applies policies and rules, then delivers results or makes recommendations using a custom interface and payload depending on the context. Agents can not only coordinate local processes, but also track what’s happening in all related processes across an enterprise to provide overall governance and support integrated functions. EnterpriseWeb isn’t a BPM tool; it’s a tool for building tools, including workflows. Bill Malyk joined remotely to do a demo of detecting conflicts of interest declaratively: he showed creating an application related to cases in the system, then declaring that potential conflict-of-interest cases are those with relationships between the people involved in a case. This immediately identified existing cases where there is a potential conflict of interest, and allowed navigation through the graph that links the cases and the criteria. He then demonstrated creating a process related to the application, which can run flow-oriented processes based on potential conflicts of interest found using the declarative logic specified earlier. Some powerful capabilities for declarative, agent-based applications that take advantage of a variety of data sources and fact models, with greater flexibility and ease of use than complex event processing platforms.
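
A toy sketch of that declarative style – with invented data shapes, and no relation to EnterpriseWeb’s actual graph model – where you state what a conflict is and the engine finds the matching cases:

```javascript
// Known person-to-person relationships, stored as unordered pairs.
const relationships = new Set(["alice|bob"]);
const related = (a, b) =>
  relationships.has(`${a}|${b}`) || relationships.has(`${b}|${a}`);

// Declarative rule: a case is a potential conflict of interest when any two
// of its participants are related.
const potentialConflicts = (cases) =>
  cases.filter((c) =>
    c.participants.some((a, i) =>
      c.participants.slice(i + 1).some((b) => related(a, b))));

console.log(potentialConflicts([
  { id: "case-7", participants: ["alice", "bob", "carol"] },
  { id: "case-9", participants: ["dave", "erin"] },
])); // -> the case-7 record only
```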

My brain is full, so it must be time for dinner and another evening of drinks and conversation; I’ll be back tomorrow with another full morning and half afternoon of sessions.

bpmNEXT 2014 Wednesday Afternoon 1: Mo’ Models

Denis Gagne of Trisotech was back after lunch at bpmNEXT demonstrating socializing process change with their BPMN web modeler. He showed their process animation feature, which allows you to follow the flow through a process and see what happens at each step, and view rich media that has been attached at any given step to explain that step. He showed a process for an Amazon order, where each step had a slideshow or video attached to show the actual work that was being performed at that step; the tool supports YouTube, Slideshare, Dropbox and a few others natively, plus any URL as an attachment to any element in the process. The animated process can be referenced by a URL, allowing it to be easily distributed and socialized. This provides a way for people to learn more about the process, and can be used as an employee training tool or a customer experience enhancement. Even without the rich media enhancements, the process animation can be used to debug processes and find BPMN logical errors (e.g., deadlocks, orphan branches) by allowing the designer to walk through the process and see how the tokens are processed through the model – most modeling tools only check that the BPMN is syntactically correct, not for more complex logical errors that can result in unexpected and unwanted scenarios. Note that this is different from process simulation (which they also offer), which is typically used to estimate performance based on aggregate instances.
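
As an illustration of the kind of logical error a token walk-through can surface where a syntax check can’t, here’s a toy reachability test for orphan branches; the model encoding is invented, and a deadlock check would additionally need token-state tracking at the gateways:

```javascript
// Sequence flows as [from, to] pairs; "archive" has no inbound path from the
// start event, so no token can ever reach it -- an orphan branch.
const flows = [
  ["start", "review"], ["review", "approve"], ["approve", "end"],
  ["archive", "end"],
];

function orphanNodes(flows, start) {
  const out = new Map(), nodes = new Set();
  for (const [from, to] of flows) {
    nodes.add(from).add(to);
    if (!out.has(from)) out.set(from, []);
    out.get(from).push(to);
  }
  // Breadth-first token walk from the start event.
  const seen = new Set([start]), queue = [start];
  while (queue.length) {
    for (const next of out.get(queue.shift()) ?? []) {
      if (!seen.has(next)) { seen.add(next); queue.push(next); }
    }
  }
  return [...nodes].filter((n) => !seen.has(n));
}

console.log(orphanNodes(flows, "start")); // -> ["archive"]
```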

Bruce Silver took a break from moderating to do a demo together with Stephan Fischli and Antonio Palumbo of itp commerce on wizard-based generation of “good BPMN” that they’ve done through their BPMessentials collaboration for BPMN training and certification. Bruce’s book BPMN Method and Style as well as his courses attempt to teach good BPMN, where the process logic is evident from the printed diagram in spite of things that can tend to confuse a reader, such as hierarchical modeling forms. He uses a top-down methodology where you identify the start and end states of a process instance, then decompose the process into 6-10 steps where each is an activity aligned with the process instance (i.e., no multi-instance activities), and enumerate the possible end states of each activity if there is more than one, so that end states within subprocesses can be matched to gateways that immediately follow the subprocesses. This all takes a bit of a developer’s mindset that’s typically not seen in business analysts who might be creating BPMN models, meaning that we can still end up with spaghetti process models even in BPMN. Bruce walked through an order-to-cash scenario, then Stephan and Antonio took over to demonstrate how their tool creates a BPMN model based on a wizard that walks through the steps of the BPMN method and style: first the process start and (one or more) end states; then a list of the major steps, where each is named, the end states enumerated and (optionally) the performer identified; then the activity-end state pairs are listed so that the user can specify the target (following step), which effectively creates the process flow diagram; then, each activity can be expanded as a subprocess by listing the child activities and the end states; finally, the message flows and lanes are specified by stating which activities have incoming and outgoing message flows. The wizard then creates the BPMN process model in the itp commerce Visio tool where all of the style rules are enforced. Without doubt, this creates better BPMN, although specifying a branching process model via a list of activities and end states might not be much more obvious than creating the process model directly. I know that the itp commerce tool and some other BPMN modelers can also check a BPMN model for violations of the style rules; I assume that detecting and fixing the rule violations in a model is just another way of achieving the same goal.
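
The core transformation in such a wizard can be sketched simply: each activity is listed with its end states, each end state is mapped to a target step, and the sequence flows (plus gateways, where an activity has multiple end states) fall out mechanically. The step names and encoding below are invented.

```javascript
// Wizard input: activities with their end states and the step each leads to.
const steps = [
  { id: "checkCredit",    endStates: { approved: "fulfillOrder", declined: "notifyCustomer" } },
  { id: "fulfillOrder",   endStates: { done: "END" } },
  { id: "notifyCustomer", endStates: { done: "END" } },
];

// Each (activity, end state) pair becomes a sequence flow; activities with
// more than one end state get an exclusive gateway immediately after them,
// per the method-and-style matching of end states to gateways.
function toFlows(steps) {
  const flows = [];
  for (const s of steps) {
    const outcomes = Object.entries(s.endStates);
    const source = outcomes.length > 1 ? `${s.id}_gateway` : s.id;
    if (outcomes.length > 1) flows.push({ from: s.id, to: source });
    for (const [state, target] of outcomes)
      flows.push({ from: source, to: target, label: state });
  }
  return flows;
}

console.log(toFlows(steps));
```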

Last up before the afternoon break was Gero Decker of Signavio to demonstrate combining process modeling and enterprise architecture. Signavio’s only product is their process modeler – used to model, collaborate, publish and implement models – which means that they typically deal with process designers and process centers of excellence. However, they are finding that they are now running into EA modelers as they start to move into process architecture and governance, and application owners for application lifecycle management. EA modelers have to deal with the issue of whether to use a unified tool with a single object repository for all modeling and unified model governance, or multiple best-of-breed tools where metamodels can be synchronized and may be slaved between tools. Signavio is pushing the second alternative, where their tool integrates with or overlays other tools such as SAP Solution Manager and leanIX. Signavio has added ArchiMate open standard enterprise architecture model types to their tool for EA modeling, creating linkages and tracing from ArchiMate objects to BPMN models. Gero demonstrated what the ArchiMate models look like in Signavio, then how processes in leanIX can be directly linked to Signavio process models, as well as having applications from the EA portfolio available as performers to specify in a Signavio process model. Process models created in Signavio that use applications from the portfolio then show up (via automated synchronization) in leanIX as references to those applications. He also showed an integration with Effektif for approving changes to the process model in Signavio, comparing the before and after versions of the flow, since there is a pluggable connector to Signavio from Effektif processes. Connections to other tools could be built using the Signavio REST API. Nice integration between process models and application portfolio models in separate tools, as well as the model approval workflow.

bpmNEXT 2014: BPMN MIWG Demo

The BPMN Model Interchange Working Group is all about (as you might guess from the name) interchanging BPMN models between different vendors’ products: something that OMG promised with the BPMN standard, but which never actually worked out of the box due to contradictions in the standard and misinterpretations by some vendors. To finish off Wednesday morning at bpmNEXT, we have a live demo involving 12 different tools with participants in different locations, with Denis Gagne of Trisotech (who chairs the working group) and Falko Menge of camunda (who heads up the test automation subgroup) on the stage, a few others here on the sidelines, some at the OMG meeting in Reston, and some in their offices in France and Poland.

To start, different lanes of the process were designed by four different people on IBM Blueworks Live, Activiti, camunda and W4; each then exported their process models and saved them to Dropbox. Denis switched back and forth between the different screens (they were all on a Google Hangout) to show us what was happening as they proceeded, and we could see the notifications from Dropbox as the different files were added. In the second stage, Bonitasoft was used to augment the Blueworks Live model, itp-commerce edited the Activiti model, and Signavio edited the camunda model. In the third stage, ADONIS was used to merge the lanes created in several of the models (I lost track of which ones) into a single process model, and Yaoqiang was used to merge the Signavio and camunda models. Then, the Trisotech Visio BPMN modeler was used to assemble the ADONIS and Yaoqiang models into the final model with multiple pools. At the end, the final model was imported into a number of different tools: the Trisotech web modeler, the Oracle BPM modeler, the bpmn.io environment from camunda, and directly into the W4 execution engine (without passing through a modeling environment). Wow.

The files exchanged were BPMN XML files, and the only limitation on which tool could be used at which stage was that some support only a single pool, so they had to be used in the earlier stages where each tool was modeling a single lane or pool. This is how BPMN was supposed to work, but the MIWG has found a number of inconsistencies in the standard, as well as some issues with the vendors’ tools that had to be corrected.

They have developed a number of test cases that cover the descriptive and analytic classes within BPMN, and automated tools to test the outcome of different vendors’ modelers. Over 20 BPMN modelers have been tested for import, export and roundtrip capabilities; if you’re a BPMS vendor supporting BPMN 2.0 (or claiming to), you should be on this list because there are a lot of us who just aren’t going to write our own XSLT to translate your models into something that can be read by another tool. If you’re a process designer using a BPMS, shame your vendor into participating because it creates a more flexible and inclusive environment for your design and modeling efforts.
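
In the spirit of that test automation – though the real MIWG suite does far deeper structural comparison – a crude roundtrip check might just compare element inventories between the original BPMN XML and a tool’s re-export:

```javascript
// Count named BPMN flow elements in an XML string, namespace prefix or not.
function elementInventory(bpmnXML) {
  const counts = {};
  const pattern =
    /<(?:\w+:)?(task|userTask|exclusiveGateway|startEvent|endEvent|sequenceFlow)\b/g;
  for (const [, tag] of bpmnXML.matchAll(pattern)) {
    counts[tag] = (counts[tag] ?? 0) + 1;
  }
  return counts;
}

// Any tag listed in the result was lost or invented during the roundtrip.
function roundtripDiff(original, reexported) {
  const a = elementInventory(original), b = elementInventory(reexported);
  return Object.keys({ ...a, ...b }).filter((t) => (a[t] ?? 0) !== (b[t] ?? 0));
}

// Hypothetical inputs: the re-export grew an extra end event.
console.log(roundtripDiff('<bpmn:task id="t1"/>', "<task/><endEvent/>"));
// -> ["endEvent"]
```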

This is hugely valuable work that they’re doing in the working group; note that you don’t have to be an OMG member to get involved, and the BPMN MIWG would love to have others join in to help make this work even better.

We’re off for lunch and a break now, then back for six more sessions this afternoon. Did I mention how awesome bpmNEXT is?

bpmNEXT 2014 Wednesday Morning: Cloud, Synthetic APIs and Models

I’m not going to say anything about last night, but it’s a bit of a subdued crowd here this morning at bpmNEXT.

We started the day with Tom Baeyens of Effektif talking about cloud workflow simplified. I reviewed Effektif in January at the time of launch and liked the simple and accessible capabilities that it offers; Tom’s premise is that BPM is just as useful as email, and it needs to be just as simple to use as email so that we are not reliant on a handful of power users inside an organization to make it work. To do this, we need to strip out features rather than add features, and reduce the barriers to trying it out by offering it in the cloud. Inspired by Trello (simple task management) and IFTTT (simple cloud integration, which basically boils down every process to a trigger and an action), Effektif brings personal DIY workflow to the enterprise while also providing a bridge to enterprise process management through layers of functionality. Individual users can get started building their own simple workflows to automate their day-to-day tasks, then more technical resources can add functionality to turn these into fully-integrated business processes. Tom gave a demo of Effektif, starting with creating a checklist of items to be completed, with the ability to add comments, include participants and add attachments to the case. There have been a few changes since my review: you can use Markdown to format comments (I think that understanding of Markdown is very uncommon in business, so it may not be as well-adopted as, for example, a TinyMCE formatted text field); cases can now be started by a form as well as manually or via email; and Google Drive support is emerging to support usage patterns such as storing an email attachment when the email is used to instantiate the case. He also talked about some roadmap items, such as migrating case instances from one version of a process definition to another.

Next up was Stefan Andreasen of Kapow (now part of Kofax) on automation of manual processes with synthetic APIs – I’m happy for the opportunity to see this because I missed seeing anything about Kapow during my too-short trip to the Kofax Transform conference a couple of weeks ago. He walked through a scenario of a Ferrari dealership that looks up SEC filings to see who sold their stock options lately (hence has some ready cash to spend on a Ferrari), narrows that down with Bloomberg data on age, salary and region to find some pre-qualified sales leads, then loads them into Salesforce. Manually, this would be an overwhelming task, but Kapow can create synthetic APIs on top of each of these sites/apps to allow for data extraction and manipulation, then run those on a pre-set schedule. He started with a “Kapplet” (applications for business analysts) that extracts the SEC filing data, allows easy manual filtering by criteria such as filing amount and age, then selects records for committal to Salesforce. The idea is that there are data sources out there that people don’t think of as data sources, and many web applications that don’t integrate easily with each other, so people end up manually copying and pasting (or re-keying) information from one screen to another; Kapow provides the modern-day equivalent of screen-scraping that taps into the presentation logic and data (not the physical layout or style, hence less likely to break when the website changes) of any web app to add an API, using a graphical visual flow/rules editor. Building by example, elements on a web page are visually tagged as being list items (requiring a loop), data elements to extract, and much more. It can automate a number of other things as well: Stefan showed how a local directory of cryptically-named files can be renamed to their actual titles based on a table of contents HTML document; this is very common for conference proceedings, and I have hundreds of file sets like this that I would love to rename. The synthetic APIs are exposed as REST services, and can be bundled into Kapplets so that the functionality is exposed through an application that is useable by non-technical users. Just as Tom Baeyens talked about lowering the barriers for BPM inside enterprises in the previous demo, Kapow is lowering the bar for application integration to service unmet needs.
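
That renaming trick is worth sketching, since it’s such a common annoyance; here it is as a small Node.js script rather than a Kapow robot, with the directory layout and the TOC markup both assumptions for the example:

```javascript
const fs = require("fs");
const path = require("path");

const dir = "./proceedings";
const toc = fs.readFileSync(path.join(dir, "toc.html"), "utf8");

// Assume TOC entries look like: <a href="s104.pdf">Intelligent Processes</a>
for (const [, file, title] of toc.matchAll(/<a href="([^"]+\.pdf)">([^<]+)<\/a>/g)) {
  const safeTitle = title.trim().replace(/[\\/:*?"<>|]/g, "-"); // filesystem-safe
  const from = path.join(dir, file);
  if (fs.existsSync(from)) {
    fs.renameSync(from, path.join(dir, `${safeTitle}.pdf`));
    console.log(`${file} -> ${safeTitle}.pdf`);
  }
}
```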

It would be great if Tom and Stefan put their heads together at lunch and whipped up an Effektif-Kapow demo; it seems like a natural fit.

Next was Scott Menter of BP Logix on a successor to flowcharts, namely their Process Director GANTT chart-style process interface – he said that he felt like he was talking about German Shepherds to a conference of cat-lovers – as a different way to represent processes that is less complex to build and modify than a flow diagram, and that also provides better information on the temporal aspects and dependencies, such as when a process will complete and the impacts of delays. Rather than a “successor” model such as a flowchart, which models what comes after what, a GANTT chart is a “predecessor” model, which models the preconditions for each task. A subtle but important difference when the temporal dependencies are critical. Although you could map between the two model types on some level, BP Logix has a completely different model designer and execution engine, optimized for a process timeline. One cool thing about it is that it incorporates past experience: the time required to do a task in the past is overlaid on the process timeline, and predictions are made for how well this process is doing based on current instance performance and past performance, including for tasks that are far in the future. In other words, predictive analytics are baked right into each process since it is a temporal model, not an add-on such as you would have in a process based on a flow model.
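
A small sketch shows why a predecessor model makes those temporal questions easy to answer: given each task’s preconditions and an expected duration (invented numbers here, where Process Director would draw on historical data), projected completion falls out of a forward pass over the dependencies.

```javascript
// Tasks with their preconditions ("after") and expected durations in days.
const tasks = {
  gather:  { after: [],                  days: 2 },
  review:  { after: ["gather"],          days: 3 },
  legal:   { after: ["gather"],          days: 5 },
  approve: { after: ["review", "legal"], days: 1 },
};

// A task starts when its last precondition finishes; memoized recursion
// gives each task's projected finish day from day 0.
function finishDay(name, memo = {}) {
  if (memo[name] != null) return memo[name];
  const t = tasks[name];
  const start = Math.max(0, ...t.after.map((p) => finishDay(p, memo)));
  return (memo[name] = start + t.days);
}

// A delay in "legal" visibly pushes "approve" out -- the dependency impact
// that a flowchart alone doesn't quantify.
console.log(finishDay("approve")); // -> 8 (gather 2 + legal 5 + approve 1)
```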

For the last demo of this session, Jean-Loup Comeliau of W4 presented their BPMN+ product, which provides model-driven development using BPMN 2, UML 2, CMIS and other standards to generate web-based process applications without generating code: the engine interprets and executes the models directly. The BPMN modeling is pretty standard compared to other process modeling tools, but they also allow UML modeling of the data objects within the process model; I see this in more complete stack tools such as TIBCO’s, but it is less common from the smaller BPM vendors. Resources can be assigned to user tasks using various rules, and user interface forms are generated based on the activities and data models, and can be modified if required. The entire application is deployed as a web application. The data-centricity is key: if the models change, the interface and application will automatically update to match. There is definitely a strong message here on the role of standards, and how we need more than just BPMN if we’re going to have fully model-driven application development.

We’re taking a break, and will be back for the Model Interchange Working Group demonstration with participants from around the world.

bpmNEXT 2014: Work Management And Smart Processes

Bruce Silver always makes me break the rules, and tonight I’m breaking the “everything is off the record after the bar opens” rule since he scheduled sessions after dinner, with an open bar in the back of the room. Rules, as they say, are made to be broken.

Roger King of TIBCO attempted to start this demo during the earlier session but there were problems with the fancy projector setup. He’s back now to talk about model-driven work management. TIBCO’s core customer base (like mine) is traditional enterprises such as financial services, and they’re seeing a lot of them retiring legacy enterprise apps now in favor of process-centric apps built on platforms such as TIBCO’s. They see specific problems with work management in very large, branch-network organizations like retail banks; by work management and resource management, they mean the way that work is distributed to and accessed by end users, one of the things that BPMN doesn’t cover when you define processes. With tens of thousands of participants, even a small increment in productivity through better work management can yield a significant ROI in absolute terms, but traditionally this has been done through custom user interfaces and distribution/matching. There are a number of resource patterns that have been studied and developed, e.g., separation of duties and round robin; Roger demonstrated how these are being incorporated into TIBCO’s AMX BPM (modeled within their Business Studio product) through organizational models, where you can define the resources in the organization – groups and custom organizational units – bringing your business vocabulary to how work is distributed within your organization. The idea is that once you have this defined, you can then use very fine-grained rules for determining which person gets which piece of work, or who has access to what. This now becomes something that you can attach to an activity in a process model using simple assignments or with a resource query language that assigns it dynamically, including based on process instance variables – essential when you have hundreds or thousands of branches and can’t realistically administer your organizational model and work distribution methods manually. Furthermore, you need to be looking at having people go to the work rather than having work sent to the people: this is the only work distribution approach that works when you’re creating declarative processes, where configuration needs to be much more dynamic than what might be drawn in the process model.
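
As a hedged sketch of that kind of fine-grained, data-driven distribution rule – not TIBCO’s actual resource query language, just the shape of the idea – a query over the organizational model, parameterized by process instance variables, selects eligible performers at runtime:

```javascript
// A fragment of an organizational model; names and attributes are invented.
const orgModel = [
  { name: "ana", role: "underwriter", branch: "B-210", limit: 50000 },
  { name: "raj", role: "underwriter", branch: "B-210", limit: 250000 },
  { name: "lee", role: "teller",      branch: "B-210", limit: 0 },
];

// "Offer this task to underwriters in the instance's branch whose approval
// limit covers the loan amount" -- evaluated per work item, never hand-assigned.
function eligiblePerformers(instance) {
  return orgModel.filter((r) =>
    r.role === "underwriter" &&
    r.branch === instance.branch &&
    r.limit >= instance.loanAmount);
}

console.log(eligiblePerformers({ branch: "B-210", loanAmount: 120000 }));
// -> [{ name: "raj", ... }]
```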

We finished off the short opening day of bpmNEXT with a keynote by Jim Sinur, late of Gartner (but not hesitant to use the materials that he helped to create there) and now an independent analyst, on how his processes are smarter than him. Processes based on machine learning, however, can only go so far: although machines are more accurate and consistent (and never complain when you ask them to work overtime), people are better at unexpected situations. The key is to have computers and people work together within intelligent processes: let the computers work on the parts that they do best, including events, analytics, standardized decisions, pre-defined processes and the resulting actions from combining all of these, and exploit emerging technologies such as cognitive systems, what-if scenarios via simulation, intelligent business operations, visualization and social analytics. Intelligent agents are a big part of this, but we need to have goal-directed processes to really make this work – or abandon the concept of processes altogether, except for the footprints that they leave behind.

Rule-breaking done. Back tomorrow for a full day of bpmNEXT 2014.

bpmNEXT 2014 Tuesday Session: It’s All About Mobile

I’ll blog this year’s bpmNEXT demos the same way as last year’s, with each session of multiple demos in a single post. The posts are a bit long, but the demos are usually grouped into themes so it works better that way.

First up was Brian Reale of Colosa (makers of ProcessMaker open source BPM and ProcessMapper) on self-organizing groups, ad hoc work and expectations of simplicity. This is a topic that I’m really interested in, since I’ve been presenting on worker incentives with collaborative work, which includes some of the same issues as self-organization. One of his key points is about the effort required to start using a typical BPMS, and how that differs from design time (where there is typically a large degree of effort required and very little organic adoption) to runtime (where there is much less effort, and which is the main target of ROI). What they are trying to do is increase adoption by reducing the effort required at design time by providing more ad hoc capabilities, with a resultant lower ROI but also lower cost. The result is FormSlider, an app environment for ad hoc workflow of structured data with minimal setup, which is what Brian demonstrated (still in alpha). He demoed the tablet interface for a loan application that allows for mobile capture of a client requesting a loan, including pictures and signatures, which then interfaces with ProcessMaker or other back-ends. More interestingly, he showed how an easily-setup app can be used for mobile data capture that the user can then route to whomever they want (possibly limited to a selection list) with a few other fields such as due date and priority. There’s some informational context, such as seeing how long it is taking each of the possible participants to process cases, and the app also allows for routing to be round-trip or one-way. The standard user interface is pretty simple: My Cases for things that I’m working on, an Inbox for new things, and a simple forms interface for working on items. There’s an historical view of cases, showing the participants and their responses. He demoed a simple flow going through a round trip from the initiator through two people and back to the initiator; this can be used for adding a collaborative workflow on top of existing pre-defined processes and systems, taking the place of emailing around for approvals and other simple collaboration. He finished up the demo in ProcessMaker, showing us how an app and forms are created and deployed in a few minutes, including how potential users and groups are associated with the forms as they are designed. They have email and forum connectors for ProcessMaker, and will be using the same methods with FormSlider to provide people with ways to be notified about work and also to interact with it directly.

Next up was Romeo Elias of Interneer on extending enterprise software with mobile apps by using BPM, addressing the issue that many companies lack skilled mobile app developers, yet can’t find commercial apps that meet their needs. Their Intellect BPMS has mobile app capabilities, and allows custom mobile apps to be built quickly that can connect directly to the back-end processes. Since BPMSs are often used as full application development platforms, this is not that much of a stretch: the BPM platform already has a lot of the integration and other capabilities, and Interneer’s platform is intended to be used mostly in a drag-and-drop, model-driven development environment. Romeo demonstrated creating a new application template that consisted of laying out a UI form for the mobile app using the full web interface (there could also have been a process attached, but the point of his demo was to show the mobile UI), then using it as an app on a tablet. The design interface on the web provides the ability to specify sidebar content as well as multiple pages (shown as tabs in the designer). The resulting app – immediately available as soon as it is created in the designer – is a native mobile app, not one viewed through a mobile browser, so it can take advantage of device-specific features as well as cache data offline. The app was a mobile data capture/reporting application that connected to a database; he demonstrated adding records to the table that include text (free text, and restricted using a selection list) and a photo field, with any new records stored locally if connectivity is lost.

Scott Francis and Greg Harley of BP3 presented on bringing process to the people using their Brazos mobile BPM responsive UI toolkit; at the time of last year’s bpmNEXT, they were focused on hybrid mobile apps, but now are directed towards responsive UI, that is, applications that run in a browser but behave appropriately regardless of the form factor of the device. Native apps can cause a lot of problems because of the lack of mobile development and deployment skills within enterprises, as well as the hurdles that many companies have to go through to deploy a mobile app that connects to their enterprise apps. Conversely, many enterprise applications already have web interfaces, so adding a new web UI that happens to be responsive, and hence appropriate for mobile devices, may have a much shorter adoption path and require less effort, since there’s a single application to design and deploy for any platform: no specialized mobile browser apps versus desktop browser apps. Plus, they’re giving it away for free, with plans to open source it in the future. Greg demoed a UI for an IBM BPM process in the full desktop browser version, then the same form on a phone (simulator). The same features in the full form are available in the mobile version, just resized and reformatted for the smaller screen in either orientation. He showed a bit of the form designer; I had the sense that this would take a bit more effort than what we saw in the previous two demos, but would offer quite a bit more capability. They support IBM BPM and Activiti BPM (the two platforms that BP3 supports in its consulting practice), and it can be made to work with pretty much any BPMS that has a REST API, since those APIs turn out to be surprisingly similar between different BPMS vendors. If you want to try out the Brazos UI toolkit, they have a sandbox running against an Activiti instance. This is quite the opposite in technology strategy from Interneer: I can understand BP3’s motivation for going with responsive UI, as well as the rapid uptake, but I can also understand the challenges of a browser-based app when you have spotty connectivity (as I often do when I’m travelling), and they admittedly give up some of the device-specific capabilities.
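
That similarity between BPMS REST APIs is what makes a cross-vendor UI toolkit feasible; as a sketch (the endpoints and field names below are placeholders, not the actual IBM BPM or Activiti APIs), one thin adapter per vendor can map each task list onto a common shape for the UI to render:

```javascript
// Per-vendor adapters: where to fetch tasks and how to normalize them.
const adapters = {
  activitiLike: {
    url: "/runtime/tasks",
    toTask: (t) => ({ id: t.id, name: t.name, due: t.dueDate }),
  },
  ibmBpmLike: {
    url: "/rest/tasks",
    toTask: (t) => ({ id: t.tkiid, name: t.displayName, due: t.dueTime }),
  },
};

// Fetch and normalize; assumes the endpoint returns a JSON array, where real
// APIs typically wrap results in an envelope.
async function fetchTasks(vendor, baseUrl) {
  const { url, toTask } = adapters[vendor];
  const res = await fetch(baseUrl + url);
  return (await res.json()).map(toTask);
}
```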

We’re heading off to dinner, then back with a last demo (which was aborted from this session due to projector difficulties) and a keynote by Jim Sinur before we get down to the serious business of the evening drinks reception.

bpmNEXT 2014 Begins!

We’re at the lovely oceanside Asilomar conference grounds, a couple of hours’ drive south of San Francisco, for this year’s bpmNEXT conference. Last year’s inaugural conference was a great experience – I wrote 7,000+ words in two days, if that’s any indication – and this year’s lineup looks like a winner.

This conference is about what’s happening next in BPM (as you might guess by the name): no sales pitches or death by PowerPoint, but a look at the technology directions as seen through demos. It’s also a great opportunity for networking, with a lot of the well-known names in BPM here in person, meeting each other face-to-face for a change.

Bruce Silver and Nathaniel Palmer, our hosts and organizers, kicked off the conference and laid out the rules: each session (except for the keynote and a multi-company interoperability demo) is strictly 30 minutes long, with 20 minutes for the demo and 10 for Q&A. Last year, Nathaniel would start to look a bit threatening when the speaker reached their deadline, and everything ran on time.

We have sessions this afternoon and into the evening focused on mobile apps and interfaces, then all day tomorrow and until early afternoon on Thursday on a variety of other BPM topics, so get ready for the firehose.