TIBCO TUCON2012: You Say You Want A (Process) Revolution?

I’ve been asked to fill in for a last-minute cancellation at a breakout session here tomorrow (Thursday) morning, and keeping with the retro music theme that we’ve been hearing here at TUCON, we’re going to have a process revolution:

Process Revolution: Turn Operational Efficiency into Operational Advantage

When starting a business process management (BPM) program, the first step is to achieve operational efficiency and its benefits: cut costs, improve agility through process automation, and ensure regulatory compliance with process transparency. What are the next steps? In this session, learn how you can take your BPM program to the next level. We’ll showcase strategies you can use to not only support process innovation, but ensure every opportunity is a potential source of revenue generation and increased customer satisfaction. Additionally, we’ll outline how our next-generation BPM fits into the overall event-enabled enterprise.

Rachel Brennan, TIBCO’s BPM product marketing manager, will be up there with me to cover the TIBCO-specific parts about their AMX BPM product, while I discuss the main theme of moving from implementing BPM for cost and compliance reasons to a focus on other criteria such as customer satisfaction.

It’s at 10am, but I’m not sure of the room: the original presentation was added late enough that it didn’t make the printed schedule, the online schedule shows the session but not the location, and the app is completely missing it (for now).

Update: the session is on Thursday from 10:00-10:50am in Juniper 1.

Update 2: we’re now in the app!

TIBCO TUCON2012 Day 2 Keynotes: More Big Data and Social Collaboration

Yesterday, the keynotes talked about customer experience management and big data; this morning, we continued on the big data theme with John Shewell of McKesson Enterprise Intelligence talking about the issues in US healthcare: costs are increasing, and are far beyond those of other industrialized countries, yet the US does not have better healthcare than many other countries; it ranks 50th in life expectancy and 30th in infant mortality rate. Most of the healthcare spending is on a small percentage of the population, often to treat chronic conditions that are preventable and/or treatable. Some portion of the healthcare spend is waste, to the tune of an estimated $700B per year, some of which can be eliminated by ensuring that standard procedures are followed by hospitals, physicians, pharmacies and other healthcare professionals. McKesson, as a provider of healthcare information and systems, has systems in place with hospitals and physicians that can collect information about healthcare procedures as they are executed, but the move to electronic health records is still ongoing and a lot of the data is fairly unstructured, presenting some challenges in mining the data for opportunities to improve procedural compliance and the quality of care.

Historically, healthcare data was in silos, making it difficult to get a holistic view of a patient. In the US, the only place where all the data came together was at the health insurance company (if you had health insurance, of course), and that might be several weeks after the event. If follow-up care was required after a hospital visit, for example, that information didn’t pop up anywhere, since it fell through the cracks in responsibility. One change that can improve this is to align incentives between the provider, payer and patient, so that it’s to everyone’s benefit if the patient is not readmitted due to a missed follow-up appointment. It’s also important to manage patients earlier to detect and avoid problems before they occur. Big data can help with all of these by detecting patterns and ensuring procedural compliance. In closing, he pointed out that this is not a government issue: it needs to be fixed by the industry.

We moved on to the social collaboration theme, with Ram Menon, TIBCO’s president of social computing, talking about tibbr: as an enterprise social networking platform, this is positioned as a “social bus”, much as TIBCO’s earlier technology success was built on the enterprise message bus. In 18 months, tibbr has grown to be used by over a million people – more than half using smartphones – in 104 countries. TIBCO’s heritage with events and messaging is essential to this success, because tibbr isn’t just about following people, it’s also about following physical devices/items, business processes, files and applications. Earlier this year, they launched tibbrGEO, which has physical locations pushing information to people who are nearby, based on their profile.

Menon was joined briefly by Hervé Coureil, CIO of Schneider Electric, then by Jay Grant, Secretary General of InterPortPolice, to talk about how they are using tibbr for social networking within and across organizations. He then announced tibbr 4, to be released within a few weeks, with a number of new features:

  • Social Profile – presenting a view of yourself to your colleagues (think LinkedIn)
  • Peer Influence – the impact that you have on the things with which you interact (think Klout)
  • tibbr Insights – social analytics, showing a summary of what’s happening in your social network, including both activities and requests waiting for action

We saw a demo of tibbr, which presents a Facebook-like interface for seeing updates from your social graph, but also allows something very similar to Facebook pages for other entities, such as customers. From a CRM standpoint, this allows all information about the customer to be surfaced in one place: a single pane of glass through which to view a customer.

tibbr 4 also provides a social graph API that allows the social graph collected within tibbr to be accessed from other applications or used to provide add-on functionality to tibbr, plus a marketplace for these applications that allows tibbr users to add them to their own tibbr environment. They have some new partners that are offering applications right now for tibbr: box for cloud content storage and sharing; Wayin for surveys and sentiment analysis; Badgeville for engagement through gamification and incentives; and several others including Whodini, ManyWorlds, BrightIdea, Teamly and FileBoard.

TIBCO Corporate and Technology Analyst Briefing at TUCON2012

Murray Rode, COO of TIBCO, started the analyst briefings with an overview of technology trends (as we heard this morning, mobile, cloud, social, events) and business trends (loyalty and cross-selling, cost reduction and efficiency gains, risk management and compliance, metrics and analytics) to create the four themes that they’re discussing at this conference: digital customer experience, big data, social collaboration, and consumerization of IT. TIBCO provides a platform of integrated products and functionality in five main areas:

  • Automation, including messaging, SOA, BPM, MDM, and other middleware
  • Event processing, including events/CEP, rules, in-memory data grid and log management
  • Analytics, including visual analysis, data discovery, and statistics
  • Cloud, including private/hybrid model, cloud platform apps, and deployment options
  • Social, including enterprise social media, and collaboration

A bit disappointing to see BPM relegated to being just a piece of the automation middleware, but important to remember that TIBCO is an integration technology company at heart, and that’s ultimately what BPM is to them.

Taking a look at their corporate performance, they have almost $1B in revenue for FY2011, showing growth of 44% over the past two years, with 4,000 customers and 3,500 employees. They continue to invest 14% of revenue into R&D with a 20% increase in headcount, and significant increases in investment in sales and marketing, which is pushing this growth. Their top verticals are financial services and telecom, and while they still do 50% of their business in the Americas, EMEA is at 40%, with APJ making up the other 10% and showing the largest growth. They have a broad core sales force, but have dedicated sales forces for a few specialized products, including Spotfire, tibbr and Nimbus, as well as for vertical industries.

They continue to extend their technology platform through acquisitions and organic growth across all five areas of the platform functionality. They see the automation components as being “large and stable”, meaning we can’t expect to see a lot of new investment here, while the other four areas are all “increasing”. Not too surprising considering that AMX BPM was a fairly recent and major overhaul of their BPM platform and (hopefully) won’t need major rework for a while, and the other areas all include components that would integrate as part of a BPM deployment.

Matt Quinn then reviewed the technology strategy: extending the number of components in the platform as well as deepening the functionality. We heard about some of this earlier, such as the new messaging appliances and Spotfire 5 release, some recent releases of existing platforms such as ActiveSpaces, ActiveMatrix and Business Events, plus some cloud, mobile and social enhancements that will be announced tomorrow so I can’t tell you about them yet.

We also heard a bit more on the rules modeling that I saw before the sessions this morning: it’s their new BPMN modeling for rules. This uses BPMN 1.2 notation to chain together decision tables and other rule components into decision services, which can then be called directly as tasks within a BPMN process model, or exposed as web services (SOAP only for now, but since ActiveMatrix now supports REST/JSON, I’m hopeful that will follow here). Sounds a bit weird, but it actually makes sense when you think about how rules are formed into composite decision services.
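
As a rough illustration of the concept of chaining decision tables into a composite decision service (a minimal Python sketch with invented tables and names, not TIBCO’s implementation):

```python
# Hypothetical sketch: two decision tables chained into a decision
# service. Illustrates the concept only; not TIBCO's implementation.

def risk_table(applicant: dict) -> str:
    """First decision table: classify risk from claims history and age."""
    if applicant["claims"] > 2:
        return "high"
    return "low" if applicant["age"] >= 25 else "medium"

def premium_table(risk: str, base: float) -> float:
    """Second decision table: map risk class to a premium multiplier."""
    multiplier = {"low": 1.0, "medium": 1.4, "high": 2.0}[risk]
    return base * multiplier

def quote_decision_service(applicant: dict, base_premium: float) -> dict:
    """The composite decision service: chains the tables, and could be
    called as a task in a BPMN process or exposed as a web service."""
    risk = risk_table(applicant)
    return {"risk": risk, "premium": premium_table(risk, base_premium)}

print(quote_decision_service({"age": 22, "claims": 1}, 500.0))
# {'risk': 'medium', 'premium': 700.0}
```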

There was a lot more information about a lot more products, and then my head exploded.

Like others in the audience, I started getting product fatigue, and just picked out details of the products relevant to me. This really drove home that the TIBCO product portfolio is big and complex, and the briefing might benefit from being split into a few separate analyst sessions with some sort of product grouping, although there is so much overlap and integration in product areas that I’m not sure how they would sensibly split it up. Even for my area of coverage, there was just too much information to capture, much less absorb.

We finished up with a panel of the top-level TIBCO execs, the first question of which was about how the sales force can even start to comprehend the entire breadth of the product portfolio in order to be successful selling it. This isn’t a problem unique to TIBCO: any broad-based platform vendor, such as IBM or Oracle, has the same issue. TIBCO’s answer: specialized sales force overlays for specific products and industry verticals, and selling solutions rather than individual products. Both of those work to a certain extent, but often solutions end up being no more than glorified templates developed as sales tools rather than actual solutions, and can lead to more rather than less legacy code.

Because of the broad portfolio, there’s also confusion in the customer base, many of whom see one TIBCO product and have no idea of everything else that TIBCO does. Since TIBCO is not quite a household name like IBM or Oracle, companies don’t necessarily know that TIBCO has other things to offer. One of my banking clients, on hearing that I am at the TIBCO conference this week, emailed “Heard of them as a player in the Cloud Computing space. What’s different or unique about them vs others?” Yes, they play in the cloud. But that’s hardly what you would expect a bank (that uses very little cloud infrastructure, and likely does have some TIBCO products installed somewhere) to think of first when you mention TIBCO.

TIBCO TUCON2012 Day 1 Keynotes, Part 2: Big Honking Data

Back from the mid-morning break, CMO Raj Verma shifted gears from customer experience management to look at one of the other factors introduced in the first part of the session: big data.

Matt Quinn was back to talk about big data: in some ways, this isn’t new, since there has been a lot of data within enterprises for many years. What’s changed is that we now have the tools to deal with it, both in place and in motion, to find the patterns hiding within it through cleansing and transformation. He made a sports analogy, saying that a game is not just about the final score, but about all of the events that happen to make up the entire game; similarly, it is no longer sufficient to just measure outcomes in business transactions, you have to monitor patterns in the event streams and combine that with historical data to make the best possible decisions about what is happening right now. He referred to this combination of event processing and analytics as closing the loop between data in motion and data at rest. TIBCO provides a number of products that combine to handle big data: not just CEP, but ActiveSpaces (the in-memory data grid) to enable realtime processing, Spotfire for visual analytics, and integration with Hadoop.

We saw a demo of LogLogic, recently acquired by TIBCO, which provides analytics and event detection on server logs. This might sound like a bit of a boring topic, but I’m totally on board with this: too many companies just turn off logging on their servers because it generates too many events that they just can’t do anything with, and it impacts performance since logging is done on the operational server. LogLogic’s appliance can collect enormous amounts of log data, detect unusual events based on various rules, and integrate with Spotfire for visualization of potential security threats.

Mark Lorion, CMO for TIBCO Spotfire, came up to announce Spotfire 5, with a complete overhaul of the analytics engine, including the industry’s first enterprise runtime for the R statistical language, providing 10 times the performance of the open source R project for predictive analytics. Self-service predictive analytics, ftw. They are also going beyond in-memory, integrating with Teradata, Oracle and Microsoft SQL Server for in-database analysis. With Teradata horsepower behind it – today’s announcement of Spotfire being optimized for in-database computation on Teradata – you can now do near-realtime exploration and visualization of some shocking amounts of data. Brad Hopper gave us a great Spotfire demo, not something that most TUCON attendees are used to seeing on the main stage.

Rob Friel, CEO of PerkinElmer, took the stage to talk about how they are using big data and analytics in their scientific innovations in life sciences: screening patient data, environmental samples, human genomes, and drug trials to detect patterns that can improve quality of life in some way. They screened 31 million babies born last year (one in four around the globe) through the standard heel-prick blood test, and detected 18,000 with otherwise undiagnosed disorders that could be cured or treated. Their instrumentation is key in acquiring all the data, but once it’s there, tools such as Spotfire empower their scientists to discover and act on what they find in the data. Just as MGM Grand is delivering unique experiences to each customer, PerkinElmer is trying to enable personalized health monitoring and care for each patient.

To wrap up the big data section, Denny Page, TIBCO’s VP of Engineering, came on stage with his new hardware babies: an FTL Message switch and an EMS appliance, both to be available by the end of November 2012.

For the final part of the day 1 keynotes, we heard from an innovators’ panel of Scott McNealy (founder of Sun Microsystems, now chairman of Wayin), Tom Siebel (founder of Siebel Systems, now at C3 Energy where they are using TIBCO for energy usage analytics), Vivek Ranadivé, and KR Sridhar (CEO of Bloom Energy), chaired by David Kirkpatrick. Interesting and wide-ranging discussion about big data, analytics, sentiment analysis, enterprise social media, making data actionable, the internet of things and how a low barrier to platform exit drives innovation. The panel thinks that the best things in tech are yet to come, and I’m in agreement, although those who are paranoid about the impact of big data on their privacy should be very, very afraid.

I’ll be blogging from the analyst event for the rest of the day: we have corporate and technology briefings from the TIBCO execs plus some 1:1 sessions. No pool time for me today!

TIBCO TUCON2012 Day 1 Keynotes, Part 1

The keynotes started with TIBCO’s CEO, Vivek Ranadivé, talking about the forces driving change: a massive explosion of data (big data), the emergence of mobility, the emergence of platforms, the rise of Asia (he referenced the Gangnam Style video, although did not actually do the dance), and how math is trumping science (e.g., the detection and exploitation of patterns). The ability to harness these forces and produce extreme value is a competitive differentiator, and is working for companies like Apple and Amazon.

Raj Verma, TIBCO’s CMO, was up next, continuing the message of how fast things are changing: more iPhones were sold over the past few days than babies were born worldwide, and Amazon added more computing capacity last night than they had in total in 2001. He (re)introduced their concept of the two-second advantage – the right information a little bit before an event is worth infinitely more than any amount of information after the event – enabled by an event-enabled enterprise (or E3, supported by, of course, TIBCO infrastructure). Regardless of whether or not you use TIBCO products, this is a key point: if you’re going to exploit the massive amounts of data being generated today in order to produce extreme value, you’re going to need to be an event-enabled enterprise, responding to events rather than just measuring outcomes after the fact.

He discussed the intersection of four forces: cloud, big data, social collaboration and mobility. This is not a unique message – every vendor, analyst and consultant is talking about this – but he dug into some of these in detail: mobile, for example, is no longer discretionary, even (or maybe especially) in countries where food and resources are scarce. These four forces all overlap in the consumerization of IT, and are reshaping enterprise IT. A key corporate change driven by these is customer experience management: becoming the brand that customers think of first when the product class is mentioned, and turning customers into fans. Digital marketing, properly done, turns your business into a social network, and turns customer management into fan management.

Matt Quinn, CTO, continued the idea of turning customers into fans, and solidifying customer loyalty. To do this, he introduced TIBCO’s “billion dollar backend” with its platform components of automation, event processing, analytics, cloud and social, and hosted a series of speakers on the subject of customer experience management.

We then heard from a customer, Chris Nordling, EVP of Operations and CIO of MGM Resorts and CityCenter, which use TIBCO for their MLife customer experience management/loyalty program. Their vision is to track everything about you, from your gambling wins/losses to your preferences in restaurants and entertainment, and use that to build personalized experiences on the fly. By capturing the flow of big data and responding to events in realtime, the technology provides their marketing team with the ability to provide a zero-friction offer to each customer individually before they even know that they want something: offering reduced entertainment tickets just as you’re finishing a big losing streak at the blackjack tables, for example. It’s a bit creepy, but at the same time, has the potential to provide a better customer experience. Just a bit of insight into what they’re spending that outrageous $25/day resort fee on.

Quinn came back to have a discussion with one of their “loyalty scientists” (really??) about Loyalty Lab, TIBCO’s platform/service for loyalty management, which is all about analyzing events and data in realtime, and providing “audience of one” service and offerings. Traditional loyalty programs were transaction-based, but today’s loyalty programs are much more about providing a more holistic view of the customer. This can include not just events that happen in a company’s own systems, but include external social media information, such as the customer’s tweets. I know all about that.

Another customer, Rick Welts of the Golden State Warriors (who, ironically, play at Oracle Arena) talked about not just customer loyalty management, but the Moneyball-style analytics that they apply to players on a very granular scale: each play of each game is captured and analyzed to maximize performance. They’re also using their mobile app for a variety of customer service initiatives, from on-premise seat upgrades to ordering food directly from your seat in the stadium.

Mid-morning break, and I’ll continue afterwards.

As an aside, I’m not usually wide awake enough to get much out of the breakfast-in-the-showcase walkabout, but this morning prior to the opening sessions, I did have a chance to see the new TIBCO decision services integrated into BPM, also available as standalone services. Looked cool, more on that later.

BPM2012: Stephen White Keynote on BPMN

It’s the last day at BPM 2012, and the morning keynote is by Steve White of IBM, a.k.a. “the father of BPMN”, discussing the Business Process Model and Notation (BPMN) standard and its future. He went through a quick history of the development of the standard from its beginnings in BPMI (now part of OMG) in 2001, through the release of the 1.0 specification in 2004, the official adoption as an OMG standard in 2006, 1.1 and 1.2 revisions in 2008 and 2009, then BPMN 2.0 in 2011. Although there’s no official plan for BPMN 3.0, he said that he imagined there might be one in the future.

The original drivers for BPMN were to be usable by the business community for process modeling, and be able to generate executable processes, but these turned out to be somewhat conflicting requirements since the full syntax required to support execution ended up making BPMN too complex for non-technical modelers if considered in its entirety. To complicate things further, the business modelers want a simple notation, yet complain when certain behaviors can’t be modeled, meaning that there’s some amount of conflict even within the first of the two requirements. The approach was to use familiar flowchart structures and shapes, have a small set of core elements for simple modeling, then provide variations of the core elements to support the complexity required for execution.

BPMN, as White states, is not equivalent to BPM: it’s a language to define process behavior, but a number of other languages and technologies are also required to implement BPM, such as data, rules, resources and user interfaces. Hence, it’s one tool in the BPM toolbox, to be used at design time or runtime as required. The case management modeling notation (CMMN) is under development, and there are currently mechanisms for a CMMN model to invoke BPMN. Personally, I think that it might make sense to combine the two modeling standards, since I believe that a majority of business processes contain elements of each.

He walked through the diagram types, elements, and the innovations that we’ve seen in modeling through BPMN such as boundary intermediate events, pools/lanes and message flows, and the separation of process and data flows. He also described the conformance levels – descriptive, analytic, common executable, and full – and their role in modeling tools.

He laid out a bit of the vision for BPMN’s future, which is to extend further into uncontrolled and descriptive processes (case management), but also further into controlled and prescriptive processes (service level modeling). He also mentioned the potential to support element substitution at different levels in order to better support shared models between IT and business – I find this especially interesting, since it would allow different views of the same process model to have some elements hidden or exposed, or even changed to different element types suitable to the viewer.

When BPMN 1.0 was defined, ad hoc processes (really, processes in which the activities can occur in any order or frequency) were included but not really well developed, since the BPM systems at the time mostly only supported prescriptive model execution. In considering case management modeling in general, a case may be fairly prescriptive with some runtime variations, or may be completely free form and descriptive; BPMN is known for prescriptive process modeling, but does support descriptive processes via ad hoc subprocesses. Additional process types and behaviors are required to fully support case management, such as milestones, new event types and the ability to modify a process at runtime, and he showed some suggestions for what these might look like in an extension to BPMN.

Service level modeling, on the other hand, is even more prescriptive than what we see in BPMN today: it’s lower level, more like a screen flow that happens all within a single BPMN task: no lanes, since it’s all within a single task, with gateways allowed but no parallel paths. Think of it as visual pseudo-code, probably not exposed to a business viewer but modeled by IT to effect the actual implementation. I’m already seeing these sorts of screen flow models in BPMS products such as TIBCO’s AMX BPM, as well as similar functionality from Active Endpoints as an add-in to Salesforce, so this isn’t a complete surprise. I saw a paper on client-side service composition at CASCON that could bear on this sort of service level modeling, and it will be interesting to see how this functionality evolves in BPMN and its impact on BPMS products.

This is my last post from BPM 2012: although I would like to attend a few of the other morning sessions, I’ll probably spend the time doing some last minute reviews of the three-hour tutorial on social BPM that I’ll be giving this afternoon.

BPM2012: Papers on Process Mining

I had a bit of blog fatigue earlier, but Keith Swenson blogged the session on process cloud concepts for case management that I attended but didn’t write about, and I’m back at it for the last set of papers for the day at BPM 2012, all with a focus on process mining.

Repairing Process Models to Reflect Reality

[link]

Dirk Fahland of Eindhoven University presented a paper on process repair, as opposed to process mining, with a focus on adjusting the original process model to maximize fitness, where fitness is measured by the ability to replay traces from the event log: if a model can replay all of the traces of actual process execution, then it is perfectly fit. Their method compares the process model to the event log using a conformance checker in order to align the event log and the model; the alignment can be computed with Adriansyah et al.’s cost-based replayer to produce the diagnostic information.

The result includes activities that are skipped, and activities that must be added. The activities to be added can be fed to an existing process discovery algorithm to create subprocesses that must be added to the existing process, and the activities that were skipped are either made optional or removed from the original process model.
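
To make the fitness and repair ideas concrete, here’s a minimal sketch, assuming a strictly sequential model and exact-match replay; the paper’s actual approach uses cost-based alignments over Petri nets:

```python
# Minimal sketch: fitness as the fraction of log traces that a purely
# sequential model can replay exactly, plus the skipped/added activities
# that drive the repair step. Real conformance checking uses cost-based
# alignments over Petri nets; this only shows the idea.

def fitness(model: list[str], log: list[list[str]]) -> float:
    replayed = sum(1 for trace in log if trace == model)
    return replayed / len(log)

def diagnose(model: list[str], trace: list[str]):
    skipped = [a for a in model if a not in trace]     # make optional or remove
    added = [a for a in trace if a not in set(model)]  # discover as subprocess
    return skipped, added

model = ["receive", "check", "approve", "archive"]
log = [
    ["receive", "check", "approve", "archive"],   # fits the model
    ["receive", "approve", "archive"],            # "check" was skipped
    ["receive", "check", "escalate", "archive"],  # "escalate" must be added
]
print(fitness(model, log))      # 0.333...: the model needs repair
print(diagnose(model, log[2]))  # (['approve'], ['escalate'])
```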

Obviously, this is relevant in situations where the process model isn’t automated, that is, the event logs are from other systems, not directly executed from the process model; this is common when processes are implemented in ERP and other systems rather than in a BPMS, and process models are created manually in order to document the business processes and discover opportunities for optimization. However, as we implement more semi-structured and dynamic processes automated by a BPMS, the event logs of the BPMS itself will include many events that are not part of the original process model; this could be a useful technique for improving understanding of ad hoc processes. By understanding and modeling ad hoc processes that occur frequently, there is the potential to identify emergent subprocesses and add those to the original model in order to reduce time spent by workers creating the same common ad hoc processes over and over again.

There are other measurements of model quality besides fitness, including precision, generalization and simplicity; future research will be looking at these as well as improving the quality of alignment and repair.

Where Did I Misbehave? Diagnostic Information in Compliance Checking

[link to pdf paper]

Elham Ramezani of Eindhoven University presented a paper on compliance checking. Compliance checking covers the full BPM lifecycle: compliance verification during modeling, design and implementation; compliance monitoring during execution; and compliance auditing during evaluation. The challenge is that compliance requirements have to be decomposed and used to create compliance rules that can be formalized into a machine-understandable form, then compared to the event logs using a conformance checker. This is somewhat the opposite of the previous paper, which used conformance checking to find ways to modify the process model to fit reality; this looks at using conformance checking to ensure that compliance rules, represented by a particular process model, are being followed during execution.

Again, this is valuable for processes that are not automated using a BPMS or BRMS (since rules can be strictly enforced in that environment), but rather for processes executing in other systems or manually: event logs from systems are compared to the process models that represent the compliance rules using a conformance checker, and the alignment calculated to identify non-compliant instances. There were some case studies with data from a medical clinic, detecting non-compliant actions such as performing an MRI and CT scan of the same organ, or registering a patient twice on one visit.
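
As a toy illustration of what such formalized rules can look like when checked directly against per-case event logs (event shapes invented; the paper instead aligns logs against Petri-net representations of the rules):

```python
# Sketch: two of the clinic compliance rules mentioned above, checked
# naively against a per-case event log. A real checker computes
# alignments against rule models; this just illustrates the intent.

from collections import Counter

def violations(case: list[dict]) -> list[str]:
    found = []
    # Rule 1: no MRI and CT scan of the same organ within one case.
    organs_mri = {e["organ"] for e in case if e["event"] == "MRI"}
    organs_ct = {e["organ"] for e in case if e["event"] == "CT"}
    for organ in organs_mri & organs_ct:
        found.append(f"MRI and CT of same organ: {organ}")
    # Rule 2: a patient may be registered only once per visit.
    registrations = Counter(e["visit"] for e in case if e["event"] == "register")
    for visit, n in registrations.items():
        if n > 1:
            found.append(f"registered {n} times on visit {visit}")
    return found

case = [
    {"event": "register", "visit": 1, "organ": None},
    {"event": "register", "visit": 1, "organ": None},
    {"event": "MRI", "visit": 1, "organ": "knee"},
    {"event": "CT", "visit": 1, "organ": "knee"},
]
print(violations(case))
# ['MRI and CT of same organ: knee', 'registered 2 times on visit 1']
```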

There was an audience question that was in my mind as well, which is why the compliance rules are expressed in Petri nets rather than in a declarative form; she pointed out that the best conformance-checking techniques available for aligning with event logs use operational models such as Petri nets, although they may consider adding declarative rules to this method in the future in addition to other planned extensions to the research. She also mentioned that they were exploring applicability to monitoring service level agreement compliance, which has huge potential for business applications where SLA measurements are not built into the operational systems but must be detected from the event logs.

FNet: An Index for Advanced Business Process Querying

[link to pdf paper]

Zhiqiang Yan, also of Eindhoven University (are you seeing a theme here in process mining?), presented on querying within a large collection of process models based on certain criteria; much of the previous research has been on defining expressive query languages (such as BPMN-Q) that can be very slow to execute, but here they have focused on developing efficient techniques for executing the queries. They identify basic features, or small fragments, of process models, and advanced elements such as transitive or negative edges that form advanced features.

To perform a query, both the query and the target process models are decomposed into features, where the features are small and representative: specific sequences, joins, splits and loops. Keywords for the nodes in the graphs are used in addition to the topology of the basic features. [There was a great deal of graph theory in the paper concerned with constructing directed graphs based on these features, but I think that I forgot all of my graph theory shortly after graduation.]

The results seem impressive: two orders of magnitude increase in speed over BPMN-Q. As organizations continue to develop large repositories of process models and hope to get some degree of reuse, process querying will become more important in practical applications.
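
To give a sense of the feature-decomposition idea, here’s a toy sketch that reduces models to sets of labeled edges and uses cheap set containment to discard models that can’t match a query; the actual index also covers joins, splits, loops and the advanced transitive/negative edge features:

```python
# Toy sketch of feature-based query filtering: decompose each model
# (and the query) into small features, here just labeled edges, then
# discard models whose feature sets can't contain the query's features.

def edge_features(model: list[tuple[str, str]]) -> frozenset:
    return frozenset(model)

repository = {
    "claims_v1": [("receive", "check"), ("check", "approve")],
    "claims_v2": [("receive", "check"), ("check", "reject")],
    "orders": [("order", "ship"), ("ship", "invoice")],
}
index = {name: edge_features(m) for name, m in repository.items()}

def candidates(query: list[tuple[str, str]]) -> list[str]:
    q = edge_features(query)
    return [name for name, feats in index.items() if q <= feats]

print(candidates([("check", "approve")]))  # ['claims_v1']
```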

Using MapReduce to scale events correlation discovery for business processes mining

[link]

The last paper of this session, and of the day, was presented by Hicham Reguieg of Blaise Pascal University in Clermont-Ferrand. One of the challenges in process mining and discovery is big data: the systems that are under consideration generate incredible amounts of log data, and it’s not something that you’re going to just open up in a spreadsheet and analyze manually. This paper looks at using MapReduce, a programming model for processing large data sets (usually by distributing processing across clusters of computers), applied to the specific step of event correlation discovery, which analyzes the event logs in order to find relationships between events that belong to the same business process.
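
A very rough mock-up of the map/reduce shape of this problem; the hard part in the paper, discovering which attribute(s) actually correlate events into the same instance, is assumed away here by fixing the key in advance:

```python
# Mock-up of event correlation as a map/reduce job: map each event
# under a candidate correlation key, then reduce to group events into
# presumed process instances. In practice this runs distributed over
# huge logs; here it's simulated in-process on a tiny example.

from collections import defaultdict

events = [
    {"order_id": "A1", "activity": "create"},
    {"order_id": "A2", "activity": "create"},
    {"order_id": "A1", "activity": "ship"},
    {"order_id": "A2", "activity": "cancel"},
]

def map_phase(event: dict, key: str = "order_id"):
    yield event[key], event["activity"]

def reduce_phase(pairs) -> dict:
    instances = defaultdict(list)
    for key, activity in pairs:
        instances[key].append(activity)
    return dict(instances)

pairs = (p for e in events for p in map_phase(e))
print(reduce_phase(pairs))
# {'A1': ['create', 'ship'], 'A2': ['create', 'cancel']}
```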

Although he didn’t mention the specific MapReduce framework that they are using for their experiments, I know that there’s a Hadoop one – inevitable that we would start seeing some applicability for Hadoop in some of the big data process problems.

BPM2012: Papers on Process Model Analysis

More from day 2 of BPM 2012.

The Difficulty of Replacing an Inclusive OR-Join

[link]

Cédric Favre of IBM Research presented the first paper of the session on some of the difficulties in translation between different forms of process models. One specific problem is replacing an inclusive OR join when translating from a language that supports them, such as BPMN, to one that does not, such as Petri nets, while maintaining the same behavior in the workflow graph.

In the paper, they identify which IOR joins can be replaced locally using XOR and AND logic, and present a non-local replacement technique for the others. They also identify processes where an IOR join in a synchronization role cannot be replaced by XOR and AND logic at all.
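
The semantics that make the replacement hard can be stated compactly; in this sketch the non-local part, knowing which empty input branches may still deliver a token, is passed in as an input rather than computed from the workflow graph as a translator must do:

```python
# Sketch of inclusive OR-join semantics: the join may only fire once no
# token can still arrive on an empty input branch, and knowing that
# requires non-local information about the rest of the graph.

def ior_join_enabled(tokens_present: set, may_still_arrive: set) -> bool:
    """Fire when at least one input holds a token and no empty input
    can still receive one."""
    return bool(tokens_present) and not (may_still_arrive - tokens_present)

# Branch 'a' has delivered a token; branch 'b' might still deliver one:
print(ior_join_enabled({"a"}, {"a", "b"}))  # False: must keep waiting
# Branch 'b' can no longer be activated (its upstream choice went elsewhere):
print(ior_join_enabled({"a"}, {"a"}))       # True: safe to fire
```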

This research is useful in looking at automated translation between different modeling languages, although questions raised by the audience pointed out some of the limitations of the approach, as well as considering that acyclic models (which were all that were considered in this research) could be easily translated from BPMN to BPEL, and that many BPEL to Petri net translators already exist.

Automatic Information Flow Analysis of Business Process Models

[link to pdf paper]

Andreas Lehmann of the University of Rostock presented a paper on detecting where data and information leaks can occur due to structural flaws in processes; they define a data leak as direct (but illegal) access to a data object, while an information leak is when secret information can be inferred by someone who should not have access to that information. This research specifically looks at predefined structured processes within an organization; the issues in collaborative processes with ad hoc participants are obviously a bit more complex.

In a process where some tasks are confidential and others are observable (public within a certain domain, such as within a company), confidential tasks may be prerequisites for observable tasks, meaning that someone who knows that the observable task is happening also knows that the confidential task must have occurred. Similarly, if the confidential and observable tasks are mutually exclusive, then someone who knows that the observable task has not occurred knows that the confidential task has occurred instead. These are both referred to as “interferences”, and they have developed an approach to detect these sorts of interferences, then create extended Petri nets for the flow on which reachability analysis determines whether an information leak can actually occur. Their work has included optimizing the algorithms to accomplish this information leak detection, and you can find out more about this at the service-technology website.
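
A minimal sketch of those two interference rules over declared task relations; the paper derives such relations from extended Petri nets via reachability analysis, while here they are simply given:

```python
# Sketch of the two interference patterns: a confidential task that is
# a prerequisite of an observable one, or mutually exclusive with one,
# leaks information to anyone who can observe the observable task.

confidential = {"start_fraud_investigation"}
observable = {"assign_auditor", "pay_claim"}

# (a, b): b cannot occur unless a has already occurred
prerequisite = {("start_fraud_investigation", "assign_auditor")}
# (a, b): exactly one of a and b occurs in any run
exclusive = {("start_fraud_investigation", "pay_claim")}

def interferences() -> list[str]:
    leaks = []
    for a, b in prerequisite:
        if a in confidential and b in observable:
            leaks.append(f"seeing '{b}' reveals that '{a}' happened")
    for a, b in exclusive:
        if a in confidential and b in observable:
            leaks.append(f"not seeing '{b}' reveals that '{a}' happened")
    return leaks

print(interferences())
```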

Definitely some interesting ideas here that can be applicable in a number of processes: their example was an insurance claim where an internal fraud investigation would be initiated based on some conditions, but the people participating in the process shouldn’t know that the investigation had begun since they were the ones being investigated. Note that their research is only concerned with detecting the information flows, but does not provide methods for removing information leaks from the processes.

BPM2012: Wil van der Aalst BPM Research Retrospective Keynote

Day 2 of the conference tracks at BPM 2012 started with a keynote from Wil van der Aalst of Eindhoven University, describing ten years of BPM research on this 10th occasion of the International Conference on BPM. The conference started in Eindhoven in 2003, then moved to Potsdam in 2004, Nancy in 2005, Vienna in 2006, Brisbane in 2007, Milan in 2008 (my first time at the conference), Ulm in 2009, Hoboken in 2010, Clermont-Ferrand in 2011, then on to Tallinn this year. He showed a word cloud for each of the conference proceedings in the past, which was an interesting look into the hot topics at the time. The 2013 conference will be in Beijing – not sure if I’ll be attending since it’s a long trip – and I expect that we’ll hear where the 2014 conference will be before we leave Tallinn this week.

In his paper, he looked at the four main activities in BPM – modeling, analysis, enactment and management – and pointed out that much of the research focused on the first two, and we need more on the latter two. He also discussed a history of what we now know as BPM, from office automation to workflow to BPM, with contributions from many other areas from data modeling to operations management; having implemented workflow systems since the early 1990’s, this is a progression that I’m familiar with. He went through 20 BPM use cases that cover the entire BPM lifecycle, and mapped 289 research papers in the proceedings from the entire history of the BPM conferences against them:

  1. Design model
  2. Discover model from event data
  3. Select model from collection
  4. Merge models
  5. Compose model
  6. Design configurable model
  7. Merge models into configurable model
  8. Configure configurable model
  9. Refine model
  10. Enact model
  11. Log event data
  12. Monitor
  13. Adapt while running
  14. Analyze performance based on model
  15. Verify model
  16. Check conformance using event data
  17. Analyze performance using event data
  18. Repair model
  19. Extend model
  20. Improve model

He described each of these use cases briefly, and presented a notation to represent their characteristics; he also showed how the use cases can be chained into composites. The results of mapping the papers against the use cases was interesting: most papers were tagged with one or two of these use cases, although some addressed several use cases.

He noted three spikes in use cases: design model, enact model, and verify model; he found the first two completely expected, but that verifying models was a surprising focus. He also pointed out that having few papers addressing use case 20, improve model, is a definite weakness in the research area.

He also analyzed the research papers according to six key concerns, a less granular measure than the use cases:

  1. Process modeling languages
  2. Process enactment infrastructures
  3. Process model analysis
  4. Process mining
  5. Process flexibility
  6. Process reuse

With these, he mapped interest in the key concerns over the years, showing how interest in the different areas has waxed and waned: a hype cycle for academic BPM topics.

He spent a bit of time on three specific challenges that should gain more research focus: process flexibility, process mining and process configuration; for example, considering the various types of process flexibility based on whether it is done at design time or runtime, and whether it is achieved by specification, by deviation, by underspecification or by change.

One clear goal of his talk is to help make BPM research more relevant as it matures, in part through more evidence-based BPM research, to encourage vendors and practitioners to adopt the new ideas that are put forward in BPM. He makes some recommendations for research papers in the future:

  • Avoid introducing new languages without a clear purpose
  • Artifacts (software and data) need to be made available
  • Evaluate results based on well-defined criteria and compare with other approaches
  • Build on shared platforms rather than developing prototypes from scratch
  • Make the contribution of the paper clear, potentially by tagging papers with one of the 20 use cases listed above

As BPM research matures, it makes sense that the standards are higher for research topics in general and definitely for having a paper accepted for publication and presentation at a conference. Instead of just having a theory and prototype, there’s a need for more empirical evidence backing up the research. I expect that we’ll see an improvement in the overall quality and utility of BPM research in the coming years as the competition becomes more intense.

BPM2012: Papers on BPM Applications

We had a session of three papers this afternoon at BPM 2012 on how BPM is applied in different environments.

Event-Driven Manufacturing Process Management Approach

[link]

The first paper, presented by Antonio Estruch from Universitat Jaume I in Castellón, Spain, is on automated manufacturing, where several different information systems (SCADA/PLCs, MES, ERP systems) are all involved in the manufacturing process but possibly not integrated. These systems generate events, and this paper is on the detection and analysis of the complex events from these various systems, providing knowledge on how to handle these events while complementing the existing systems.

BPMN 2.0 is proposed for modeling the processes, including the events; all those events that we love (or hate) in BPMN 2.0 are perfect in a scenario such as this, where multiple non-integrated systems are generating events, and processes need to be triggered in response to those events. This can be used for quality control, where the complex events can detect the potential presence of poorly manufactured items that may have escaped detection by the lower level instrumentation. This also allows the modeling and measurement of key performance indicators in the manufacturing processes.
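
As a hypothetical example of the kind of complex-event quality rule this enables, consider readings that each pass the low-level tolerance check but drift suspiciously as a pattern; all thresholds and event shapes here are invented:

```python
# Sketch of a complex-event quality rule: every individual reading is
# within tolerance (so low-level instrumentation raises no alarm), but
# a steady drift across a window of readings flags a likely defect.

TOLERANCE = 1.0    # each reading must stay within spec
DRIFT_LIMIT = 0.8  # cumulative drift over a window we treat as suspicious

def drifting(readings: list[float], window: int = 4) -> bool:
    for i in range(len(readings) - window + 1):
        w = readings[i : i + window]
        if all(abs(r) <= TOLERANCE for r in w) and abs(w[-1] - w[0]) > DRIFT_LIMIT:
            return True
    return False

print(drifting([0.1, 0.3, 0.6, 0.95]))  # True: in-spec readings, bad trend
print(drifting([0.1, 0.2, 0.1, 0.2]))   # False: no drift
```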

There are some existing standards for integrating with manufacturing solutions, but more than this is required in order to make a usable BPM solution for manufacturing, such as easily configurable user interfaces for the manufacturing-specific event visualization, event management capabilities, and some data processing and analytics for assisting with the complex event processing over time.

They have been testing out this approach using manufacturing simulation software and an open source BPMS, but want to expand it in the future with more CEP patterns and more complete prototypes for real-world scenarios.

Process-Based Design and Integration of Wireless Sensor Network Applications

[link]

Stefano Tranquillini of University of Trento presented a paper on using BPM to assist in programming wireless sensor network (WSN) applications, which are currently stand-alone systems coded by specialized developers. These networks of sensors and actuators, such as those that control HVAC systems in meeting rooms using sensors for temperature, CO2, presence and other factors, have been the subject of other research, where the sensors are exposed as web services and orchestrated within processes, with an extension to the process language specifically for sensors. The idea of the research described in this paper is to develop a modeling notation that allows integrated development of the business process and the WSN.

They created an extension to BPMN, BPMN4WSN, and created a modeling environment for designing systems that include both business processes and WSN logic. This presents WSN interactions as a specific task type; since these can represent complex interactions of sensors and actuators, it’s not as simple as just a web service call, although this allows it to be abstracted as such within a BPMN model. The result is both a deployable business process that handles the business logic (e.g., billing for power consumption) and generated code for the WSN logic, plus the endpoints and communication proxy that connect the business process to the WSN.

Future work in this area is to make the code more efficient and reusable, create a unified modeling notation, create control flow for WSN nodes rather than the simpler sensors and actuators, and find a solution for multi-process deployment on a WSN.

You can find out more about the research project at the makeSense website.

Modeling Rewards and Incentive Mechanisms for Social BPM

[link to pdf paper]

Ognjen Scekic of the Vienna University of Technology presented a paper on incentives in social BPM, where he defines social business processes as those executed by an ad hoc assembled team of workers, where the team extends beyond the originator’s organizational scope. In addition to requiring environments to allow for rich collaboration, social processes in business require management and assignment of more complex tasks as well as incentive practices.

In general, incentives/rewards are used to align the interests of employees and organizations. A single incentive targets a specific behavior but may also have unwanted results (e.g., incenting only on call time in a call center without considering customer satisfaction), so typically multiple incentives are combined to produce the desired results. He makes a distinction between an incentive, which is offered before the task is completed, and a reward, which is offered after task completion. Consumer social computing uses simple incentive mechanisms, but that is insufficient for complex business processes; there are no general models and systems for modeling, executing, monitoring and adapting reward/incentive mechanisms.

The research identified seven major incentive mechanisms (e.g., pay-for-performance), each with its own evaluation methods. These can interface with the crowd management systems being used for social computing. Eventually, the idea is to have higher level tools so that an HR manager can assemble the incentives themselves, which will then be deployed to work with the social computing platform. Their Rewarding Model (RMod) provides a language for composing and executing these different incentive mechanisms based on time, state and organizational structure. Essentially, this is a rules-based system that evaluates conditions either at specific time intervals or based on inbound events, and triggers actions in response.
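
A minimal sketch in the spirit of that condition/action style, with invented rule contents: conditions are evaluated per inbound event or on a timer, and matching rules trigger reward actions:

```python
# Sketch of a rules-based incentive evaluator: each rule pairs a
# condition over a worker's state with a reward action. Rule contents
# are invented for illustration; RMod itself is a richer language.

incentive_rules = [
    {
        "name": "pay-for-performance bonus",
        "condition": lambda w: w["resolved_incidents"] >= 20,
        "action": lambda w: f"grant bonus to {w['name']}",
    },
    {
        "name": "rotating presidency limit",
        "condition": lambda w: w["consecutive_terms"] >= 2,
        "action": lambda w: f"rotate {w['name']} out of presidency",
    },
]

def evaluate(worker_state: dict) -> list[str]:
    """Run on each inbound event or at a fixed time interval."""
    return [rule["action"](worker_state) for rule in incentive_rules
            if rule["condition"](worker_state)]

print(evaluate({"name": "alice", "resolved_incidents": 23, "consecutive_terms": 2}))
# ['grant bonus to alice', 'rotate alice out of presidency']
```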

He described a scenario that used information from IBM India’s IT incident management system, generating automated team reorganization and monetary rewards in response to the work performed, as well as a scenario for a rotating presidency with a maximum number of consecutive times holding the position.

Although most of this work does not appear to be specific to social BPM, any of the incentives/rewards that result in automated team structure reorganization are likely only applicable in self-organized collaborative teams. Otherwise, these same methods could be applied to manage incentives in more structured teams and processes, although those likely already have incentive/rewards schemes in place as part of their structure.