bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM at Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually MiniShift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version; other strategies allow new instances to be created on any version, while monitoring can aggregate instance data from all versions in all containers.
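
As an aside on how an application typically talks to one of those process servers, here’s a minimal sketch (not from the demo) of starting a process instance over the KIE Server REST interface; the server URL, container id, process id and credentials are hypothetical, and the exact path and verb should be checked against the jBPM/KIE Server documentation for your version.

```typescript
// Minimal sketch: starting a process instance on a jBPM/KIE execution server
// from an application, e.g. one sitting behind the smart router described above.
// Host, container id, process id and credentials are hypothetical placeholders.
// Assumes Node 18+ (global fetch and Buffer).

const KIE_SERVER = "http://localhost:8080/kie-server/services/rest/server";
const CONTAINER_ID = "it-orders_1.0.0";        // hypothetical deployment unit
const PROCESS_ID = "itorders.orderhardware";   // hypothetical process definition

async function startOrderProcess(requestor: string, item: string): Promise<number> {
  const response = await fetch(
    `${KIE_SERVER}/containers/${CONTAINER_ID}/processes/${PROCESS_ID}/instances`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Basic " + Buffer.from("user:password").toString("base64"),
      },
      // Process variables passed as a simple JSON map
      body: JSON.stringify({ requestor, item }),
    }
  );
  if (!response.ok) {
    throw new Error(`Failed to start process: ${response.status}`);
  }
  return response.json(); // the server responds with the new process instance id
}
```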

Kris is also live-blogging the conference; check out his posts. He has gone back and included the video of each presentation as they are released (something that I didn’t do, for page load performance reasons), as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco had the unenviable position of being the last presenter of the conference; he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers to work being pushed to them. Tasks are scored and assigned using decision models that take into account task type and SLAs, plus worker eligibility based on each individual’s skills and training. Although work assignment products exist, this implementation was built specifically for the complex rules of US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
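
To make the idea concrete, here’s an illustrative scoring sketch (my own simplification, not Serco’s decision model) that combines task type, SLA urgency and worker eligibility into a single push-assignment score; all of the field names and weights are assumptions.

```typescript
// Illustrative only: score a task for push assignment based on SLA urgency,
// task type weight, and whether the worker has the required skills.

interface Task {
  type: "inboundCall" | "document" | "timerFollowUp";
  slaDeadline: Date;          // when the SLA would be breached
  requiredSkills: string[];   // e.g. ["eligibility", "appeals"]
}

interface Worker {
  skills: string[];
}

const TYPE_WEIGHT: Record<Task["type"], number> = {
  inboundCall: 100,   // calls are the most time-critical
  document: 50,
  timerFollowUp: 25,
};

function scoreTaskForWorker(task: Task, worker: Worker, now: Date = new Date()): number {
  // Ineligible workers never receive the task
  const eligible = task.requiredSkills.every((s) => worker.skills.includes(s));
  if (!eligible) return -Infinity;

  // Urgency grows as the SLA deadline approaches (and keeps growing past it)
  const minutesToSla = (task.slaDeadline.getTime() - now.getTime()) / 60000;
  const urgency = Math.max(0, 240 - minutesToSla); // hypothetical 4-hour horizon

  return TYPE_WEIGHT[task.type] + urgency;
}

// A dispatcher would then assign each waiting task to the eligible worker with
// the highest score, highest-scoring tasks first (real implementations also
// balance workload across the team).
```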

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (presenting here in his role with the DMN standard) started on the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done, including identifying sections of the DMN specification that require clarifications or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version’s tests; they are also trying to improve testing of error cases and introduce more real-world decision models. If you create and use DMN models, make a DMN-compliant decision management product, or are otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
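
Conceptually, a vendor-side TCK run is an evaluate-and-compare loop over the published test cases. The sketch below illustrates that idea only; the test case layout is simplified rather than the TCK’s actual file schema, and evaluateDecision stands in for whatever API the vendor’s own engine exposes.

```typescript
// Conceptual sketch of a vendor-side TCK runner: evaluate each published test
// case with the vendor's engine and compare against the expected result.
// The test-case shape here is simplified, not the TCK's exact file format.

interface TckTestCase {
  model: string;                       // DMN model file to load
  decision: string;                    // decision to evaluate
  inputs: Record<string, unknown>;     // published input data values
  expected: unknown;                   // published expected result
}

// Placeholder for the vendor's own engine API
declare function evaluateDecision(
  model: string,
  decision: string,
  inputs: Record<string, unknown>
): unknown;

function runTck(cases: TckTestCase[]): { passed: number; failed: number } {
  let passed = 0;
  let failed = 0;
  for (const c of cases) {
    const actual = evaluateDecision(c.model, c.decision, c.inputs);
    // Naive deep equality; real runners also handle FEEL types and error cases
    if (JSON.stringify(actual) === JSON.stringify(c.expected)) passed++;
    else failed++;
  }
  return { passed, failed };
}
```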

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

bpmNEXT 2018: All about bots with Cognitive Technology, PMG.net, Flowable

We’re into the afternoon of day 2 of bpmNEXT 2018, with another demo section.

RPA Enablement: Focus on Long-Term Value and Continuous Process Improvement, Cognitive Technology

Massimiliano Delsante of Cognitive Technology presented their myInvenio product for analyzing processes to determine where gaps exist and create models for closing those gaps through RPA task automation. The demo started with loading historical process data for process mining, which created a process model from the data together with activity resources, counts and other metrics; then comparing the model for conformance with a reference model to determine the frequency and performance of conformant and non-conformant cases. The process discovery model can be transformed to a BPMN model and its performance simulated. With a baseline data set of all manual activities, the system identified the cost of each activity, helping to identify which activities would result in the greatest savings if automated, and fed the actual resource usage data into the simulation scenario; adjusting the resources required by specifying the number of RPA robots that could be deployed at specific tasks allows for a what-if simulation of process performance with an RPA implementation. An analytics dashboard provides visualization of the original process discovery and the simulated changes, with performance trends over time. Predictive analytics can be applied to running processes to, for example, predict which cases will not meet their deadlines, and provide some root cause analysis for the problems. Doing this analysis requires that you have information about the cost of the RPA robots, as well as being able to identify which tasks could be automated with RPA. Good integration of process discovery, simulation, analysis and ongoing monitoring.
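
For readers unfamiliar with process mining, the discovery step boils down to reconstructing a directly-follows graph from a historical event log. The sketch below shows that first step in a generic way; it is not myInvenio’s algorithm, and the event log fields are assumptions.

```typescript
// Generic illustration of the first step of process mining: build a
// directly-follows graph with frequencies and average transition durations
// from an event log of completed activities.

interface LogEvent {
  caseId: string;
  activity: string;
  timestamp: Date;
}

function discover(log: LogEvent[]): Map<string, { count: number; avgMinutes: number }> {
  // Group events by case
  const byCase = new Map<string, LogEvent[]>();
  for (const e of log) {
    if (!byCase.has(e.caseId)) byCase.set(e.caseId, []);
    byCase.get(e.caseId)!.push(e);
  }

  // Count each "activity A directly followed by activity B" transition
  const edges = new Map<string, { count: number; totalMinutes: number }>();
  for (const events of byCase.values()) {
    events.sort((a, b) => a.timestamp.getTime() - b.timestamp.getTime());
    for (let i = 0; i < events.length - 1; i++) {
      const key = `${events[i].activity} -> ${events[i + 1].activity}`;
      const minutes =
        (events[i + 1].timestamp.getTime() - events[i].timestamp.getTime()) / 60000;
      const edge = edges.get(key) ?? { count: 0, totalMinutes: 0 };
      edge.count++;
      edge.totalMinutes += minutes;
      edges.set(key, edge);
    }
  }

  // Reduce to frequency and average duration per edge
  const result = new Map<string, { count: number; avgMinutes: number }>();
  for (const [key, v] of edges) {
    result.set(key, { count: v.count, avgMinutes: v.totalMinutes / v.count });
  }
  return result;
}
```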

Integration is Still Cool, and Core in your BPM Strategy, PMG.net

Ben Alexander from PMG.net focused on integration within BPM as a key element for driving innovation by increasing the speed of application development: integrating services for RPA, ML, AI, IoT, blockchain, chatbots and whatever other hot new technologies can be brought together in a low-code environment such as PMG. His demo showed a vendor onboarding application, adding a function/subprocess for assessing probability of vendor approval using machine learning by calling AzureML, user task assignment using Slack integration or SMS/phone support through a Twilio connector, and RPA bot invocation using a generic REST API. Nice demo of how to put all of these third-party services together using a BPM platform as the main application development and orchestration engine.
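
The glue behind a demo like this is usually a handful of REST calls. Here’s a hedged sketch of two of them, a Slack incoming-webhook notification and a generic RPA bot invocation, where the webhook URL and bot endpoint are placeholders rather than anything shown in the demo; the AzureML scoring call would follow the same pattern.

```typescript
// Hedged sketch of low-code integration glue: notify an approver via a Slack
// incoming webhook and kick off an RPA bot through a generic REST endpoint.
// Both URLs below are placeholders.

const SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"; // placeholder
const RPA_BOT_URL = "https://rpa.example.com/api/bots/vendor-onboarding/run"; // hypothetical

async function notifyApprover(vendor: string, approvalProbability: number): Promise<void> {
  // Slack incoming webhooks accept a simple JSON payload with a "text" field
  await fetch(SLACK_WEBHOOK, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `Vendor ${vendor} awaiting review (predicted approval: ${(approvalProbability * 100).toFixed(0)}%)`,
    }),
  });
}

async function launchRpaBot(vendor: string): Promise<void> {
  // Generic REST invocation; each RPA product has its own API shape
  await fetch(RPA_BOT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ vendor }),
  });
}
```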

Making Process Personal, Flowable

Paul Holmes-Higgin and Micha Keiner from Flowable presented on their Engage product for customer engagement via chat, using chatbots to augment rather than replace human chat, and modeling the chatbot behavior using standard modeling tools. In particular, they have found that a conversation can be modeled as a case with dynamic injection of processes, with the ability to bring intelligence into conversations, and the added benefit of the chat being completely audited. The demo was around the use case of a high-wealth banking client talking to their relationship manager using chat, with simultaneous views of both the client and relationship manager UI in the Flowable Engage chat interface. The client mentioned that she had moved to a new home, and the RM initiated the change of address process by starting a new case right in the chat by invoking a context-sensitive digital assistant. This provided advice to the RM about address change regulatory rules, and provided a form in situ to collect the address data. The case then progresses through a combination of chat messages for collaboration between the human participants, forms filled directly in the chat window, and confirmation by the client via chat, where they are presented with the information to be updated. Potential issues, such as compliance regulations due to a country move, are raised to the RM, and related processes that include a compliance officer (via a more standard task inbox interface) execute behind the scenes. Once the compliance process completes, the RM is informed via the chat interface. Behind the scenes, there’s a standard address change BPMN diagram, where the chat interface is integrated through service activities. They also showed replacing the human compliance decision with a decision table that was created (and can be manually edited if necessary) based on a decision tree generated by machine learning on 200,000 historical address change cases; rerunning the scenario skipped the compliance officer step and approved the change instantaneously. Other automated chat tasks that the RM can invoke include setting reminders, retrieving customer information and more using natural language processing, as well as other types of more structured cases and processes. Great demo, and an excellent look at the future of chat interfaces in process and case management.

Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having founded and run a boutique ECM and BPM services firm in the past, I have a soft spot for the small companies who add value to commercial products by building integration layers and vertical solutions to do the things that those products don’t do (or don’t do very well).

Vega focuses on enterprise content and process automation, primarily for financial and government clients. They have some international offices – likely development shops, based on the locations – and about 150 consultants working on customer projects. They are partners with both IBM and Alfresco for ECM and BPM products for use in their consulting engagements. Like many boutique services firms, Vega has developed products in the course of their consulting engagements that can be used independently by customers, built on the underlying partner technology plus their own integration software:

  • Vega Interchange, which takes one of their core competencies in content migration and creates an ETL platform for moving content and processes between any of a number of systems including Documentum, Alfresco, OpenText, four flavors of IBM, and shared folders on file systems. Content migration is typically pretty complex by the time you consider metadata and permissions mappings, but they also handle case data and process instances, which is rarely tackled in migration scenarios (most vendors just recommend that you keep the old system alive long enough for all instances to complete, or do a manual migration). Having helped a lot of companies think about moving their content and process management systems to another platform, I know that this is one of those things that sounds mundane but is actually difficult to do well.
  • Vega Unity, billed as a digital transformation platform; we spent most of our time talking about Unity 7, their latest release, which I’ll cover in more detail below.
  • Vertical solutions for insurance (underwriting, claims, financial operations), government (case management, compliance) and banking (onboarding, loan origination and servicing, wealth management, card dispute resolution).

Unity 7 is an integration and application development tool that links third-party content and process systems, adding a consistent user experience layer and consolidated analytics. Vega doesn’t provide any of the back-end systems, although they partner with a couple of the vendors, but provides tools to take that heterogeneous desktop environment and turn it into a single user interface. This has significant value in simplifying the user environment, since users only need to learn one system and some of the inter-system integration is automated behind the scenes, but it’s also of benefit when replacing one or more of the underlying technologies, whether for legacy modernization or for technology consolidation after a corporate acquisition. This is what systems integrators have been doing for a long time, but Unity makes it into a product that also leverages the deep system knowledge that Vega has from their Interchange product. Vega can add Unity to simplify an existing environment, or come in on a net-new ECM/BPM implementation that uses one of their partner technologies plus their application development/integration layer. The primary use cases are federated enterprise content search (where content is indexed in the Unity Intelligence engine, including semantic searches), case management applications, and legacy modernization, where a new front end on legacy systems allows them to be swapped out without changing the user environment.

Unity is all about rapid development that includes case-based applications, content management, data and analytics. As we walked through the product and sample applications, there was definitely a strong whiff of FileNet P8 in here (a system that I used to be very familiar with) since the sample was built with IBM Case Manager under the covers, but some nice additions in terms of unified interface and analytics.

Their claim is that the Unity Case Manager would look the same regardless of the underlying technology, which would definitely make it easier to swap out or federate content, case and process management systems behind the scenes. In the sample shown, since IBM Case Manager was primary, the case view was derived directly from IBM CM case data with the main document list from IBM FileNet P8, while the “Other Documents” tab showed related documents from Alfresco. Dynamic foldering can combine content from different systems into common folders to reduce this visual dichotomy. There are role-based views based on the user profile that provide access to data from multiple systems – including CRM and others in addition to ECM and BPM – and federate it into business objects that can include records, virtual folder structures and related objects such as people or claims. Individual user credentials can be passed to the underlying systems, or shared credentials can be used in connectors for retrieving unrestricted information. Search templates, system connectors and a variety of properties are set in a configuration console, making it straightforward to set up and modify standard operations; since this is an XML-based declarative environment, these configuration changes deploy immediately. The ability to make different types of configuration changes is role-based, meaning that some business users can be permitted to make changes to the shared user interface if desired.

Unity Intelligence adds a layer of visual analytics that aggregates data from the underlying systems and other sources; however, this isn’t just visualization, but can be used to filter work and take action on cases directly via action popup menus or by opening cases directly from the analytics interface. They’re using open source tools such as SOLR (search), Lucene (information retrieval) and D3 for visualization to good effect: I saw a demo of a Sankey diagram representing the workflow through cases based on real-time data, providing a sort of process mining view of work in progress, and allowing date ranges to be selected for past views of work including completed cases. For case management, in which processes are semi-structured (at best), this won’t necessarily show process anomalies, but can show service interruptions and opportunities for process improvement and standardization.
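
As a rough idea of what feeds a Sankey view like that (a generic illustration, not Vega’s implementation), the analytics layer only needs to aggregate case step transitions into the source/target/value links that a D3 sankey layout consumes:

```typescript
// Simplified data preparation for a Sankey view of work in progress:
// aggregate case step transitions into source/target/value links.

interface Transition {
  caseId: string;
  from: string; // step the case left
  to: string;   // step the case entered
}

function toSankeyLinks(
  transitions: Transition[]
): { source: string; target: string; value: number }[] {
  const counts = new Map<string, number>();
  for (const t of transitions) {
    const key = `${t.from}\u0000${t.to}`; // delimiter unlikely to appear in step names
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts].map(([key, value]) => {
    const [source, target] = key.split("\u0000");
    return { source, target, value };
  });
}
```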

They’ve published a video showing more about Unity 7 Intelligence, as well as one showing Unity Semantics for creating pivot tables for faceted search on content repositories.


A Perfect Combination: Low Code and Case Management

The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required).

This is an accompaniment to the webinar that I did recently with Roger King and Nicolas Marzin, which is available for replay on demand.

Fun times with low code and case management

I recently held a webinar on low code and case management, along with Roger King and Nicolas Marzin of TIBCO (TIBCO sponsored the webinar). We tossed aside the usual webinar presentation style and had a free-ranging conversation over 45 minutes, with Nicolas giving a quick demo of TIBCO’s Live Apps in the middle.

Although preparing for a webinar like this takes just as long as a standard presentation, it’s a lot more fun to participate. I also think it’s more engaging for the audience, even though there’s not as much visual material; I created some slides with a few points on the topics that we planned to cover, including some fun graphics. I couldn’t resist including a visual pun about long tail applications.

You can find the playback here if you missed it, or want to watch it again. If you watched it live, there was a problem with the audio for the first couple of slides; since it was mostly me giving some introductory remarks and a quick overview of case management and low code, we just re-recorded those few minutes and fixed the on-demand version.

I’m finishing up a white paper for TIBCO on case management and low code, stressing that not only is low code the way to go for building case management applications, but that a case management paradigm is the best fit for low code applications. We should have that in publication shortly, so stay tuned. If you attended the webinar, you should receive a link to the paper when it’s published.

Low code and case management discussion with @TIBCO

I’m speaking on a webinar sponsored by TIBCO on November 9th, along with Roger King (TIBCO’s senior director of product management and strategy, and Austin Powers impressionist extraordinaire) and Nicolas Marzin (director of TIBCO’s strategic enablement group). From their registration page:

Supercharge your digital transformation – When low code meets case management

While digital transformation is likely on your company’s agenda, the demand for ever more enterprise apps is not slowing down. How can you both transform and meet this development need?

Process-centric applications that run your business involve content, events, decisions, and automation. Knowledge workers benefit from environments that integrate all of these capabilities in a case management paradigm, which combines automation with human reasoning. And new low-code development platforms will let your business users configure their own case management apps to meet their situational or strategic needs.

This is not a structured presentation or TIBCO demo: instead, I’ll kick off with a couple of level-setting slides on case management and low code platforms, then lead a discussion with Roger and Nicolas on a variety of issues facing us with low code and case management. Some of the things on my list of potential topics:

  • What are the business and technology drivers pushing us towards low code?
  • How do we reconcile citizen developers’ situational applications with a broader architecture and design vision?
  • How are low code platforms and their developers supported by a center of excellence without squashing innovation?
  • How do we handle governance of low code apps to make sure that they don’t do anything that might negatively impact privacy or performance?
  • What sort of organizational links do we need to bring together microservices developers and low code platforms?

If you have some other topics that you’d like to hear us discuss, please add them as comments below and I’ll try to work them in. Sign up for the webinar at the registration link above and join in on November 9th.

I’m also working on a couple of white papers for them on case management and low code, which is coming up in pretty much every business and technical discussion that I have these days. At least one of those papers will be available by the time of the webinar, with the other available shortly after.

Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so this might be my last post from here if I skip out on the one session that I have bookmarked for late this afternoon — and I’m sitting in on Mike Kremer of OpenText and Kiran Thakrar of SoluSoft showing SoluSoft’s Active Client Management for Insurance, built on OpenText’s Process Suite and case management. SoluSoft originally built this capability on earlier OpenText products (Global 360) but have moved to the new low-code platform. Their app can be used out of the box, or can be configured to suit a particular environment.

The goal of Active Client Management for Insurance is to provide a 360 view of the client, including data from a variety of sources (typically systems of record for policy administration or claims), content from any repository, open tasks and pending actions, checklists and ad hoc notes. It includes the entire customer lifecycle, from onboarding and underwriting, through policy administration and claims; basically, it’s user work management and CRM in one.

The solution is built on the core of Process Suite, using full entity modeling and AppWorks-style low-code development. It also includes process intelligence for analytics, Capture Center for document capture, and StreamServe for customer communication management. Above all of these OpenText building blocks, SoluSoft has built a client management solution accelerator that (I believe) they can use for a variety of vertical applications; below the OpenText layer is a service bus integration to line-of-business systems. For insurance, they’ve created a number of business processes and request types corresponding to different parts of the business, such as processing a new application, amending a policy, or initiating a claim; each of these can be configured for the specific customer’s terminology, or disabled if they don’t require specific functions. It’s not completely clear, however, how much of the functionality of other insurance systems might be replaced by this rather than augmented: clearly, the core policy administration system stays as the system of record, but an underwriting or claims system workflow might be replaced by this functionality. Having done this a few times with clients that use systems such as Guidewire, I have to say that this is a non-trivial architectural exercise to decide what parts of the flow happen where, and how to properly interact with other systems.

At the heart is a generic capture-driven workflow: scan, capture, index, data entry, process, approve, review, fulfill. The names of these can be aliased for different vertical applications — their example is that “processing” becomes “underwriting” — and steps can be skipped for a specific request type. Actions that can be performed at any of these work steps are configured using checklists. Ad hoc processes can be attached to steps in this master flow, either a single-step task or a more complex flow, and be executed before, after or in parallel to the pre-defined work step. Ad hoc processes can be created at runtime, and secondary request processes created for certain case types. The ability to make any of these configuration changes is restricted by role security. Relationships between clients, policies, brokers, claims, etc. are managed using folders for customers, policies and advisers, driven by entity modeling in Process Suite (AppWorks Low Code); this ability to establish relationships between all of these types of entities is critical for providing the complete view of the customer. They also have integrated iHub analytics for showing case statistics and workload analysis, as well as more complex analysis of risk or profitability for specific customer groups or policy types.
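
A hypothetical configuration sketch (not SoluSoft’s actual format) of how that generic capture-driven workflow could be aliased and trimmed per request type might look something like this:

```typescript
// Hypothetical configuration sketch: the generic capture-driven workflow,
// with vertical-specific aliases, skipped steps and checklists per request type.

interface StepConfig {
  step: "scan" | "capture" | "index" | "dataEntry" | "process" | "approve" | "review" | "fulfill";
  alias?: string;         // vertical-specific label, e.g. "process" becomes "underwrite"
  enabled: boolean;       // steps can be skipped for a given request type
  checklists?: string[];  // actions that can be performed at this step
}

const newApplicationRequest: StepConfig[] = [
  { step: "scan", enabled: true },
  { step: "capture", enabled: true },
  { step: "index", enabled: true },
  { step: "dataEntry", enabled: true, checklists: ["verify applicant identity"] },
  { step: "process", alias: "underwrite", enabled: true, checklists: ["order medical report"] },
  { step: "approve", enabled: true },
  { step: "review", enabled: false }, // skipped for this request type
  { step: "fulfill", alias: "issue policy", enabled: true },
];
```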


Although SoluSoft built some of this in custom code, a lot of the application is built directly in the OpenText low-code development environment provided by Process Suite. This means that it’s fast to configure or even do some basic customizations, with the caveats that I mentioned earlier about deciding where some parts of the workflow should happen when you have existing LOB systems that already do that. It also provides them with native mobile support through AppWorks, rather than having to build a separate mobile application.

We saw the version focused on insurance, but they also have flavors for pensions, financial services, government, healthcare and education. However, it appears that there is an existing legacy of the Global 360-based application, and it’s not clear how long it will take for this new AppWorks version to make its way into the market.

Case management in insurance

I recently wrote a paper on how case management technology can be used in insurance claims processing, sponsored by DST (but not about their products specifically). From the paper overview:

Claims processing is a core business capability within every insurance company, yet it is often one of the most inefficient and risky processes. From the initial communication that launches the claim to the final resolution, the end-to-end claims process is complex and strictly regulated, requiring highly-skilled claims examiners to perform many of the activities to adjudicate the claim.

Managing a manual, paper-intensive claims processing operation is a delicate balance between risk and efficiency, with most organizations choosing to decrease risk at the cost of lower operational efficiency. For example, skilled examiners may perform rote tasks to avoid the risk of handing off work to less-experienced colleagues; or manual tracking and logging of claims activities may have to be done by each worker to ensure a proper audit trail.

A Dynamic Case Management (DCM) system provides an integrated and automated claims processing environment that can improve claim resolution time and customer satisfaction, while improving compliance and efficiency.

You can download it from DST’s site (registration required).

Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part was on the community contributions and sessions, while the second half documented here is about Camunda showing new things that could be used by the community developers in the audience.

First up was Vladimirs Katusenoks, core developer on bpmn.io, with a presentation on bpmn-js: how it works, and how to extend it with custom functionality such as adding colour to BPMN diagrams, which is a permitted extension to BPMN XML. His live coding presentation showed changing the colour of a shape background, either statically in code for the element class or by adding a colour picker to an individual element’s context palette; this was based on the bpmn-js core BPMN functionality, using bpmn-moddle to read/write the metamodel and diagram-js to render it. There are a number of other bpmn-js examples on GitHub.
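
For reference, a static colour change along the lines of the live-coding demo looks roughly like this in bpmn-js; the element id and colours are examples, and the complete colours example is in the bpmn-js-examples repository on GitHub.

```typescript
// Sketch of colouring a shape with bpmn-js (element id and colours are examples).
// Assumes bpmn-js 7+ where importXML returns a promise.

import BpmnModeler from "bpmn-js/lib/Modeler";

const modeler = new BpmnModeler({ container: "#canvas" });

async function colourTask(xml: string): Promise<void> {
  await modeler.importXML(xml);

  const elementRegistry = modeler.get("elementRegistry");
  const modeling = modeler.get("modeling");

  // Hypothetical element id from the imported diagram
  const task = elementRegistry.get("Task_1");

  // setColor writes the colours into the diagram XML as a colour extension
  modeling.setColor([task], { fill: "#ffe0b2", stroke: "#fb8c00" });
}
```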

Next, Felix Müller discussed KPI management, expanding on his August blog post on the topic. KPI management is based on quantitative indicators for process cycle-time improvement, including cycle time and overdue time, plus definitions of the time period, unit of measure and calculation method. In Camunda, KPIs are defined in the Modeler, then monitored in Cockpit. He showed how to use the concept of element templates (which extend core definitions) to create custom fields on a collaboration object (process) or on individual tasks, e.g., KPI unit (hours, days, minutes) and KPI threshold (number). In Cockpit, this appears as a new KPI Overview tab, showing a list of individual instances with target/current/average duration, plus an indicator of the overdue status of the instance and any contained tasks; there is also a decorator bubble on the top right of a task in the process model showing the number of overdue instances on the aggregate model, or overdue status as a check mark or exclamation mark on individual models. The Cockpit modifications were done by creating a plug-in to display KPI statistics, which queries and calculates on the fly – a potential performance problem that might be improved through pre-aggregation of statistics. He also demonstrated how to modify this basic KPI model to include an expected duration as well as a maximum duration. A good start, although I think there’s a lot more that’s needed here.
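
To make the element template mechanism concrete, here’s a simplified sketch of the idea expressed as a TypeScript constant rather than the exact Camunda element-template JSON schema; the template id, property names and binding details are illustrative, so check the Camunda element templates documentation for the real format.

```typescript
// Simplified illustration of the element-template idea (not the exact Camunda
// schema): a template applied to tasks that adds KPI unit and KPI threshold
// fields, which a Cockpit plug-in can later read to compute overdue status.

const kpiTaskTemplate = {
  name: "KPI Task",
  id: "com.example.KpiTask", // hypothetical template id
  appliesTo: ["bpmn:Task"],
  properties: [
    {
      label: "KPI unit",
      type: "Dropdown",
      choices: [
        { name: "minutes", value: "minutes" },
        { name: "hours", value: "hours" },
        { name: "days", value: "days" },
      ],
      binding: { type: "camunda:property", name: "kpiUnit" },
    },
    {
      label: "KPI threshold",
      type: "String",
      binding: { type: "camunda:property", name: "kpiThreshold" },
    },
  ],
};

export default kpiTaskTemplate;
```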

Thorben Lindhauer, a Camunda core BPM developer, discussed how to contribute to the Camunda open source community, both at camunda.org (engine and desktop modeler, same as the commercial product) and bpmn.io (JS tools). Possible contributions include answering questions on forums; logging error reports; documenting ideas for new functionality; and working on code. Code contributions typically start by having a forum discussion about planned new functionality, then a decision is made on whether it will be core code (higher quality standards, since it will become part of the commercial product and will eventually be maintained by Camunda) or a community extension; this is followed by ongoing development, merge and release cycles. Camunda is very supportive of community contributions, even if they don’t become part of the core product: community involvement is critical to the health of any open source project.

The last presentation of the community day was Daniel Meyer discussing the product roadmap. The next release, 7.6, will be on November 30 – they have a strict twice-yearly release cycle. This release includes updates to DMN, CMMN, BPMN, rolling updates, Cockpit features, and UI/UX in web apps; I have captured a few notes here but see the linked roadmap for a more complete and accurate description and the online documentation as it is rolled out.

  • DMN:
    • Simpler decision table editing with drop-down lists of comparison/range operators instead of having to remember FEEL or JUEL syntax
    • Ability to add list of selection values (advanced mode still exists for full flexibility)
    • Decisions with literal expressions
    • DMN engine performance 4-6x faster
    • Support for decision requirements diagrams/graphs (DRD/DRG) that can link decision tables; visualization in Modeler and Cockpit are not there yet but the structures are supported – in my experience, this is typical of Camunda, which builds and releases the engine capabilities early then follows with the visualization, allowing for a quicker start for executable diagrams
  • CMMN:
    • Modeler now completely models CMMN including technical attributes such as listeners
    • Cockpit (visualization still incomplete although we saw a brief view) will allow linking models of same or different types
    • Engine feature and functionality improvements
  • Rolling updates allow the Camunda process engine to be updated without a shutdown: guaranteed backwards compatibility of the database schema allows the database to be updated first, then engine updates are rolled out by taking each engine offline individually and letting the load balancer reroute sessions.
  • BPMN:
    • BPMN conditional event supported
    • Improved modeling including labels, collapsing/expanding subprocesses to switch between view types, and field injections in property panel.
  • Cockpit:
    • More flexible/granular human task monitoring
    • New welcome page with links to apps (Cockpit, Tasklist, Admin), user profile, and frequent links
    • Batch operations (cancel, suspend, etc.) based on batch action capability built for instance migration
    • CMMN and DMN DRD visualization

Daniel discussed some other minor improvements based on customer feedback, plus plans for 2017, including a web modeler for collaborative BPMN, CMMN and DMN modeling via a SaaS offering and a future on-premise version. They finished the day with a poll and community feedback to establish priorities for future versions.

I stayed on for the second day, which is actually a separate conference: BPMCon for Camunda’s enterprise (commercial) customers. Rather, I stayed on for Neil Ward-Dutton’s keynote, then ducked out for most of the rest of the day, which was in German. Neil’s keynote included results from workshops that he has done with executives on digital transformation, and how BPM can be used to create the bridges between the diverse parts of a digital business (internal to external, automated to people-centric), while tracking and coordinating the work that flows between the different areas.

Disclaimer: Camunda paid my travel expenses to attend both conference days. I was not compensated in any way for attending or for writing this post, and the opinions here are my own.

Take Mike Marin’s CMMN survey: learn something and help CMMN research

CMMN diagram from OMG CMMN 1.0 specification document

Mike Marin, who had a hand in creating FileNet’s ECM platform and continued the work at IBM as chief architect on their Case Manager product, is taking a bit of time away from IBM to complete his PhD. He’s doing research into complexity metrics for the Case Management Model and Notation (CMMN) standard, and he needs people to complete a survey in order to finish his empirical research. The entire thing will take 45-60 minutes, and can be completed in multiple sessions; 30-40 minutes of that is an optional tutorial, which you can skip if you’re already familiar with CMMN.

Here’s his invitation to participate (feel free to share with your process and case modeling friends):

We are conducting research on the Case Management Modeling and Notation (CMMN) specification and need your help. You don’t need to be familiar with CMMN to participate, but you should have some basic understanding of process technology or graphical modeling (for example: software modeling, data modeling, object modeling, process modeling, etc.), as CMMN is a new modeling notation. Participation is voluntary and no identifiable personal information will be collected.

You will learn more about CMMN with the tutorial; and you will gain some experience and appreciation for CMMN by evaluating two models in the survey. This exercise should take about 45 to 60 minutes to complete; but it can be done in multiple sessions. The tutorial is optional and it should take 30 to 40 minutes. The survey should take 15 to 20 minutes. You can consider the survey a quiz on the tutorial.

As an appreciation for your collaboration, we will donate $6 (six dollars) to a charity of your choice and we will provide you with early results of the survey.

You can use the following URL to take the tutorial and survey. The first page provides more information on the project.

http://cmmn.limequery.org/index.php/338792?lang=en

He wrote a more detailed description of the research over on BPTrends.

Mike’s a former colleague and a friend, but I’m not asking just because of that: he’s also a Distinguished Engineer at IBM and a contributor to standards and technology that make a huge impact in our field. Help him out, take the survey, and it will help us all out in the long run.