HP Consulting’s Standards-Driven Requirements Method at BPMCM15

Tim Price from HP’s enterprise transformation consulting group presented in the last slot of day 2 of the BPM and case management summit (and what will be my last session, since I’m not staying for the workshops tomorrow) with a discussion on how to improve requirements management by applying standards. There are a lot of potential problems with requirements: inconsistency, not meeting the actual needs, not designed for change, and especially the short-term thinking of treating requirements as project assets rather than architecture assets. Price is pretty up-front about how you can’t take a “garden variety” business analyst and have them create BPMN diagrams without training, and that 50% of business analysts are unable to create lasting and valuable requirements.

Although I haven’t done any quantitative studies on this, I would tend to agree that the term “business analyst” covers a wide variety of skill levels, and you can’t just assume that anyone with that title can create reusable requirements models and assets. This becomes especially important when you move past written requirements — which need the written language skills that many BAs do have — to event-driven BPMN and other models; the main issue is that these models are actually code, albeit visual code, that may be beyond the technical analysis capabilities of most BAs.

Getting back to Price’s presentation, he established traceability as key to requirements: between BPMN or UML process models and UML use cases, for example; or upwards from processes to capabilities. Data needs to be modeled at the same time as processes, and processes should be modeled as soon as the high-level use case is defined. You can’t always create a one-to-one relationship between different types of elements: an atomic BPMN activity may translate to a use case (system or human), or to more than one use case, or to only a portion of a use case; lanes and pools may translate to use case actors, but not necessarily; events may represent states and implied state transitions, although also not necessarily. Use prose for descriptions, but not for control flow: that’s what you use process models for, with the prose just explaining the process model. Develop the use case and process models first, then write text to explain whatever is not obvious in the diagrams.
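To make the traceability idea concrete, here’s a rough sketch of my own (not from Price’s presentation; the element names and mappings below are invented) of a traceability matrix between BPMN activities and use cases, with a simple check for gaps:

    # Hypothetical sketch: a traceability matrix between BPMN activities and use cases.
    # Element names and mappings are illustrative, not from HP's method.
    from collections import defaultdict

    # BPMN activities, keyed by id, with the lane that performs them
    activities = {
        "A1": {"name": "Review application", "lane": "Case Worker"},
        "A2": {"name": "Assess eligibility", "lane": "Case Worker"},
        "A3": {"name": "Notify applicant",   "lane": "System"},
    }

    # Traceability links: one activity may map to zero, one or several use cases
    activity_to_use_cases = {
        "A1": ["UC-01 Review Application"],
        "A2": ["UC-02 Assess Eligibility", "UC-03 Record Decision"],
        # "A3" is deliberately left untraced to show gap detection
    }

    def untraced(activities, links):
        """Return activities that have no use case linked to them."""
        return [a for a in activities if not links.get(a)]

    def coverage_by_use_case(links):
        """Invert the links to see which activities realize each use case."""
        inverted = defaultdict(list)
        for activity, use_cases in links.items():
            for uc in use_cases:
                inverted[uc].append(activity)
        return dict(inverted)

    print("Untraced activities:", untraced(activities, activity_to_use_cases))
    print("Use case coverage:", coverage_by_use_case(activity_to_use_cases))

Even a toy structure like this makes it easy to spot activities with no use case behind them (or use cases with no supporting activities), which is exactly the kind of gap that traceability is meant to catch.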

He walked through a case study of a government welfare and benefits organization that went through multiple failed implementations, which were traced back to poor requirements: structural problems, consistency issues, and designs embedded in the specification. Price and his team spent 12 months getting their analysts back on track by establishing standards for creating requirements — with a few of the analysts not making the transition — that led to CMMI recognition of their new techniques. Another case study applied BPMN process models and UML use cases for a code modernization process: basically, their SDLC was the process being improved. A third case study used BPMN to document as-is and to-be processes, then use case models with complete traceability from the to-be processes to the use cases, with UML domain class models being developed in parallel.

The lessons learned from HP’s experiences:

    • Apply existing standards consistently, including BPMN, CMMN, DMN, UML

    • Use graph-structured languages for structure and logic, and prose for description

    • Use repository-based modeling tools to allow for reusability and collaboration

    • Be concise, be precise, be consistent

    • Create requirements models that are architecture assets, not just short-term project assets

    Some good lessons for requirements analysis; although this was developed for complex, more waterfall-y SDLCs, some of these can definitely be adapted for more agile implementations.

    The Enterprise Digital Genome with Quantiply at BPMCM15

    “An operating system for a self-aware quantifiable predictive enterprise” definitely gets the prize for the most intriguing presentation subtitle, for an afternoon session that I went to with Surendra Reddy and David Chaney from Quantiply (a stealth startup that has just publicly launched), and their customer, a discount brokerage service whose name I have been requested to remove from this post.

    Said customer has some significant event data challenges, with a million customers and 100,000 customer interactions per day across a variety of channels, and five billion log messages generated every day across all of their product systems and platforms. Having this data exist in silos with no good aggregation tools means fragmented and poor customer support, and also significant challenges in system and internal support.

    To address these types of heterogeneous data analysis problems, Quantiply has a two-layer tool: Edge Cloud for the actual data analysis, which can then be exposed to different roles based on access control (business users, operational users, data scientists, etc.); and Pulse for connecting to various data sources including data warehouses, transactional databases, BPM systems and more. It appears that they’re using some sort of dimensional fact models, which are fairly standard in data warehouse analytics, but their Pulse connectors allow them to pour in data on a near-real-time basis, then make the connections between capabilities and services to be able to do fast problem resolution on their critical trading platforms. Because of the nature of the graph connectivity that they’re deriving from the data sources, they’re able to not only resolve the problem by drilling down, but also determine which customers were impacted by the problem in order to follow up. In response to a question, the customer said that they had used Splunk and other log analytics tools, but that this was “not Splunk”, in terms of both the real-time nature and the front-end user experience, plus deeper analytical capabilities such as long-term interaction trending. In some cases, the Quantiply representation is sufficient analysis; in other cases, it’s a starting point for a data scientist to dig in and figure out some of the more complex correlations in the data.
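    The graph mechanics weren’t shown at the code level, but as a rough illustration of the general idea (my own sketch with invented services and edges, not Quantiply’s implementation), deriving customer impact from a dependency graph can be as simple as walking the graph backwards from the failed component:

        # Illustrative sketch: customer impact via a service dependency graph.
        # Node names and edges are invented; this is not Quantiply's code.
        from collections import deque

        # Directed edges meaning "depends on": customer interaction -> service -> platform
        depends_on = {
            "cust-123:place-order": ["order-service"],
            "cust-456:get-quote":   ["quote-service"],
            "order-service":        ["trading-platform"],
            "quote-service":        ["trading-platform"],
            "trading-platform":     [],
        }

        def impacted_by(failed_node, graph):
            """Walk the dependency graph backwards to find everything that
            directly or indirectly depends on the failed node."""
            dependents = {}
            for node, deps in graph.items():
                for dep in deps:
                    dependents.setdefault(dep, []).append(node)
            seen, queue = set(), deque([failed_node])
            while queue:
                current = queue.popleft()
                for dependent in dependents.get(current, []):
                    if dependent not in seen:
                        seen.add(dependent)
                        queue.append(dependent)
            return seen

        # A fault on the trading platform rolls up to the customer interactions it affects
        print(impacted_by("trading-platform", depends_on))

    The interesting part in their case is that the graph itself is derived from the incoming log and event streams rather than being hand-maintained, which is what makes the near-real-time drill-down and customer follow-up possible.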

    There was a lot of detail in the presentation about the capabilities of the platform and what the customer is doing with it, and the benefits that they’re seeing; there’s not a lot of information on the Quantiply website since they’re just publicly launching.

    Update: The original version of this post included the name of the customer and their representative. Since this was a presentation at a public conference with no NDA or confidentiality agreements in place, not even a verbal request at any time during the session, I live-blogged as usual. A day later, the vendor, under pressure from the customer’s PR group, admitted that they did not have clearance to have this customer speak publicly, which is a pretty rookie mistake on their part, although it lines up with my general opinion on their social media skills. As a favor to the conference organizers, who put a lot of effort into making a great experience for all of us, I’ve decided to remove the customer’s name from this post. I’m sure that those of you who really want to know it won’t have any trouble finding it, because of this thing called “the internet”.

    The Digital Enterprise Graph with @denisgagne at BPMCM15

    Yesterday, Denis Gagné demonstrated the modeling tools in the Trisotech Digital Enterprise Suite, and today he showed us the Digital Enterprise Graph, the semantic layer that underlies the modeling tools and allows for analysis of relationships between them. There are many stakeholders involved in defining and implementing a digital enterprise, including enterprise architects, business architects and process analysts; each of these roles has a different view on transformation of the enterprise and different goals for their work. He sees a need for a more emergent enterprise architecture rather than a structured top-down architecture effort: certainly, architects need to create the basic structure, but rather than trying to build out every artifact that might exist in the architecture before making use of it, a more pragmatic approach is for a “just-in-time” architecture that is a bit more self-organizing.

    A graph, in general, is a powerful but simple construct: it consists only of nodes and links, but can provide meaningful connections of loosely-coupled business entities that can be easily modified. Think about a social graph, such as Facebook’s social graph: it’s just people and their connections, but it’s a rich context for analyzing the relationships between nodes (people) in the graph depending on the nature of the links (friends, likes, etc.) between them. Trisotech’s Digital Enterprise Graph links the who, what, when, where, why and how of an organization by mapping every model that is added to the Graph onto those types of nodes and links, whether the model originates with one of their own modelers (BPMN, CMMN, DMN) or an external EA modeling tool (Casewise, SAP PowerDesigner, Sparx). This provides an intelligent fabric for automated reasoning about the current relationships between parts of the organization, and also allows estimation of the impact of changes in one area on other parts of the organization. Their Insight Analyzer tool allows you to introspect the graph, providing views such as interconnectivity between nodes as part of impact analysis, or tracing responsibility for a capability up through the organizational structure. The analysis isn’t automated, but provides visualization tools for analysts and planners, based on a single integrated schema that allows for straightforward queries.
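    To give a sense of why such a simple structure is so powerful, here’s a toy node-and-link example of my own (the element names are invented, and this is not Trisotech’s actual schema) showing how a question like “who is responsible for this capability?” becomes a short traversal over typed links:

        # Toy typed node-and-link graph; names invented, not Trisotech's schema.
        nodes = {
            "n1": {"type": "who",  "name": "Claims Department"},
            "n2": {"type": "who",  "name": "Claims Adjuster"},
            "n3": {"type": "how",  "name": "Assess Claim (BPMN activity)"},
            "n4": {"type": "what", "name": "Claims Handling (capability)"},
        }
        links = [
            ("n2", "member-of", "n1"),
            ("n2", "performs",  "n3"),
            ("n3", "realizes",  "n4"),
        ]

        def who_is_responsible_for(capability_id, nodes, links):
            """Trace back from a capability through 'realizes' and 'performs'
            links to the roles and org units behind it."""
            activities = {s for s, rel, t in links if rel == "realizes" and t == capability_id}
            roles = {s for s, rel, t in links if rel == "performs" and t in activities}
            org_units = {t for s, rel, t in links if rel == "member-of" and s in roles}
            return {nodes[n]["name"] for n in roles | org_units}

        print(who_is_responsible_for("n4", nodes, links))
        # -> the adjuster role and the department it belongs to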

    He gave us a demo of the Graph in action, starting with a BPMN model that uses the Sparx EA accelerator for SOA architecture artifacts, and tracing through that loose coupling to the architectural components in the EA framework, with similar linkages for roles from a Casewise business architecture framework and definitions of contracts from the Financial Industry Business Ontology (FIBO). The idea is that the Graph provides an underlying mesh of semantic linkages from elements in a model to other frameworks, ontologies and models while still retaining business understandability at the model level. In the Insight Analyzer, we saw how to explore linkages between different types of elements, such as RACI-type relationships between roles and activities, as well as a more detailed introspection that allows drilling down on any node to see what other nodes and models it is linked to, and traversing those links.

    Interesting ideas about how to bring together all of the architecture, process, case and decision models and frameworks into a single graph for analysis of your digital enterprise.

    Wearable Workflow by @wareFLO at BPMCM15

    Charles Webster gave a breakout session on wearable workflow, looking at some practical examples of combining wearables — smart glasses, watches and even socks — with enterprise processes, allowing people wearing these devices to have device events integrated directly into their work without having to break to consult a computer (or at least a device that self-identifies as a computer). Webster is a doctor, and has a lot of great case studies in healthcare, such as detecting when a healthcare worker hasn’t washed their hands before approaching a patient by instrumenting the soap dispenser and the worker. Interestingly, the technology for the hand hygiene project came from smart dog collars, and we’re now seeing devices such as Intel’s Curie that are making this much more accessible by combining sensors and connectivity as we commercialize the internet of things (IoT).
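    The hand hygiene example is essentially event correlation: sensor events from the soap dispenser and the worker’s badge feed a rule that decides whether to raise an alert. As a rough sketch of my own (the event format, names and timings are invented, not from Webster’s project), the core check could look something like this:

        # Illustrative hand-hygiene check: flag bedside entries not preceded by a
        # soap-dispense event from the same worker within an allowed window.
        # Event format and values are invented for illustration.
        from datetime import datetime, timedelta

        HYGIENE_WINDOW = timedelta(minutes=5)

        events = [
            {"worker": "rn-17", "type": "soap_dispensed",  "at": datetime(2015, 6, 23, 10, 1)},
            {"worker": "rn-17", "type": "entered_bedside", "at": datetime(2015, 6, 23, 10, 3)},
            {"worker": "rn-22", "type": "entered_bedside", "at": datetime(2015, 6, 23, 10, 4)},
        ]

        def hygiene_alerts(events, window=HYGIENE_WINDOW):
            alerts = []
            for e in events:
                if e["type"] != "entered_bedside":
                    continue
                washed = any(
                    d["type"] == "soap_dispensed"
                    and d["worker"] == e["worker"]
                    and timedelta(0) <= e["at"] - d["at"] <= window
                    for d in events
                )
                if not washed:
                    alerts.append((e["worker"], e["at"]))
            return alerts

        print(hygiene_alerts(events))  # rn-22 entered the bedside with no recent dispense event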

    He was an early adopter of Google Glass, and talked to us about the experience of having a wearable integrated into his lifestyle, such as for voice-controlled email and photography, plus some of the ideas for Google Glass that he has for healthcare workflows where electronic health records (EHR) and other device information can be integrated with work patterns. Google Glass, however, was not a commercial success: it was too bulky and geeky-looking, and required frequent recharging with heavy use. It needs more miniaturization to be considered a possibility for most people, but that’s a matter of time, and probably a short amount of time, especially if the electronics are integrated directly into eyeglass frames, which likely have a lot of unused volume that could be filled with electronic components.

    Webster talked about a university curriculum for healthcare technology and IoT that he designed, which would include the following courses:

    • Wearable human factors and workflow ergonomics
    • Data and process mining of wearable data, since wearables generate so much more interesting data that needs to be analyzed and correlated
    • Designing and prototyping wearable products

    He is working on a prototype for a 3D-printed, Arduino-based wearable interactive robot, MrRIMP, intended to be used by pediatric healthcare professionals to amuse and distract their young patients during medical examinations and procedures. He showed us a video demo of himself and MrRIMP interacting, and the different versions that he’s created. Great ideas about IoT, wearables and healthcare.

    Day 2 Keynote at BPMCM15

    Second day at the BPM and Case Management summit in DC, and our morning keynote started with Jim Sinur — former Gartner BPM analyst — discussing opportunities in BPM and case management. He pointed out the proven benefits of process and case management, in terms of improving revenue, costs, time to market, innovation and visibility, while paving a path to digital transformation. However, these tried-and-true ROI measures just aren’t enough these days: we also need to consider customer loyalty, IoT, disruptive companies and business models, and in general, maintaining competitive differentiation in whatever way necessary to thrive in the emerging marketplace. In order to accommodate this, as well as attract good workers, it’s necessary to break the specialist mindset and allow people to become knowledge workers. I gave a workshop last week at the IRM BPM conference on the future of work, and I agree that this is a key part of it: more of the routine work is being automated, leaving the knowledge work for the people in the process; this requires a work environment that allows people to do the right thing at the right time to achieve a goal, not just work at a pre-defined task in a pre-defined way. Sinur cited a number of examples of processes that are leveraging emerging technologies, including knowledge workers’ workbenches that incorporate smart automated agents and predictive analytics, and IoT applications in healthcare and farming. The idea is to create goal-driven and proactive “swarming” processes that figure out on their own how to accomplish a goal through both human and automated intelligence, then assemble the resources to do it. Instead of pre-defining processes, you provide goals, constraints, analytics and contexts; the agents — including people, services, bots and sensors — create each process instance on the fly to best meet the situation. Although his case studies included a number of other technologies, he finished with a comment on how BPM and case management can be used to coordinate and orchestrate these processes as we move to a new world of digital transformation of the customer experience.
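    To illustrate what “assemble the resources on the fly” might mean in practice, here’s a purely illustrative sketch of my own (the agents, capabilities and costs are invented, and this is not from Sinur’s material): given a goal’s required capabilities and a constraint, the runtime picks agents rather than following a pre-defined flow.

        # Purely illustrative: goal-driven assembly of agents (people, bots, services)
        # for a process instance, instead of a pre-defined flow. All values invented.
        agents = [
            {"name": "claims-bot",      "capabilities": {"classify-claim"},                   "cost": 1},
            {"name": "pricing-service", "capabilities": {"calculate-payout"},                 "cost": 2},
            {"name": "senior-adjuster", "capabilities": {"classify-claim", "approve-payout"}, "cost": 10},
        ]

        def assemble(goal_capabilities, agents, budget):
            """Greedily choose the cheapest agent for each required capability,
            subject to a simple cost constraint."""
            plan, total = [], 0
            for capability in sorted(goal_capabilities):
                candidates = [a for a in agents if capability in a["capabilities"]]
                if not candidates:
                    raise ValueError("no agent can provide " + capability)
                choice = min(candidates, key=lambda a: a["cost"])
                plan.append((capability, choice["name"]))
                total += choice["cost"]
            if total > budget:
                raise ValueError("plan exceeds the cost constraint")
            return plan

        goal = {"classify-claim", "calculate-payout", "approve-payout"}
        print(assemble(goal, agents, budget=20))

    A real implementation would obviously use analytics and context rather than a greedy loop, but the shift is the same: the goal and constraints are declared, and the binding of work to resources happens per instance.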

    Next up was Tom Debevoise, now with Signavio to help promote their recently-released DMN modeler; we had a sneak peek of the DMN modeler at bpmNEXT. He talked about three levels of decisions — strategic (e.g., should we change our business model), tactical (e.g., which customers to target) and operational (e.g., which discount to apply to this transaction) — and how these tend to be embedded within process models and business application logic, rather than externalized into decision models where they can be explicitly managed. Most organizations manage their decisions, both human and automated, very poorly, resulting in inconsistent or just plain wrong decisions being made. In other words, our business decisions are at the same point now as business processes were a decade or more ago, before BPM systems became widespread, and the path to improving this is to consider decision management as a discipline as well as the systems to model and automate decisions. We now have a decision modeling standard, DMN 1.0; this is expected to drive the adoption of decision modeling in organizations in the same way that BPMN did for process modeling. He proposed a decision management lifecycle similar to a BPM lifecycle, starting with decision discovery that allows modeling using the DMN-standard elements of a decision, input data, knowledge sources, information requirements, authority requirements and knowledge requirements. He wrapped up with the linkage between process and decision models, particularly using the Signavio BPMN and DMN modelers: how decisions defined externally to a process can be used to assign process activity participants, decide on next steps, select the process pathway, define data access control, or detect and respond to events. We saw yesterday how Trisotech’s tools combine BPMN, CMMN and DMN, and today how Signavio combines BPMN and DMN; as more process modeling vendors expand to include decision modeling, we are going to see more integrated implementations of these modeling standards.
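    Taking the operational example from his talk (which discount to apply), here’s a minimal sketch of my own of what externalizing that decision looks like, in the spirit of a DMN decision table with a first-match hit policy; the rules are invented for illustration, and the calling process only sees the decision’s output:

        # Minimal sketch of an operational decision ("which discount to apply")
        # as a decision-table-style rule set, in the spirit of DMN. Rules invented.
        rules = [
            {"tier": "gold",   "min_total": 0,    "discount": 0.10},
            {"tier": "silver", "min_total": 1000, "discount": 0.05},
            {"tier": "silver", "min_total": 0,    "discount": 0.02},
            {"tier": "bronze", "min_total": 0,    "discount": 0.00},
        ]

        def decide_discount(tier, order_total, rules):
            """Return the first matching rule's discount (first-match hit policy)."""
            for rule in rules:
                if rule["tier"] == tier and order_total >= rule["min_total"]:
                    return rule["discount"]
            return 0.0

        # The process model invokes the decision and acts on the result, without
        # embedding the discount logic in gateways or application code.
        print(decide_discount("silver", 1200, rules))  # 0.05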

    The last speaker in the keynote was Lloyd Dugan, on how business architecture and BPM work together, in response to a paper that he wrote last year with Neal McWhorter. Although dense (I recommend checking out the paper at the link), his presentation discussed some of the issues with reconciling business architecture and BPM, such as mapping value stream, balanced scorecard and other BA models onto activities within a process model. He reviewed a number of definitions and model types, cutting a wide swath through pretty much everything even remotely related to process and architecture, and highlighting some of the failures of mapping enterprise architecture frameworks to BPMN. He finished with a spectrum from the business model perspective (what the business is doing) to the operational model perspective (how the business is doing it), and how the business architecture and BPM viewpoints differ, but can still both use BPMN as a modeling language. Pretty sure of two things from this: 1) I missed a lot of the detail, and 2) Dugan has never heard that you’re supposed to have fewer than 500 words on each PowerPoint slide.

    BPMN, CMMN and DMN with @denisgagne at BPMCM15

    Last session of day 1 of the BPM and Case Management Summit 2015 in DC, and Denis Gagné of Trisotech is up to talk about the three big standards: the Business Process Model and Notation (BPMN), the Case Management Model and Notation (CMMN), and the Decision Model and Notation (DMN). BPMN has been around for a few years and is well-established — pretty much every business process modeling and automation vendor uses BPMN in some form in their process modelers, and it is OMG’s most-adopted standard — but CMMN and DMN are much newer and less widespread in the market. There are a few vendors offering CMMN modelers and even fewer offering DMN. There are two major benefits to standards such as BPMN, CMMN and DMN, in addition to the obvious benefit of providing an unambiguous format for modeling processes, cases and decisions: they can be used to create models that can be interchanged between different vendors’ products; and they provide a common and readily-transferable “language” that is learned by analysts. This interchangeability, both of models and skills, means that organizations don’t need to be quite so worried about which modeling tool they use, or the people that they hire to use it. Denis was at the OMG Model Interchange Working Group (MIWG) meeting in Berlin last week, where they showed all types of interchange for BPMN; with luck, we’ll be seeing the same activities for the other standards as they become widely adopted.

    There are some grey areas about when to use BPMN versus CMMN, since both are (sort of) process-based. However, the main focus in BPMN is on activities within processes, whereas CMMN focuses on events that impact cases. He showed a chart comparing different facets of the three standards:

    BPMN          | CMMN                          | DMN
    Processes     | Cases                         | Decisions
    Activities    | Events                        | Rules
    Transitional  | Contextual                    | Applied
    Data          | Information                   | Knowledge
    Procedural    | Declarative                   | Functional
    Token         | Event Condition Action (ECA)  | First Order Logic (FOL)

    The interesting part (at least to me) comes when we look at the bridges between these standards: in BPMN, there is a business rule task that can call a decision in DMN; in CMMN, there is a process task that can call a process defined in BPMN. Trisotech’s version of all of these modelers (not yet in the standards, but count on Denis to get them in there) also provides for a case task type in BPMN that can call a CMMN case, and a decision task in CMMN that can call a DMN decision. There are some patterns to watch for when modeling that might indicate that you should be using another model type:

    • In BPMN, if you have a lot of gateways expressing business logic, then consider moving the gateway logic to DMN
    • In BPMN, if you have a lot of events, especially boundary events, then consider encapsulating that portion into a CMMN case
    • In BPMN, if you have a lot of ad hoc subprocesses, then consider using CMMN to allow for greater specification of the ad hoc activities
    • In CMMN, if you have a lot of task interdependencies, consider using BPMN to replace the temporal dependencies with flow diagrams

    The recognition and refactoring of these patterns are pretty critical for using the right model type, and are likely a place where a more trained technical analytical eye might be able to suggest improvements to models created by a less-technical analyst who isn’t familiar with all of the model types or how to think about this sort of decomposition and linking.

    He demonstrated integration between the three model types using the Trisotech BPMN, CMMN and DMN modelers, where a decision task in the BPMN modeler can link directly to a decision within a model in the DMN modeler, and a case task in BPMN can link directly to a case model in the CMMN modeler. Nice integration, although it remains to be seen what analyst skill level is required to model across all three types, or how to coordinate different analysts who might each be modeling in only one of the three model types, with the different models loosely coupled across authors.

    Disclosure: I’m doing some internal work with Trisotech, which means that I have quite a bit of knowledge about their products, although I have not been compensated in any way for writing about them here on my blog.

    Fannie Mae Case Study on Effective Process Modeling at BPMCM15

    Amit Mayabhate from Fannie Mae (a US government-sponsored enterprise that buys mortgages from banks and packages them for sale as securities) gave a session at the BPM and Case Management Summit on outcome-based process modeling for delivering business value. He had a few glitches getting started — apparently Fannie Mae doesn’t allow employees to download a presentation to their laptop, so he had to struggle through getting connected to the conference wifi and then the Fannie Mae VPN to open a PDF of his presentation — but did tell the best joke of the day when he was restarting his computer in front of us and said “now you know my password…it’s 8 dots in a row”.

    Back on track, he discussed their business architecture efforts and how process modeling fits into it. Specifically, he talked about their multifamily housing division, which had its own outdated and inflexible technology platform that they wanted to change out for a simpler infrastructure that would give them better access to information for better decision-making. To get there, they decided to start with the best possible outcome in mind, but first had to have the organization understand not only that they had problems, but also some quantification of how big those problems were, in order to set those future goals. They identified several key metrics where they could compare today’s measurements with their desired future goals, such as operational efficiency (manual versus automated) and severability. To map from the current to future state, they needed a transformation roadmap and a framework for achieving the steps on the roadmap; this included mapping their journey to greater levels of process maturity, and creating a business capability model that included 17 capabilities, 65 functions, 262 sub-functions, and around 300 process flows.

    Their business architecture transformation framework started with the business model (how do we make money), the operating model (how do we behave to make money) and the business capability model (what abilities are needed) using the Business Model Canvas framework. They used other architecture analysis tools, such as analyzing their operating model by plotting business process standardization against business process integration both for their current state and desired future state, to help them develop the strategy for moving between them. They used Mega’s business strategy module for most of the architecture analysis, which helps them identify business processes that are ripe for automation, then move to a BPMS for process modeling and automation. In that way, they can do just the process modeling that provides them with some architectural change that they know will provide value, rather than attempting to boil the ocean by modeling all processes in the organization.

    PCM Requirements Linking Capability Taxonomy and Process Hierarchy at BPMCM15

    I’m in Washington DC for a couple of days at the BPM and Case Management Summit; I missed this last year because I was at the IRM BPM conference in London, and in fact I was home from IRM less than 36 hours this weekend before I got back on a plane to head down to DC this morning.

    I’m in a breakout session to hear John Matthias from the Court Consulting Services of the National Center for State Courts, who focuses on developing requirements for court case management systems. As might be expected, the usual method for courts to acquire their case management systems is to just pick the commercial off-the-shelf (COTS) software from the leading packaged solution vendor, then customize it to suit. Except in general, the leading vendor’s software doesn’t meet the current needs of courts’ case workers and court clerks, and Matthias is trying to rework the best practices to create definitive links and traceability between requirements, processes and the business capabilities taxonomy.

    As he noted, justice case management is a prime example of production case management (PCM), wherein there are well-worn paths and complicated scenarios; multiple agents are involved (court and clerks, prosecution, public defender) and the specific order of activities is not always pre-defined, so the key role of the PCM system is to track and respond to state changes in the system of record. There are, however, some points at which there are very specific rules of procedure and deadlines, including actions that need to be taken in the event of missed deadlines. The problem comes with the inflexibility of the existing COTS justice case management software available in the market: regardless of how much study and customization is done at the time of original installation (or, perhaps, in spite of it), the needs change over time and there is no way for the courts to make adjustments to how the customized COTS package behaves.

    To address the issue of requirements, Matthias has developed a taxonomy of business capabilities: a tree structure that breaks each business capability down into increasingly specialized capabilities that can be mapped to the capability’s constituent requirements. He’s also looked at a process hierarchy, where process stages break down to process groups, and then to elementary processes. This process hierarchy is necessary for organization of the processes, particularly when it comes to reusability across various case types. Process groups show hand-offs between workers on a case, while the elementary processes are the low-level workflows that may be able to be fully automated, or at least represent atomic tasks performed by workers. The elementary processes are definitely designed to be reusable, so that a process such as “Issue Warrant” can be related to a variety of business capabilities. Managing the relationships between requirements gets complex fast, and they’re looking at requirements management software that allows them to establish relationships between business capabilities, business rules, processes, system requirements and more, then understand traceability when there is a change to one component.
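    As a rough illustration of my own (the capability and process names other than “Issue Warrant” are invented), this is the kind of structure that the capability taxonomy and process hierarchy imply, and why a traceability question like “what is affected if we change this process?” becomes a straightforward query:

        # Illustrative capability taxonomy with reusable elementary processes;
        # all names except "Issue Warrant" are invented.
        capability_taxonomy = {
            "Case Initiation": {
                "File Case":     ["Validate Filing", "Assign Case Number"],
                "Issue Process": ["Issue Warrant", "Issue Summons"],
            },
            "Hearings": {
                "Enforcement":   ["Issue Warrant"],  # the same elementary process, reused
            },
        }

        def capabilities_using(process_name, taxonomy):
            """Trace which capabilities and sub-capabilities depend on an elementary
            process, e.g. to assess the impact of changing that process."""
            hits = []
            for capability, sub_capabilities in taxonomy.items():
                for sub_capability, processes in sub_capabilities.items():
                    if process_name in processes:
                        hits.append((capability, sub_capability))
            return hits

        print(capabilities_using("Issue Warrant", capability_taxonomy))
        # [('Case Initiation', 'Issue Process'), ('Hearings', 'Enforcement')]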

    Unlike systems with completely pre-defined processes, the requirements for PCM systems need to have the right degree of granularity (not so much that it overconstrains the workers, but not so little that it provides insufficient guidance), have performance measurement built in, and link to systems of record to provide state awareness and enable process automation. The goal is to achieve some amount of process discipline and standardization while still allowing variations in how the case managers operate: provide guidance, but allow for flexible selection of actions. Besides that ability to provide guidance without overconstraining, developing requirements for a PCM isn’t that much different from other enterprise systems: consider the future state, build to change, and understand the limits of the system’s configurability. I would also argue that requirements for any user-facing system shouldn’t be done using a waterfall methodology where complete detailed requirements are necessary before implementation, but rather with a more Agile approach where you collect the high-level requirements, then enough detailed requirements to get you to your first implementation in an iterative development cycle. At which time all of the requirements will change anyway.