BPM for Small and Medium Businesses

Tom Bellinson of IT Methods looked at some of the reasons why BPM in small business (under 500 people) is different from that in larger businesses, based on some of the fundamental differences in how small and large businesses work, and therefore how they deal with process improvement. There are some advantages to looking at process improvement in large companies – more human and financial resources, and a longer time frame – but small businesses have the advantage that usually their processes can be understood and tackled end to end rather than piecemeal.

Because of fewer available resources, a big bang approach sometimes doesn’t work for small companies: the big bang is good for just getting it done all at once, but causes a great deal more stress and requires more concentrated effort. Bellinson proposes using a big bang approach for the analysis and planning phases, then implementing using a more incremental approach. This can still cause significant stress, particularly during the analysis phase, when you may be challenging the company owner’s personality directly as it manifests in company culture and processes. Although process analysis always challenges management in some way in any sized company, when the owner personally created specific processes, and those processes are now being called out as broken, that can be uncomfortable all around.

For a big bang approach to process mapping to work in a small business, it needs to be made the #1 priority in the company so that it doesn’t get pushed aside when the inevitable emergencies occur; you’ll also need to hire an external consultant to guide the process and gather the information, since the odds of those skills being inside your company and readily available are near zero. This is really a 2-4 week effort, not the months that it might take in a larger company, so although it will be a bit stressful and disruptive during that time, you need to bite the bullet and just get it done. The analysis itself isn’t unique to small businesses – map as-is processes, find and eliminate the non-value-added activities, determine ROI – but sometimes the roles are a bit different, with the process consultant actually doing the process improvement exercise and presenting it to the company, rather than internal participants being involved in the reengineering efforts.

I’ve been approached by a few smaller businesses lately who were interested in BPM, and I think that the tools are finally at a price point that SMBs can consider implementing BPM inside their organizations. I agree with Bellinson that many of the techniques are just different for smaller businesses; having started and run two small businesses of up to 40-50 people in size, I can certainly understand how the owner’s personality can have a big influence on the corporate culture, and therefore on the way that business process improvement has to happen. However, there are still a lot of standard BPM principles and methodologies that can be applied, just on a smaller scale.

SaaS BPM at Surrenda-link

Bruce Spicer of Keystar Consultancy presented on a project that he did with Surrenda-link Investment Management to implement Appian cloud-based BPM for the process around procuring US life settlement assets (individual life insurance policies) to become part of their investment funds. They were specifically looking at a software-as-a-service offering for this, in order to reduce cost and risk (considering the small size of their IT group), since SaaS allows them to scale up and down seamlessly without increasing costs significantly. They’ve built their own portal/user interface, using Appian Anywhere as the underlying process and analytics engine; it surprises me a bit that they’re not using more of the out-of-the-box UI.

They were over time and over budget, mostly because they (admittedly) screwed up the process mapping due to immature processes, inexperience with process analysis, and inexperience with gathering requirements versus just documenting the as-is state. Even worse, someone senior signed off on these incorrect process models, which were then used for initial development in the proof of concept before corrections were made. They made some methodology corrections after that, improving their process analysis by looking at broad processes before doing a detailed view of a functional silo, and moving to Agile development methodologies. Even with the mistakes that were made, they’re in production and on track to achieve their three-year ROI.

This should be a compelling case study, but maybe because it was just after lunch, or maybe because his presentation averaged 120+ words per slide, I had a hard time getting into this.

Resolving Case Management Challenges with Dynamic BPM

Dermot McCauley of Singularity discussed case management and its need for dynamism. He’s one of the co-authors of Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize The Way That Knowledge Workers Get Things Done, and started with a definition of case management:

Case management is the management of long-lived collaborative processes that require secure coordination of knowledge, content, correspondence and resources to achieve an objective or goal. The path of execution cannot be predefined. Human judgment is required in determining how to proceed, and the state of a case can be affected by external events.

As he pointed out, cases are inherently unpredictable, emerging and changing over time, and must allow case workers to chart their own course through the process of managing the case, deciding on the right tasks to do and the right information to include at the right time. He discussed 14 key characteristics of case management, including “goal driven”, “information complexity” and “fluid participants and roles”, and how a case management technology platform must include aspects of BPMS, ECM and collaboration technologies in order to effectively support the knowledge workers. He also discussed the criticality of the history of a case, even more so than with structured processes, since cases are typically long-running and might include several workers added partway through the case timeline. Case workers need a flexible work environment, since that’s the nature of their work, which means that they need to be able to configure their own desktop environment via mashup-like functionality in order to organize their work effectively.

He also covered off a bit of their own product; interesting to see that there is a process map underlying a case, with a “happy path” showing what the case should be doing, but providing the user at any point with the ability to skip forward or back in the process map, initiate other (pre-defined) tasks, reassign the task to another user, and change case characteristics such as priority and expected completion time. This is not a purely unstructured process, where there is no predefined model, but dynamic BPM, where the model is predefined but can be readily changed while in flight. They have implemented a solution with the UK Insolvency Service, dealing with individual bankruptcy; this was targeted at a new low-cost program that the Insolvency Service was putting in place to handle the large number of low-asset individual insolvency cases in the face of the recent economic crisis. They used an agile approach, moving the case files from paper to electronic and providing a more flexible and efficient case management process that was live within 12 months of the original government legislation that enacted the program.
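To make the happy-path-with-deviations idea concrete, here’s a minimal sketch in Python (hypothetical names and structure, not Singularity’s actual product) of a case that carries a predefined path but lets a worker jump around in it, with every deviation logged to the case history:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A case with a predefined 'happy path' that workers may deviate from."""
    happy_path: list                 # ordered task names from the process map
    current_index: int = 0
    assignee: str = "unassigned"
    priority: str = "normal"
    history: list = field(default_factory=list)  # case history matters for long-running cases

    def _log(self, event):
        self.history.append(event)

    def complete_current(self):
        self._log(f"completed {self.happy_path[self.current_index]}")
        self.current_index = min(self.current_index + 1, len(self.happy_path) - 1)

    def jump_to(self, task):
        """Skip forward or back in the predefined map -- the 'dynamic' part."""
        self._log(f"jumped to {task}")
        self.current_index = self.happy_path.index(task)

    def reassign(self, user):
        self._log(f"reassigned from {self.assignee} to {user}")
        self.assignee = user

case = Case(happy_path=["receive application", "assess", "decide", "notify"])
case.complete_current()
case.jump_to("decide")            # worker judgment overrides the model
case.reassign("senior.examiner")
print(case.history)
```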

Bridging Process Modeling and IT Solutions Design at adidas

Eduardo Gonzalez of the adidas Group talked about how they are implementing BPM within their organization, particularly the transition from business process models to designing a solution, which ties in nicely with the roundtable that I moderated yesterday. The key issue is that process models are created for the purpose of modeling the existing and future business processes, but the linkage between that and requirements documents – and therefore on to solution design – is tenuous at best. One problem is with traceability: there is no way to connect the process models to the thick stack of text-based requirements documents, and from the requirements documents to the solution modules; this means that when something changes in a process model, it’s difficult to propagate that through to the requirements and solution design. Also, the requirements leave a bit too much to the developers’ imaginations, so often the solution doesn’t really meet the requirements.

The question becomes how to insert the business process models into the software development lifecycle. Different levels of the process model are required, from high-level process flows to executable workflows; they wanted to tie this in to their V-cycle model of solution design and development, which appears to be a modified waterfall model with integrated testing. Increasingly granular process models are built as the solution design moves from requirements and architecture to design and implementation; the smaller and more granular process building blocks, translated into solution building blocks, are then reassembled into a complete solution that includes a BPMS, a rules engine, a portal, and several underlying databases and other operational systems that are being orchestrated by the BPMS.

adidas V-BPM Cycle Reference Model

Gonzalez has based some of their object-driven project decomposition methods on Martyn Ould’s Business Process Management: A Rigorous Approach, although he found some shortcomings to that approach and modified it to suit adidas’ needs. Their approach uses business and solution objects in an enterprise architecture sort of approach (not surprising when he mentioned at the end of the presentation that he is an enterprise architect), moving from purely conceptual object models to logical object models to physical object models. Once the solution objects have been identified, they model each object’s states through its lifecycle, and object handling cases (analogous to use cases) that describe how the system handles an object through its full lifecycle, including both system and human interaction. He made the point that you have to have the linkage to master data; this is becoming recognized as a critical part of process applications now, and some BPMS vendors are starting to consider MDM connectivity.
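As a generic illustration of object state modeling (my own sketch, not adidas’ actual models), an “order” object’s lifecycle can be captured as a small state machine, with the object handling cases describing which system and human steps drive each transition:

```python
from enum import Enum, auto

class OrderState(Enum):
    """Lifecycle states of a hypothetical 'order' solution object."""
    CREATED = auto()
    APPROVED = auto()
    FULFILLED = auto()
    CLOSED = auto()

# Allowed transitions define the object's lifecycle; an "object handling case"
# describes which system and human interactions drive each transition.
TRANSITIONS = {
    OrderState.CREATED:   {OrderState.APPROVED},
    OrderState.APPROVED:  {OrderState.FULFILLED},
    OrderState.FULFILLED: {OrderState.CLOSED},
    OrderState.CLOSED:    set(),
}

def advance(state, target):
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target

state = advance(OrderState.CREATED, OrderState.APPROVED)   # ok
# advance(OrderState.CREATED, OrderState.CLOSED)           # would raise
```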

The end solution includes a portal, BPMS, BRMS, ESB, MDM, BI and back-end systems – a fairly typical implementation – and although the cycle for moving from process model to solution design isn’t automated, at least they have a methodology that they use to ensure that all the components are covered and in synchronization. Specific models at particular points in their cycle include models from multiple domains, including process and data. They did a proof of concept with this methodology last year, and are currently running a live project using it, further refining the techniques.

Their cycle currently includes the model and execute phases of a standard BPM implementation cycle; next, they want to take on the monitor and optimize phases, and add modeling techniques to derive KPIs from functional and non-functional requirements. They also plan to look at more complex object state modeling techniques, as well as how adaptive case management fits into some of their existing concepts.

I posed a question at the end of my roundtable yesterday: if a tool existed that allowed for the definition of the process model, user interface, business rules and data model, then generated an executable system from that, would there still be a need for written requirements? Once we got past the disbelief that such tools exist (BPMS vendors – you have a job to do here), the main issue identified was one of granularity: some participants in the process modeling and requirements definition cycle just don’t need to see the level of detail that will be present in these models at an executable level. Obviously, there are still many challenges in moving seamlessly from conceptual process models to an executable process application; although some current BPMS provide a partial solution for relatively simple processes, this typically breaks down as processes (and related integrations) become more complex.

Conversation with Keith Harrison-Broninski

You may have noticed that I haven’t been blogging for the first two days of the IRM BPM conference here in London: that’s because I gave a 1/2-day seminar on the BPM technology landscape on Monday, then presented a session on collaboration and BPM yesterday morning, then moderated a roundtable on transforming process models to IT requirements yesterday afternoon. Last night, a small group of us had dinner at the lovely Institute of Directors club, where we had a fascinating conversation about all things related to BPM – off the record, of course. 🙂

This morning, we started the day with Roger Burlton, the conference organizer, interviewing Keith Harrison-Broninski about the future of work. Keith, who I first met at the BPMG conference here in London four years ago, created the theory of Human Interaction Management (HIM), with the idea that you start with the complex human relationships – strategy, goals and deliverables – and work your way out to the transactional stuff. In other words, get a handle on the collaborative human-to-human processes first, with no technology involved, then use the successes in that sort of process improvement to gain support for the greater funding and time commitments required for implementing a BPMS. When Roger said that HIM sounds a lot like project management, Keith replied that project management is a use case of HIM.

Keith comes across as a bit of an old-school technophobe: he pooh-poohs blogging, tweeting and all other social media, and (based on his involvement in my roundtable yesterday afternoon) considers BPMS implementations to take much too long and cost too much, although he appears to have little practical experience with any modern-day model-driven BPMS. Ignoring that, he does have some interesting ideas that get back to the definition of BPM that we all give lip service to, but often ignore: the management practice of improving processes, separate from the technology. This is about knowledge work, however, not routine work, where people are given goals and deliverables and work out how to achieve those based on their own knowledge. He refers to these as information-based processes, and everything that could be represented by a process model as task-based processes, where these mundane task-based processes are merely programs (in the software sense) to be implemented with much time and effort by the lowly engineers and developers. The answer to all this, of course, is his software, HumanEdj, and the workshops and services that he provides to help you implement it.

An interesting discussion, showing some of the huge gaps that exist in BPM today, especially between how we deal with knowledge work versus routine work.

Wrapping Up BPM2010

I’m off on a week’s vacation now, then to speak at the IRM BPM conference in London the week of September 27th, but I wanted to give a final few notes on the BPM 2010 conference that happened this week.

The conference was hosted by Michael zur Muehlen at Stevens Institute of Technology in Hoboken, NJ: the first time that it’s been held in North America, and the first time (to my knowledge) that it’s had an industry track in addition to the usual research track. This allowed many people to attend – academics, practitioners, vendors and analysts – who might not normally be able to attend a European conference; that has raised the awareness of the conference significantly, and should help to continue its success in the future. Michael did a great job of hosting us (although I don’t think that he slept all week), with good logistics, good food and great evening entertainment in addition to the outstanding lineup of presenters.

I attended the research track, since it gives me a glimpse of where BPM will be in five years. The industry track, as good as it was, contains material that I can see at any of the several other BPM industry conferences that I attend each year. I started out in the BPM and Social Software workshop on Monday, then attended presentations on business process design, people and process, BPM in practice and BPM in education. Collaboration continues to be a huge area of study, fueled by the entry of many collaborative BPM products into the marketplace in the past year.

A key activity for me this week (which caused me to miss the process mining sessions, unfortunately) was the first organizational meeting for the Process Knowledge body of knowledge (BoK, and yes, I know that the two K’s are a bit redundant). Based on research from the Queensland University of Technology into what’s missing from current BoKs, a small group of us are getting the ball rolling on an open source/Creative Commons BoK, with a wide variety of contributors, and freely available for anyone to repurpose the material. I published an initial call to action, and Brenda Michelson, who is chief cat-herder in this effort, added her thoughts.

BPM 2011 will be in southern France the week of August 29th, so mark your calendar.

Process Knowledge Call to Action

I spent a good part of today with Michael Rosemann and Wasana Bandara of Queensland University of Technology, Paul Harmon and Celia Wolf of BPTrends, Kathleen Barret and Kevin Brennan of IIBA, and Brenda Michelson of Elemental Links to plan a new initiative around a body of knowledge (BoK) for business process knowledge. The idea for a non-commercial, open source style BoK came from a paper written by Wasana, Michael and Paul, “Professionalizing Business Process Management: Towards a Common Body of Knowledge for BPM”, presented by Wasana in this afternoon’s research session at BPM 2010.

We’ve created a Call to Action for all interested parties, which provides a bit more detail:

Dr. Bandara’s paper, co-authored by Paul Harmon of BPTrends and Dr. Michael Rosemann of QUT, calls for the creation of a comprehensive, extensible, open source, community-driven Business Process Management Body of Knowledge (BoK). To be deemed successful, the resultant BoK must be understandable and relevant to business process management professionals, academics and industry technology and service providers.

To realize the vision of a truly open, comprehensive and accessible process knowledge base, the entire business process community – practitioners, methodologists, academics, vendors, analysts and pundits – must get involved.

In this call for action stage, we are seeking business process community members who are interested in contributing to, or supporting, the BoK creation effort.

There’s a form on that page for you to indicate your interest, with a number of categories to indicate your primary and secondary interests in being involved with a process knowledge BoK:

  • Advocate
  • BPM Practitioner / End-user
  • Community Reviewer
  • Content Contributor
  • Funding Sponsor
  • Media Sponsor

I’m involved in this because I believe that we need an open source, Creative Commons sort of BoK in BPM, created by a broad community and acting as a meta BoK (pointing to other related BoKs) as well as containing unique content. I’m particularly interested in enabling more community involvement to really open things up, not just in community contributions of content, but also in community tagging for the purposes of creating personalized views onto the BoK as well as generating a folksonomy. I have a hard time getting on board with proprietary walled gardens of any sort, and especially in the area of information that should be freely available to all types of BPM stakeholders (free as in beer), and freely reusable in a variety of contexts (free as in speech) – the idea is that the Process Knowledge BoK is free in both of those respects. And speaking of free, this is not a commercial venture for me: I’m volunteering my time as a special advisor because I think that it’s an important initiative.

In addition to just indicating your interest by filling out the form on the website, we’re looking for a small number of organizations to participate in our Catalyst program over the next three months while we prepare for the official launch: we’ll be getting the initiative set up, expanding the website into a proper collaboration space in preparation for content creation, and sorting out the methodology, process and ontology for the BoK. To be clear, by “participate”, I mean “write a check”, and in exchange for a bit of near-term seed funding, you’ll get a package of goodies including participation in press releases, ads on the BPTrends and OMG websites and newsletters, and a credit towards our ongoing sponsorship program. You also get bragging rights as a thought leader in supporting this new BoK. Ping me if you’re interested.

Research in BPM in Education

Professionalizing BPM: Towards a Common Body of Knowledge for BPM

First up was a paper from QUT and BPTrends on the status of BPM as a profession, and what is required from a body of knowledge (BoK) about process-related information to support process professionals. Although several valuable resources of BPM information exist, none of them provide the complete picture; this research evaluated existing BoKs and established the core list of essential features for developing a more comprehensive BoK for process knowledge.

Related BoKs include the American Society for Quality’s Black Belt BoK and Lean Six Sigma certification, IIBA’s BABOK, and ABPMP’s BPM CBoK. The research looked in detail at the ABPMP work because it’s most directly targeted at BPM practitioners. However, it has a number of weaknesses that need to be addressed, as indicated by a literature review of design science and conceptual modeling, as well as by BPM community input. Fundamentally, a BoK should be evaluated on completeness, extendibility, understandability, application and utility; the research found that the ABPMP CBoK lacked critical capabilities in all of these areas.

The challenge, then, is how to create something that does work as a BPM BoK. They propose an empirically-validated, open source body of knowledge, and are welcoming feedback and interest in participation at an initial Process Knowledge BoK website. There are a lot of key skills involved in developing a BoK, and the paper presents a proposed ontology for what information should be in the BoK and how it could be organized.

I’m involved with the paper authors and a number of other participants in getting a Process Knowledge BoK started up; check the blog post immediately following for more information.

Service Learning and Teaching Foundry: Building a BPM and SOA Education Community

Next up, from the University of New South Wales, is a paper on a BPM and SOA education community for an industry-relevant curriculum. This was motivated by student feedback on what started as a web application engineering course, and now covers a range of service orientation topics including business process modeling. The plan is to develop a service learning and teaching foundry as a community spanning multiple skill levels and multiple learning institutions.

The core of the foundry includes an information model of the areas of study, with resources such as use cases, sample web services, tutorials, assignments and programming exercises built on that core. It also contains a sandbox and demo environment, plus an access layer with a browser interface and an API for building the content into other applications.

The services technologies module of the learning and teaching foundry was developed as the base for a specific course offered at UNSW, including all of the materials needed for lectures, labs and assignments. This includes not just the materials to be provided to the students, but supporting elements such as event processing services that they can use in the completion of their assignments. This environment allows them to easily have lab exercises that build on previous exercises.

They’re using an open forum for collecting feedback on the course on an ongoing basis, using a “wiki/blog style” (not sure which) rather than just course evaluations at the end of the semester. You can check out the website here.

I skipped the last paper because the session was running a bit late and I wanted to get over to Keith Swenson’s fireside chat – more from there.

Research on BPM In Practice

The first research session this afternoon was on BPM in practice, looking at what people are actually doing with BPM.

How Novices Model Business Processes

I missed the first paper in this section, but arrived just in time for the presentation from Queensland University of Technology on how people who have no prior knowledge of formal process modeling techniques actually create process models.

Conveniently, they had a well-controlled group of research subjects: new students in the process modeling course. They looked at the students’ prior experience in modeling methodology, knowledge of the business domain and drawing skills, with the goal of correlating that with the diagram classification (i.e., what type of diagram they drew) and the semantic correctness of the diagrams. Their diagram classification ranged across five basic types of design, from purely textual descriptions to purely graphical representations; most commonly used was a classical flowchart design (type 2), with boxes annotated with text, connected by arrows, but there were also hybrid designs that added graphics to standard flowcharts, as well as storyboard styles (type 4). In some cases, they overlaid additional information, such as a flowchart of activities with a timeline plotted beneath it.

They did find some prior knowledge factors that could predict the most likely design type used by an individual: low levels of object-oriented modeling knowledge predicted a preference for storyboard-style design, whereas high levels of previous domain knowledge were more likely to result in a flowchart-style design. Design quality, that is, the semantic correctness of the model, was correlated with design type and previous domain knowledge. Personally, I’d like to see the correlation with the students’ Myers-Briggs scores.

Considering the debates going on now about the need for flowcharting, and by extension, BPMN and other modeling notations, I find it interesting that the most common design style that students use without any prior process modeling experience is the flowchart. However, they see a more graphical and free-form representation as possibly also adding value, considering that a rigid notation such as BPMN might restrict creativity in process modeling.

IT Requirements of BPM in Practice – An Empirical Study

The last paper in this section, from the University of Bern, looked at the link between low BPM maturity rates within organizations and the inability of current BPM tools to meet the key requirements of those organizations.

They conducted a survey of 130 companies across various industry sectors within the Forbes Global 2000 list, asking questions about their business processes, the related applications used for managing processes, and the process scope and duration. They also asked how (e.g., text versus graphical modeling notation) and why (e.g., compliance) they documented their processes; interestingly, more than a third of processes were documented as text, 20% as tables, and all process-specific modeling languages combined accounted for only about 30%.

The use of BPM tools is a prerequisite for reaching the highest level of BPM maturity, so the survey also contained questions on the importance of specific capabilities of a BPM tool; the most important factor was supporting (manual) task execution, with automation of tasks coming in at #4. Most companies are still using Visio and Word [and based on my experience, probably PowerPoint as well] to document their processes; the first BPM tool (ARIS) doesn’t appear until #4 on the list.

Relating this back to the BPMM maturity levels, process documentation is critical in moving from level 2 to level 3, with textual descriptions indicating a lower maturity level than graphical models; most of the companies surveyed are still at level 2, but are in a position to transition to level 3. Using some sort of BPM execution software – whether a dedicated BPM system, an ERP system or something else – is critical to reaching level 4.

Getting back to the capabilities of the BPM tools and how this impacts maturity levels, they found that both the usability of the tools and the software integration requirements were not satisfied by the current crop of BPM tools. At the time this paper was written, the details of those unsatisfied requirements had not been fully established; we can expect those in their future research.

Research in People and Processes

The afternoon session of research papers focused on people and processes.

From People to Services to UI: Distributed Orchestration of User Interfaces

The first paper, from the University of Trento and Huawei Technologies, focused on a model for distributed user interfaces, bringing together people, web services and UIs in a single tool, rather than developing process models and UIs independently. They consider UI components (effectively widgets) as objects representing process state; changes in the UI will cause underlying process services to be invoked, and changes in the underlying process/data will change the UI representation.

The goal is to bring together the needs of UI synchronization and service orchestration into a single language, even though UIs are event-based and services are invoked as part of control flows, and the two don’t typically speak the same language. To do this, they extended BPEL to create BPEL4UI, which is the standard business process execution language with UI-specific modeling constructs. This manifests as three new basic object types: pages, actors and UI components. They extended a BPEL Eclipse plug-in to include these new object types, allowing the UI to be modeled as part of the BPEL process model, and added hooks so that the BPEL engine calls a UI engine server rather than the UI directly.
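BPEL4UI itself is XML-based, but just to illustrate the idea of the three new constructs (hypothetical names, not the actual BPEL4UI syntax), the binding between pages, actors, UI components and services might be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class UIComponent:
    """A widget whose state mirrors process state; UI events invoke services."""
    name: str
    state: dict = field(default_factory=dict)
    handlers: dict = field(default_factory=dict)  # event name -> service callable

    def on(self, event, service):
        self.handlers[event] = service

    def fire(self, event, payload):
        result = self.handlers[event](payload)  # UI event triggers an orchestrated service
        self.state.update(result)               # process/data change updates the widget

@dataclass
class Page:
    """A page groups UI components and is bound to the actors who may use it."""
    name: str
    actors: list
    components: list

def approve_order(payload):          # stand-in for a web service invocation
    return {"status": "approved", "order_id": payload["order_id"]}

form = UIComponent("order-form")
form.on("submit", approve_order)
page = Page("approvals", actors=["manager"], components=[form])
form.fire("submit", {"order_id": 42})
print(form.state)                    # {'status': 'approved', 'order_id': 42}
```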

Other solutions tend to focus on just one or two of services, UI support and people; the tool that they have built, MarcoFlow, supports all three in a unified environment.

A Collaborative Approach to Maturing Process-Related Knowledge

The second paper, from SAP Research and the University of Applied Sciences Northwestern Switzerland, introduced an approach for supporting knowledge workers in sharing process-related knowledge: a hybrid between a top-down process model and a bottom-up “wisdom of the crowds” approach to capturing work practices in process models.

A predefined process model provides a context for experience sharing, allowing people to collaborate around tasks in the process by creating “task patterns” that include descriptions of shared experiences in executing that task, and the related resources such as documents or links. Others executing that same task can access the associated task pattern to see what other people did in the same situation in the past, including problem and solution scenarios. The task pattern content is stored in MediaWiki, and linked directly to the execution environment for a task (although it doesn’t go as far as one of the social software workshop participants recommended yesterday in integrating the collaborative feedback directly into the execution environment).
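A task pattern might be as simple as a record keyed to a task in the predefined model; here’s a hypothetical sketch, assuming the wiki-backed storage described above:

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    author: str
    problem: str
    solution: str

@dataclass
class TaskPattern:
    """Crowd-sourced guidance attached to one task in the predefined process
    model; the content itself would live in a wiki page keyed by task_id."""
    task_id: str
    experiences: list = field(default_factory=list)
    resources: list = field(default_factory=list)   # documents, links

    def contribute(self, author, problem, solution):
        self.experiences.append(Experience(author, problem, solution))

pattern = TaskPattern("review-contract")
pattern.contribute("alice", "missing counterparty data", "request a CRM export first")
pattern.resources.append("https://wiki.example.com/review-contract")  # hypothetical URL
```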

Currently, all of the information in the task pattern is entered manually, but in response to an audience question, he said that they are looking at adding process analysis that can include more automated process intelligence and mining data as well.

Self-adjusting Recommendations for People-driven Ad-hoc Processes

The last paper of the session, from the Vienna University of Technology and the German Research Center for Artificial Intelligence, looked at providing guidance to a user in ad hoc workflows, enabling the exploitation of best practices while still allowing individual flexibility. By flexibility, they specifically mean the ability to adapt a process flow on the fly by adding, skipping or re-ordering process steps. As in the previous presentation, they noted that processes may be based on pre-defined process models or crowdsourced best practices, but rarely both. In some cases, personalized recommendations can be beneficial for a specific user at a relatively high skill level, representing their personal best practices and a high degree of flexibility that may not be suited to the general population of users. Crowd-based recommendations, however, may be more useful for less skilled users doing more general work, who need to follow overall best practices in order to ensure process quality and efficiency.

Comparing the actual actions of a user relative to the recommendation establishes and reinforces their classification as either Eagle (expert user with personalized recommendations and work methods) or Flock (follows crowd-based recommendations for best practices); an Eagle will be reset to the middle of the classification if there is a repeatedly high error rate, and from there will migrate either back to Eagle or to Flock, depending on their behavior. A Flock user may migrate to Eagle over time as they become more skilled, or may stay as Flock based on their performance.
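Here’s a minimal sketch of how such a self-adjusting classification might work (my own assumed thresholds and scoring, not the authors’ exact algorithm): each user carries a score between Flock (0.0) and Eagle (1.0) that rises with successful actions and falls with errors, and a repeatedly erring Eagle gets reset to the middle of the scale:

```python
EAGLE_THRESHOLD = 0.75    # assumed values, not taken from the paper
FLOCK_THRESHOLD = 0.25
ERROR_RESET_LIMIT = 3     # repeated errors reset an Eagle to the middle

def update(score, action_succeeded, consecutive_errors, step=0.05):
    """Return updated (score, consecutive_errors) after one observed action."""
    if action_succeeded:
        return min(score + step, 1.0), 0
    errors = consecutive_errors + 1
    if score >= EAGLE_THRESHOLD and errors >= ERROR_RESET_LIMIT:
        return 0.5, 0     # reset a repeatedly erring Eagle to mid-scale
    return max(score - step, 0.0), errors

def classify(score):
    if score >= EAGLE_THRESHOLD:
        return "Eagle"    # gets personalized recommendations
    if score <= FLOCK_THRESHOLD:
        return "Flock"    # gets crowd-based recommendations
    return "migrating"    # will drift toward Eagle or Flock over time
```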

The prototype was not built on a standard BPM system, but as an overlay on email exchanges, intercepting and analyzing message traffic to detect process steps. An initial simple sequential process is modeled – and later refined by the users’ actions – and when the system recognizes that these steps are occurring in the email traffic, it pops up the process recommendations and tracks the user’s actual actions.