Deciding on process modeling tools #GartnerBPM

Bill Rosser presented a decision framework for identifying when to use BPA (business process analysis), EA (enterprise architecture) and BPM tools for process modeling: all of them can model processes, but which should be used when?

It’s first necessary to understand why you’re modeling your processes, and the requirements for the model: these could relate to quality, project validation, process implementation, a larger enterprise architecture modeling effort, or many other reasons. In the land of BPM, we tend to focus on modeling for process implementation because of the heavy emphasis on model-driven development in a BPMS, and hence model within the BPMS, but many organizations have other process modeling needs that are not directly related to execution in a BPMS. Much of this goes back to EA modeling, where several levels of process modeling occur in order to fulfill a number of different requirements: they’re all typically in one column of the EA framework (column 2 in Zachman, hence the name of this blog), but stretch across multiple rows of the framework, such as conceptual, logical and implementation.

Different types and levels of process models are used for different purposes, and different tools may be used to create those models. He showed a very high-level business anchor model that shows business context, a conceptual process topology model, a logical process model showing tasks within swimlanes, and a process implementation model that looked very similar to the conceptual model but included more implementation details.

As I’ve said before, introspection breeds change, and Rosser pointed out that the act of process modeling reaps large benefits in process improvement since the process managers and participants can now see and understand the entire process (probably for the first time), and identify problem areas. This premise is what’s behind many process modeling initiatives within organizations: they don’t plan to build executable processes in a BPMS, but model their processes in order to understand and improve the manual processes.

Process modeling tools can come in a number of different guises: BPA tools, which are about process analysis; EA tools, which are about processes in the larger architectural context; BPM tools, which are about process execution; and process discovery tools, which are about process mining. They all model processes, but they provide very different functionality around that process model, and are used for different purposes. The key problem is that there’s a lot of overlap between BPA, EA and BPM process modeling tools, making it more difficult to pick the right kind of tool for the job. EA tools often have the widest scope of modeling and analysis capabilities, but don’t do execution and tend to be more complex to use.

He finished by matching up process modeling tools with BPM maturity levels:

  • Level 1, acknowledging operational inefficiencies: simple process drawing tools, such as Visio
  • Level 2, process aware: BPA, EA and process discovery tools for consistent process analysis and definition of process measurement
  • Levels 3 and 4, process control and automation: BPMS and BAM/BI tools for execution, control, monitoring and analysis of processes
  • Levels 5 and 6, agile business structure: simulation and integrated value analysis tools for closed-loop connectivity of process outcomes to operational and strategic outcomes

He advocates using the simplest tools possible at first, creating some models and learning from the experience, then evaluating more advanced tools that cover more of the enterprise’s process modeling requirements. He also points out that you don’t have to wait until you’re at maturity level 3 to start using a BPMS; you just don’t have to use all the functionality up front.

The Open Group’s Service Integration Maturity Model and SOA Governance Framework

I had a chance last week for a pre-release briefing from The Open Group’s Chris Harding, Forum Director for SOA and Semantic Interoperability, on two new standards that they are releasing today: the Open Group Service Integration Maturity Model (OSIMM) and the SOA Governance Framework. These are both vendor-neutral (although several large vendors were involved in their creation), and are available for free on The Open Group’s site. In their words:

OSIMM will provide an industry recognized maturity model for advancing the continuing adoption of SOA and Cloud Computing within and across businesses. The SOA Governance Framework is a free guide for organizations to apply proven governance standards that will accelerate service-oriented architecture success rates.

OSIMM is a strategic planning tool: it is used to assess where you are in your SOA initiatives relative to a standard, vendor-neutral maturity model, and help create a roadmap for how to move on to the higher levels of maturity. At the heart of it is the OSIMM matrix, with maturity levels as columns progressing from left to right, and the different organizational dimensions being measured as rows: business view, governance and organization, methods, applications, architecture, information, and infrastructure and management.

OSIMM Matrix

Within each cell of the matrix are the indicators for that dimension and maturity level: for example, if you’re using object oriented modeling methods, that indicates that your methods are at level 2, whereas using service oriented modeling would move you up to level 4 or 5 in the methods dimension. Behind this matrix, OSIMM includes a full set of maturity indicators and attributes, plus assessment questions that organizations can use to determine where they are in terms of maturity: each dimension can be (and likely will be) at a different level of maturity.
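
Since each cell of the matrix pairs a dimension with the indicators for a maturity level, the assessment itself has a simple structure. Here’s a minimal sketch of how such a self-assessment might be represented in code; the dimension names come from the matrix described above, but the indicator-to-level mappings, function names and example answers are purely illustrative assumptions, not taken from the OSIMM standard.

```python
# Illustrative sketch only: an OSIMM-style matrix maps each dimension to
# maturity indicators; each dimension is assessed independently.

OSIMM_DIMENSIONS = [
    "Business View", "Governance & Organization", "Methods",
    "Applications", "Architecture", "Information", "Infrastructure & Management",
]

# Hypothetical indicators for the Methods dimension, keyed by the level they suggest.
METHODS_INDICATORS = {
    2: "object-oriented modeling",
    4: "service-oriented modeling",
}

def assess_dimension(observed_practices, indicators):
    """Return the highest maturity level whose indicator matches an observed practice."""
    matched = [level for level, practice in indicators.items()
               if practice in observed_practices]
    return max(matched) if matched else 1  # default to the lowest level if nothing matches

# An organization using object-oriented (but not service-oriented) modeling
# would sit at level 2 on the Methods dimension, regardless of its other dimensions.
print(assess_dimension({"object-oriented modeling"}, METHODS_INDICATORS))  # -> 2
```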

This has the potential to be an incredibly useful self-assessment tool for organizations: rather than the very product-specific measurements that you see from vendors (“Not using our product? Oh, you’re not at all advanced in your SOA efforts…”), this is independent of whatever products you’re using: it’s more about the type of products, and the methods and governance that you’re using to apply them. You’ll be able to use it to understand services and SOA, assess the maturity of your organization, and develop a roadmap to reach your goals.

The first version of the OSIMM Technical Standard will be available here for free download, although that link was still not working at the time that I wrote this. Other industry-specific standards organizations are free to use OSIMM directly, or extend it with their own dimensions and indicators as required.

The other major announcement today is about the SOA governance framework, which helps an organization to define their governance processes and methods. This is more of a practical framework for defining policies aligned between business and IT, aiding communication and capturing vendor-neutral best practices. This includes best practices around both lifecycle management and portfolio management, for both services and service-based solutions.

Governance Processes

Lifecycle and portfolio management are quite different: for example, a service lifecycle would include the idea or motivator for the service, the service definition, service creation, putting the service into operation, modifying and maintaining the service, and eventually retiring the service from operation. Service portfolio management is more concerned with reusability, and with the practice of checking the portfolio in the early stages of the service lifecycle to see whether an existing service already suits the requirements. The same applies to solution lifecycle and portfolio management; this differs from other types of solution governance since there may be service-specific issues, such as composition, to be considered.
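
To make the distinction concrete, here’s a minimal sketch of the difference in code: lifecycle management tracks a single service through the stages listed above, while portfolio management checks the existing portfolio for a reusable service before a new one is defined. The stage names follow the description above, but the classes, method names and example data are hypothetical illustrations, not part of the SOA Governance Framework itself.

```python
from enum import Enum

class LifecycleStage(Enum):
    PROPOSED = 1      # idea or motivator for the service
    DEFINED = 2       # service definition
    BUILT = 3         # service creation
    OPERATIONAL = 4   # in operation
    MAINTAINED = 5    # modification and maintenance
    RETIRED = 6       # withdrawn from operation

class ServicePortfolio:
    def __init__(self):
        self.services = {}  # name -> (lifecycle stage, set of capabilities)

    def find_reusable(self, required_capability):
        """Portfolio management: look for an existing, non-retired service first."""
        return [name for name, (stage, caps) in self.services.items()
                if required_capability in caps and stage is not LifecycleStage.RETIRED]

portfolio = ServicePortfolio()
portfolio.services["CustomerLookup"] = (LifecycleStage.OPERATIONAL, {"customer-search"})

# Early in a proposed service's lifecycle, check the portfolio before defining a new one.
matches = portfolio.find_reusable("customer-search")
print("Reuse existing service:" if matches else "Proceed to service definition", matches)
```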

This generic reference model for SOA governance is provided as a standard, to be used by companies to create (and continually monitor and update) their own specific governance model and best practices. The SOA governance framework may be used in the context of another governance framework, such as COBIT or ITIL; the SOA working group did a mapping of COBIT to this framework as part of the framework development process, and plans to do more in the future in order to help organizations preserve their investment in COBIT/ITIL training and implementation.

The SOA Governance Framework will be available here for free download.

Enterprise Architects in the cloud

A couple of weeks ago, I attended the Open Group’s Enterprise Architecture conference in Toronto (my coverage here), and ended up being invited to speak on Dana Gardner’s panel on how the cloud is pushing the enterprise architects’ role beyond IT into business process optimization.

You can now find the podcast here, subscribe to the Briefings Direct podcast series on iTunes here, and read the transcript here.

Dana Gardner’s panel on EA skills in a downturn #ogtoronto

I was on a panel here on Monday, hosted by Dana Gardner and also including John Gøtze of the Association of Enterprise Architects, and Tim Westbrock of EAdirections, where we discussed the issues of extending the scope of architecture beyond the enterprise. This was recorded and will be included in Dana’s usual podcast series within a couple of weeks; I’ll post a link to it when I see it, or you can subscribe in iTunes and catch it there.

Today, he’s moderating two more panels, and I sat in on the beginning of the one about which skills and experience differentiate an enterprise architect in a downturn, although I unfortunately had to duck out to a client meeting before the end (the disadvantage of attending a conference in my home city). This one featured James de Raeve and Len Fehskens of The Open Group, David Foote of Foote Partners, and Jason Uppal of QRS. From the conference program description:

As the architect’s role continues to evolve in scope with the increasingly global and distributed enterprise, so too do the core skills and experiences required of them. Professional certifications, such as ITAC, ITSC and TOGAF, can be career differentiators at any time, but are particularly crucial during periods of economic downturn such as we’re currently experiencing.

This panel will examine the evolving job requirements of today’s enterprise architect, discuss the value of professional certification programs and how certifications help to not only legitimize and validate the profession, but also provide much-needed demand for the skills, capabilities and experience that certified professionals have within their organizations.  The panel will also include perspectives on how certification can affect market demand and salary levels for those certified.

It’s almost impossible to capture everything said on a panel or who said what, so just a few unattributed comments:

  • A lot of organizations are in panic mode, trying to cut costs but not lose (any more) customers; IT is afraid of being blamed for costs and inefficiencies
  • There needs to be more coherence around the definition of EA so that this position doesn’t get squeezed out during budget cuts due to lack of common understanding of what EAs do (I’m thinking it’s a bit late for that)
  • Issues in governance are becoming critical; architects need to have knowledge and skills in governance in order to remain relevant
  • Architects need to have the low level technical skills, but also have to develop the higher-level strategic and collaborative skills
  • Many organizations don’t have a career development framework in place to develop an EA team
  • In many cases, you will need to perform the role of architect before anyone is willing to call you that (that seems obvious to me): it’s as much about experience as about any sort of certification
  • HR doesn’t, in general, understand what an architect does, but sees it as just a fancy title that you give to someone in IT instead of giving them a raise (much the way that we used to give them the “project manager” title 15 years ago)

I hated to leave mid-session, but I’ll catch the replay on Dana Gardner’s Briefings Direct in a couple of weeks. I’m hoping to be back for at least some of the panel later today on cloud security, and likely stick around for CloudCamp tonight.

Cloud Computing Business Scenario Workshop #ogtoronto

I’ve never attended an Open Group event before, but apparently interactive customer requirements workshops are part of what they do. We’re doing a business scenario workshop to gather requirements for cloud computing, led by Terry Blevins of MITRE, also on the board of the Open Group. The goal is to capture real business requirements, with the desired outcome to have the vendors understand and respond to customers’ needs. The context presented for this is a call to action for cloud vendors to develop and adhere to open standards, and we were tasked with considering the following questions:

  • What are the pain points and ramifications of not addressing the pain points, relative to cloud computing?
  • What are the key processes that would take advantage of cloud computing?
  • What are the desired objectives of handling/addressing the pain points?
  • Who are the human actors and their roles?
  • What are the major computer actors and their roles?
  • What are the known needs that cloud computing must fulfill to help improve the processes?

We started with brainstorming on the pain points: in the context of cloud computing, given my critical use of Google Apps and Amazon S3, I found myself contributing as an end user. My key pain point (or it was, before I solved it) was the risk of losing data in a physical disaster such as fire/flood/theft and the need for offsite backup. There were a ton of other pain points:

  • Security – one person stated that their security is better since moving to cloud applications
  • Sizing and capacity
  • Flexibility in bundling and packaging their own products for selling
  • Complex development environments
  • Pressure to reduce capital investments
  • Operating costs
  • Ineffective support
  • Functional alignment to business needs
  • Need to align IT with business
  • Cost of physical space and energy (including green concerns)
  • Cost of failure discourages innovation
  • Compliance standards
  • Difficulties in governance and management
  • Incremental personnel costs as applications are added
  • Infrastructure startup cost barrier
  • Time to get solutions to market
  • Hard to separate concerns
  • Operational risk of using old equipment
  • Resource sharing across organizations
  • No geographic flexibility/location independence
  • Training cost and time
  • Loss of control by users provisioning cloud resources on their own
  • No access to new technology
  • Dependency on a few key individuals to maintain systems
  • Being stifled by in-house IT departments
  • Need to understand the technology in order to use it
  • Do-it-yourself in-house solutions
  • Lack of integrated, well-managed infrastructure
  • Overhead of compliance requirements, particularly in multinational context
  • Long time to market
  • Disposal costs of decommissioned systems
  • Cost of help desk
  • Legal/goodwill implications of security breaches
  • Can’t leverage latest ideas

This is a rough list thrown out by audience members, but certainly lots of pain here. This was consolidated into 9 categories:

  1. Resource optimization
  2. Cost
  3. Timeliness
  4. Business continuity (arguably, this is part of risk)
  5. Risk
  6. Security
  7. Inability to innovate
  8. Compliance
  9. Quality of IT

Things then got even more participatory: we all received 9 post-it notes, giving us 9 votes for these categories in order to collaboratively set priorities on them. We could cast all of our votes for one category, vote once for each category, or anything in between; this was intended to be from our own perspective, not our customers’ or what we feel is best for enterprises in general. For me, the key issues are business continuity and security, so I cast three votes for each. Cost is also important, so I gave it two votes, and timeliness got one vote. I’ve seen this same voting technique used before, but never with so much ensuing confusion over what to do. 🙂 Blevins pointed out that it sometimes works better to hand out (fake) money, since people understand that they’re assigning value to the ideas if they’re dividing up the money between them.

The three winners were 1, 2, and 3 from the original list, which (no surprise) translate to better, cheaper and faster. The voting fell out as follows:

Category                 # of votes
Resource optimization    37
Cost                     34
Timeliness               41
Business continuity      8
Risk                     20
Security                 29
Inability to innovate    29
Compliance               17
Quality of IT            16

Great session, and some really good input gathered.

TOGAF survey results #ogtoronto

Another flashback to Monday, when Jane Varnus of Bank of Montreal and Navdeep Panaich of Capgemini presented the results of a survey about TOGAF 9. They covered a lot of stats about EAs and their organizations, a few of which I found particularly interesting:

  • Architects form 2-4% of IT staff (the fact that the question was asked this way just reinforces the IT nature of architecture)
  • Most architecture practices started within the past 4-5 years
  • 65% of architecture initiatives are charged centrally rather than to individual projects
  • More than 60% of architecture groups are sponsored by the CTO or CIO, and more than 70% report up to the CTO or CIO
  • EAs have surprisingly low levels of responsibility, authority and decision-making in both enterprise efforts and projects, but are usually involved or consulted
  • The primary driver for EA, with 44%, is business-IT alignment; better strategic planning and better IT decision-making come in next at 11% each
  • Business-IT alignment is also one of the key benefits that companies are achieving with EA; when they look at the future desired benefits, this expands to include agility, better strategic planning, and better IT decision-making
  • 32% of organizations have no KPIs for measuring EA effectiveness, and another 34% have 1-5 KPIs
  • More thought needs to be given to EA metrics: 40% of the KPIs are perception-oriented (e.g., stakeholder satisfaction), 33% are value-oriented (e.g., cost reduction) and 26% are activity-oriented (e.g., number of artifacts created)
  • EA frameworks are not yet used in a standardized fashion: 27% are using a standard architecture framework in a standardized manner (this is from a survey of Open Group members!), 44% have selected a framework but its use is ad hoc, and 27% select and use frameworks on an ad hoc basis
  • TOGAF (8 and 9 combined) is the predominant framework, used in more than 50% of organizations, with Zachman coming in second at 24%
  • Drivers for architect certification are unclear, and many organizations don’t require it

There’s a ton of additional information here; the whole presentation is here (direct PDF link), although it may be taken down after the conference.

Ndu Emuchay, IBM, on standards in cloud computing #ogtoronto

Today has a track devoted mostly to cloud computing, and we started with Ndu Emuchay of IBM discussing the cloud computing landscape and the importance of standards. IBM is pretty innovative in many areas of new technology – I’ve blogged in the past about their Enterprise 2.0 efforts, and just this morning saw an article on what they’re doing with the internet of things, where they’re integrating sensors and real-time messaging, much of which would be cloud-based by the nature of the objects to which the sensors are attached.

He started with a list of both business and IT benefits for considering the cloud:

  • Cost savings
  • Employee and service mobility
  • Responsiveness and agility in new solutions
  • Allows IT to focus on their core competencies rather than running commodity infrastructure – as the discussion later in the presentation pointed out, this could result in reduced IT staff
  • Economies of scale
  • Flexibility of hybrid infrastructure spanning public and private platforms

From a business standpoint, users only care that systems are available when they need them, do what they want, and are secure; it doesn’t really matter if the servers are in-house or not, or if they own the software that they’re running.

Clouds can range from private, which are leased or owned by an enterprise, to community and public clouds; there’s also the concept of internal and external clouds, although I’m not sure that I agree that anything internal (on-premise) can really be considered cloud. The Jericho Forum (which appears to be part of the Open Group) has published a paper describing their cloud cube model (direct PDF link).

There’s a big range of cloud-based services available now: people services (e.g., Amazon’s Mechanical Turk), business services (e.g., business process outsourcing), application services (e.g., Google Apps), platform services and infrastructure services (e.g., Amazon S3); it’s important to determine what level of services you want to include within your architecture, and the risks and benefits associated with each. This is a godsend for small enterprises like my one-person firm – I use Google Apps to host my email/calendar/contacts, synchronized to Outlook on my desktop, and use Amazon S3 for secure daily backup – but we’re starting to see larger organizations put tens of thousands of users on Google Apps to replace their Exchange servers, greatly reducing their costs without compromising functionality or security.

Emuchay presented a cloud computing taxonomy from a paper on cloud computing use cases (direct PDF link) that includes hosting, consumption and development as the three main categories of participants.

There’s a working group, organized using a Google Group, that developed this paper and taxonomy, so join in there if you feel that you can contribute to the efforts.

As he points out, many inhibitors to cloud adoption can be addressed through security, interoperability, integration and portability standards. Interoperability is the ability for loose coupling or data exchange between systems that appear as black boxes to each other; integration combines components or systems into an overall system; and portability considers the ease of moving components and data from one system to another, such as when switching cloud providers. These standards impact the five different cloud usage models: end user to cloud; enterprise to cloud to end user; enterprise to cloud to enterprise (interoperability); enterprise to cloud (integration); and enterprise to cloud (portability). He walked through the different types of standards required for each of these use cases, highlighting where there were accepted standards and some of the challenges still to be resolved. It’s clear that open standards play a critical role in cloud adoption.
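
The portability case is the easiest to illustrate in code: if application components depend only on a neutral interface rather than on a provider-specific API, moving between cloud providers becomes a matter of swapping implementations. The sketch below is a generic illustration of that idea under my own assumptions; the interface and class names are hypothetical and not drawn from any standard or vendor SDK.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Neutral storage interface that application code depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in for one provider (or an on-premise system); a second provider
    would simply be another subclass implementing the same interface."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

def run_daily_backup(report: bytes, store: ObjectStore) -> None:
    # Callers never see the provider; switching clouds means passing a different
    # ObjectStore implementation, not rewriting the application.
    store.put("daily-backup", report)

run_daily_backup(b"...", InMemoryStore())
```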

Alain Perry, Treasury Board Secretariat, on EA in Canadian government #ogtoronto

Also on Monday, we heard from Alain Perry from the CIO branch of the Treasury Board Secretariat on how enterprise architecture, supported by the use of TOGAF, is making its way into the Canadian government at all levels. The EA community of practice is supporting the use of TOGAF 9 in order to enable a common vocabulary and taxonomy for creating and using architectural artifacts, and to create reusable reference models, policy instruments and standards to support the architectural work.

Using the classic “city planning” analogy that we hear so often in EA discussions, Perry showed how they break down their levels of detail into strategic/enterprise architecture (“cityscapes”) for vision and principles required for long-term direction, program/segment architecture (“district designs”) for specific programs and portfolios to provide context for solution architectures, and solution architecture (“detailed building design and specs”) for a specific project.

They used an adaptation of TOGAF to create the common content model for each of those three levels: architecture vision, architecture requirements, business architecture, information systems architecture (including data and application architecture), technology architecture, and architecture realization.

They’ve created the Canadian Governments Reference Model (CGRM) that allows different levels of government to share standards, tools and capabilities: in Canada, that includes at least federal, provincial and municipal, plus sometimes regional, all with their own political agendas, so this is no mean feat.

Allen Brown of Open Group on their internal use of TOGAF #ogtoronto

I was taking notes in a paper notebook at the conference on Monday, and only now have had time to review them and write up a summary.

The general sessions opened with Allen Brown of the Open Group discussing their own use of TOGAF in architecting their internal systems. Since they’re making a push to have TOGAF used in more small and medium businesses, using themselves as a test case was an excellent way to make their point. This seemed to be primarily a systems architecture exercise, responding to threats such as operational risk and security: the problem they were solving was one of systems rather than of overall organizational strategy.

So far, as part of the overall project, they’ve replaced their obsolete financial system, outsourced credit card handling, moved to hosted offsite servers, added a CRM system, and are adding a CMS to reduce their dependency on a webmaster for making website updates. These are all great moves forward for them, but the interesting part was how they approached architecture: they involved everyone in the organization and documented the architecture on the intranet, so that everyone was aware of what was happening and the impacts that it would have. Since this included business priorities and constraints, the non-technical people could contribute to and validate the scenarios, use cases and processes in the business architecture.

They developed a robust set of architecture artifacts, documenting the business, applications, data and technology architectures, then identified opportunities and solutions as part of an evaluation report that fed into the eventual system implementations.

This was a great case study, since it showed how to incorporate architecture into a small organization without a lot of resources. They had no budget to hire new staff or invest in new tools, and had to deal with the reality of revenue-generating work taking precedence over the architecture activities. They weren’t able to create every possible artifact, so were forced to focus on the ones most critical to success (a technique that could be used well in larger, better-funded organizations). Yet they still experienced a great deal of success, since TOGAF forced them to think at all levels rather than just getting mired in the technical details of any of the specific system upgrades: this resulted in appropriate outsourcing and encouraged reuse. At the end of the day, Brown stated that they could not have achieved half of what they have without TOGAF.

Heather Kreger, IBM, on SOA standards

It’s impossible for me to pass up a standards discussion (how sad is that?), so I switched from the business analysis stream to the SOA stream for Heather Kreger’s discussion of SOA standards at an architectural level. OASIS, the Open Group and OMG got together to talk about some of the overlapping standards impacting this: they branded the process as “SOA harmonization” and even wrote a paper about it, Navigating the SOA Open Standards Landscape Around Architecture (direct PDF link).

As Kreger points out, there are differences between the different groups’ standards, but they’re not fatal. For example, both the Open Group and OASIS have SOA reference architectures; the Open Group one is more about implementation, but there’s nothing that’s completely contradictory about them. Similarly, there are SOA governance standards from both the Open Group and OASIS.

They created a continuum of reference architectures, from the most abstract conceptual SOA reference architectures through generic reference architectures to SOA solution architectures.

The biggest difference in the standards is that of viewpoint: the standards are written based on what the author organizations are trying to do with them, but contain a lot of common concepts. For example, the Open Group tends to focus on how you build something within your own organization, whereas OASIS looks more at cross-organization orchestration. In some cases, specifications can be complementary (not complimentary as stated in the presentation 🙂 ), as we see with SoaML being used with any of the reference architectures.

Good summary, and I’ll take time to review the paper later.