Document, document, document

Phil Wainewright posted one of the IT commandments today: Thou shalt document all thy works. This is a perfect follow-up to my post yesterday about SOA and data, although it may not appear obvious at first. The problem: application developers don’t use services when they should. In yesterday’s post, I was talking about how they squirrel away data in their own application-specific silos, but the real issue is more widespread than that. Wainewright hits the nail on the head:

Failure to document is thus one of the biggies of ZDNet’s IT Commandments, high up in the mortal sin rankings with the likes of ‘Thou shalt not kill’. For if you don’t document your work, how is anyone else supposed to reuse any of it? From your greater sin flows a multitude of others’ lesser transgressions.

In addition to yesterday’s more easily defeated arguments that developers don’t use services because the data may not be accurate or it may take too long, we add this one that’s harder to counter: developers don’t use services because they’re not properly documented. This is often blamed on developers having a “not invented here” attitude and wanting to build everything themselves, but in general I disagree.

I’ve been a developer, and I used anything available to me as long as I understood how to use it, how it worked, and its limitations. In other words, I used third-party code/services if they were properly documented, and I could determine from that documentation that they suited my needs. If they weren’t documented and I had to walk through the code (if that was even available) to figure out how it worked, then I was more likely to just rewrite it myself, on the basis that if someone couldn’t write proper documentation then maybe they couldn’t write proper code either. Later, when I ran a development team, I made the developers write the documentation first. They bitched about it, but we had a high level of code reuse.
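
To make that concrete, here’s the kind of documentation-first contract that I mean: a minimal sketch in Java, where the service, its names and its stated limitations are all invented for illustration, and where how to use it, how it works and its limitations are written down before any implementation exists.

```java
/**
 * Hypothetical service used only to illustrate documentation-first
 * development: the contract, behaviour and limitations are written
 * down before a line of implementation exists.
 *
 * Limitations: rates are refreshed once daily; intraday movements
 * are not reflected, so this is unsuitable for real-time trading.
 */
public interface CurrencyConversionService {

    /**
     * Converts an amount between two currencies using the most
     * recently published daily rate.
     *
     * @param amount the amount to convert; must be non-negative
     * @param from   ISO 4217 code of the source currency, e.g. "CAD"
     * @param to     ISO 4217 code of the target currency, e.g. "AUD"
     * @return the converted amount at the last published daily rate
     * @throws IllegalArgumentException if a currency code is unknown
     *         or the amount is negative
     */
    double convert(double amount, String from, String to);
}
```

A developer reading that knows immediately whether the service suits their needs, without ever walking through the code.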

There is no way to achieve SOA without reusable services, and there is no way to achieve reusable services without proper documentation of those services.

A bit of meat to go with the whine

Yesterday, I posted a rather whiny entry about rude customers (and Bob McIlree was kind enough to give me a comforting pat on the shoulder, virtually speaking — thanks Bob!) so today I decided to get a bit more productive. Moving from whine to wine, I finally made my first cut of a Squidoo lens about Australian Wine in Toronto (yes, I’m a geek and this is how I spent part of my Sunday). Sort of a niche topic, true, but that’s what Squidoo lenses are all about: a lens lets you quickly build a one-page portal with links to other sites, Amazon products, eBay, RSS feeds, and a number of other kinds of information. Since it’s all on the web, you can update it from anywhere, which is why I’ve moved quite a bit of information about both wine and BPM from my websites to my two Squidoo lenses.

I want to add a bit of meat to this post to offset the whine of yesterday, and coincidentally (before I saw his comment), I was reading Bob’s post on SOA and Data Issues and the need to maintain a source system of record (SSoR) for data. In particular, he discusses a conversation that was passed along to him from another organization:

A, the SOA implementer, argues that application-specific databases have no need to retain SSoR data at all since applications can invoke services at any time to receive data. He further opined that the SOA approach will eliminate application silos as his primary argument in the thread.

B, the applications development manager, is worried that he won’t get the ‘correct’ value from A’s services and that he has to retain what he receives from SSoRs to reconcile aggregations and calculated values at any point in time.

Since I’m usually working on customer projects that involve the intersection of legacy systems, operational databases, BPMS and analytical databases, I see this problem a lot. In addition to B’s argument about getting the “correct” value, I also hear the efficiency argument, which usually manifests as “we have to replicate [source data] into [target system] because it’s too slow to invoke the call to the source system at runtime”. If you have to screen-scrape data from a legacy CICS screen and reformat it at every call, I might go for the argument to replicate the mainframe data into an operational database for faster access. However, if you’re pulling data from an operational database and merging it with data from your BPMS, I’m going to find it harder to accept efficiency as a valid reason for replicating the data into the BPMS. I know, it’s easier to do it that way, but it’s just not right.

When data is replicated between systems, the notion of the SSoR, or “golden copy”, of the data is often lost; the most common problem is when the replicated data is updated and never synchronized back to the original source. This is exacerbated by synchronization applications that attempt to update the source but were written by someone who didn’t understand their responsibility in creating what is effectively a heterogeneous two-phase commit: if the update on the SSoR fails, no effective action is taken to either roll back the change to the replicated data or raise a big red flag before anyone starts making further decisions based on either of the data sources. Furthermore, what if two developers each take the same approach against the same SSoR data, replicating it to application-specific databases, updating it, then trying to synchronize the changes back to the source?
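
To show what I mean by that responsibility, here’s a minimal sketch in Java, with entirely hypothetical class and method names, of a synchronizer that treats the replica update and the SSoR update as one unit and compensates when the source rejects the change:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of the discipline described above: treat the local
// update and the SSoR update as one unit, and compensate on failure.
// Everything here (SsorClient, SyncFailure) is hypothetical scaffolding.
public class SsorSynchronizer {

    /** Hypothetical remote interface to the source system of record. */
    public interface SsorClient {
        void update(String key, String value) throws Exception;
    }

    /** Raised when the replica had to be rolled back. */
    public static class SyncFailure extends RuntimeException {
        public SyncFailure(String msg, Throwable cause) { super(msg, cause); }
    }

    private final Map<String, String> replica = new HashMap<>();
    private final SsorClient ssor;

    public SsorSynchronizer(SsorClient ssor) { this.ssor = ssor; }

    public void update(String key, String value) {
        String previous = replica.get(key);   // remember for compensation
        replica.put(key, value);              // optimistic local write
        try {
            ssor.update(key, value);          // push to the golden copy
        } catch (Exception e) {
            // Compensate: restore the replica, then raise a big red flag
            // rather than letting the two copies silently diverge.
            if (previous == null) replica.remove(key);
            else replica.put(key, previous);
            throw new SyncFailure("SSoR update failed for " + key, e);
        }
    }
}
```

Even this crude version has no concurrency control, which is exactly why two developers doing this independently against the same SSoR data gets ugly so fast.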

I’m definitely in the A camp: services eliminate (or greatly reduce) the need to replicate data between systems, and create a much cleaner and safer data environment. In the days before services ruled the earth, you could be forgiven for that little data replication transgression. In today’s SOA world, however, there are virtually no excuses to hide behind any more.

Gartner BPM summit day 3 and wrap-up

The last day at the Gartner conference was a short one for me: I skipped the vendor sessions in the morning, so only attended Daryl Plummer’s session “BPM in the Service Oriented Architecture” and the Andrew Spanyi talk at lunch before I had to leave for the airport.

Plummer’s session description started with the phrase “Is BPM in my SOA or is SOA in my BPM?” (where have I heard that before?), then asked the questions “Where do BPM and SOA cross paths? How can SOA be leveraged for the business process? How can BPM be leveraged for an SOA?” There was quite a bit of recycled material in here, or maybe I was just getting conferenced-out by that point, but he did introduce a new (to me) acronym: ISE, or integrated service environment, which is apparently the process developer’s view of composite applications, as opposed to a BPMS, which is the business view of composite applications. He made a strong point that an ISE is not just an IDE plus BPM, but is the following:

  • A development environment that enables creation, assembly, orchestration, deployment, automation and maintenance of composite applications based on services, from the perspective of a process-centric developer.
  • An environment that automates and manages developer productivity through frameworks, process flow, page flow and service invocations.
  • The development work environment for an application platform suite, used to assemble services into processes and composite applications.
  • A toolset that supports SOA principles and XML web services standards, as well as traditional component and modular code mechanisms.

First of all, it’s not clear to me why this isn’t just BPM plus some array of development tools. Second, it’s also not clear to me that a BPMS is the business view of composite applications: that’s one aspect of a BPMS, but most of them also provide a huge part of the process developer’s view as well. Is ISE a valid distinction in this ever-changing SOA environment, or just the buzzword du jour?

Spanyi’s talk at lunch was a bit lost in the hubbub of a room full of people eating and — in the case of two people at my table — carrying on completely unrelated conversations, but I did pick up a copy of his latest book, so I can presumably get the gist of it from that.

One last note to Matt, who I sat with at lunch on Monday: send me your contact info, since I want to hear more about your open source workflow project and I want to connect you with someone who is doing something similar.

Gartner BPM summit day 2: Daryl Plummer

Daryl Plummer’s Tuesday keynote, “How Do You Measure and Justify Business Agility” made a few good points about the completely over-hyped notion of business agility:

  • Agility is a legitimate management practice. If you don’t have people focussed on agility, it’s unlikely to just happen by accident.
  • Agility is as important as, or more important than, planning in order to be able to react to the unexpected. Remember “Built To Last”? That’s so last year; now it’s “Built To Change.”
  • Agility and speed are not synonymous. You can very quickly create another legacy environment (and probably already have).

My only major disagreement with what he said is that I see agility as a characteristic that can be measured at any point in an organization’s life, not an end goal to which an organization aspires. He also introduces the Agility Quotient, which is “…calculated by measuring the things that inhibit agility and examining how willing you are to overcome them”, which ultimately strikes me as a new age-y business measurement that does more to increase Gartner’s consulting revenues than their customers’ agility.

He finishes up with comments on some of the technological components and ideas critical to business agility: decoupled business components related through event passing (“beyond SOA”), mobile access, identity management, and how to bring some of these things together. Although he’s not explicit about it, he seems to indicate that business agility isn’t a problem for the business so much as it is for the IT groups that support them, which is certainly something that I’ve seen playing out in practice.

Gartner BPM summit day 1: Dale Vecchio

I attended Dale Vecchio’s session on “Using SOA from Legacy in BPMS”, which promised “ways to create web services out of existing systems and use them in BPM solutions”. An interesting tidbit: he claimed that the most common surprise during the application archaeology required for Y2K rework projects was that IT departments didn’t know what was connected to what, and that the requirement to understand legacy linguine still often leads to the decision to do nothing rather than have to understand and unravel the mess.

I also liked his definition of a business process:

  • A set of activities & tasks performed by resources (humans & machines)
  • Using a variety of information (structured & unstructured)
  • Interacting in various ways (predictable & unpredictable)
  • Guided by business policies and principles (business rules & decision criteria)

since it sums it up nicely for those who still think that an application is a business process. With that as a focus, he talked about the importance of multilevel modelling, where you model a business architectural view, a system architectural (by which I think that he meant application and data/information architectural) view and multiple technical views: not fundamentally different from what we do when using an enterprise architecture approach to BPM.

He went through a couple of different approaches for modernizing legacy systems in order to allow them to be consumed as services by BPM and other systems, lining up nicely with Janelle Hill’s earlier talk on leveraging existing IT assets in BPM. Nothing earth-shattering, but some good stuff on separating the presentation layer and replacing it with a services layer versus just wrapping the legacy app, and on different approaches to determining service granularity that still maintains the philosophy of a service as a business function.
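
To illustrate the first of those approaches, here’s a minimal sketch in Java of a service façade that bypasses the legacy presentation layer and exposes a business-function-sized operation. All of the names are hypothetical, and in reality the legacy call would go through a CICS, JDBC or adapter invocation rather than a local interface:

```java
// A minimal sketch of the "separate the presentation layer" approach:
// instead of screen-scraping the legacy UI, expose the underlying
// transaction as a business-level service.
public class CustomerLookupService {

    /** Hypothetical stand-in for the legacy system's transaction layer. */
    public interface LegacyTransactionGateway {
        String execute(String transactionId, String... args);
    }

    private final LegacyTransactionGateway legacy;

    public CustomerLookupService(LegacyTransactionGateway legacy) {
        this.legacy = legacy;
    }

    /**
     * A coarse-grained, business-function-sized operation: one call
     * per business question, rather than one call per legacy screen.
     */
    public String getCustomerProfile(String customerId) {
        // Call the legacy transaction directly, bypassing its
        // presentation layer entirely.
        return legacy.execute("CUSTPROF", customerId);
    }
}
```

The granularity is the point: the service maps to a business function, not to the screens or records of the system behind it.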

Gartner BPM summit day 1: Janelle Hill

I know, it’s already the third (and last) day of the Gartner BPM conference and I’m still blogging about day 1 — consider that a good reflection on the sessions: I don’t have lots of time to blog about them because I’m too busy listening to them. I’m comfortably ensconced in my “Nashville office”, a.k.a. the women’s restroom right beside the main conference rooms. You men may not realize it, but the women’s facilities in most large hotels include a lounge with comfy chairs, a few side tables, a box of tissues (in case a presentation brings you to tears, I suppose) and sometimes even flowers. This one also happens to have power for my laptop and a full-strength wifi connection — sometimes gender discrimination works in our favour. 🙂

One of my favourite sessions on Monday was by Janelle Hill: “Leveraging Existing IT Assets in BPM Initiatives.” For an analyst, Hill has a very practical view on BPM implementations, and her session was focussed on changing the behaviour of systems by putting a services face on some of those old legacy applications to allow them to participate in BPM. She discussed three basic strategies:

  • Surround strategy, where the legacy system is wrapped with a services layer and BPM is used to streamline the exceptions that are triggered by the legacy applications. I’ve implemented exactly this strategy many times; one example is the claims processing functionality in an insurance company, where exceptions raised in their mainframe-based auto-adjudication system are picked up by a BPM system and routed to the appropriate people (and services) for manual adjudication or data repair-and-resubmit (see the sketch after this list).
  • Extend strategy, where previously unautomated steps are automated — often paper-based steps such as data collection forms, or ad hoc human processes such as collaboration. Case management, something that I work on with all of my insurance customers as well as other financial services companies, falls into both of those categories: it’s often a loosely structured process that is tracked on paper, since older systems didn’t lend themselves well to enabling this type of functionality.
  • Leverage-what-you-have strategy, where the legacy databases are integrated directly (usually for read-only operations), or the legacy functions are called via EAI-type adapter technology. I see direct access of legacy databases quite commonly for consolidated reporting, for example.
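
As promised above, here’s a minimal sketch in Java of the surround strategy. The two interfaces are hypothetical stand-ins for a real legacy adapter and a real BPMS API, since the point is the shape of the integration rather than any particular product:

```java
import java.util.List;

// A minimal sketch of the surround strategy: the legacy system keeps
// doing the straight-through work, and only its exceptions are lifted
// into the BPMS for human handling.
public class ExceptionSurround {

    /** Hypothetical feed of exceptions from the legacy adjudicator. */
    public interface LegacyExceptionFeed {
        List<String> pollExceptions();   // e.g., rejected claim IDs
    }

    /** Hypothetical stand-in for a BPMS process-instantiation API. */
    public interface ProcessEngine {
        void startProcess(String processName, String businessKey);
    }

    private final LegacyExceptionFeed legacy;
    private final ProcessEngine bpms;

    public ExceptionSurround(LegacyExceptionFeed legacy, ProcessEngine bpms) {
        this.legacy = legacy;
        this.bpms = bpms;
    }

    /** One polling cycle: each legacy exception becomes a work item. */
    public void route() {
        for (String claimId : legacy.pollExceptions()) {
            // The BPMS routes the claim to a person (or service) for
            // manual adjudication or repair-and-resubmit.
            bpms.startProcess("ManualAdjudication", claimId);
        }
    }
}
```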

I’m not sure that the lines are so clearly drawn between each of these three strategies: the distinction between the types of functions that would constitute surround versus extend is fuzzy in a lot of areas, and the technologies used for surround and leverage-what-you-have can overlap significantly. However, it’s a useful categorization when looking at how to start eating the legacy system integration elephant.

I also like Hill’s definition of SOA — an architectural style that is modular, distributable and loosely coupled — because it (rightly) takes the focus away from specific products or technologies. She goes on to define a service, or component, as a software process that acts in response to a request, and a web service as a special case that uses specific protocols and standards. Again, this takes the focus away from web services specifically in favour of just services, something that is essential if you want to maintain flexibility in how you create and consume services both inside and outside your organization. Consider that mashups typically consume services that use lighter-weight protocols than web services, for example.
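
Hill’s distinction is easy to show in code. Here’s a minimal sketch in Java, where the service and its names are invented and JAX-WS is just one possible way to do a SOAP binding: the service itself is protocol-agnostic, and the web service is merely one exposure of it.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

/** The service itself: protocol-agnostic, modular, loosely coupled. */
interface QuoteService {
    double quote(String productCode);
}

/** One possible exposure of it as a SOAP web service via JAX-WS. */
@WebService
class QuoteServiceSoapFacade implements QuoteService {
    @WebMethod
    public double quote(String productCode) {
        // Delegate to whatever implements the business logic; a
        // lighter-weight or in-process consumer could call
        // QuoteService directly, without this SOAP wrapper.
        return 100.0;   // placeholder value for the sketch
    }
}
```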

One of the lessons to take away from this is that SOA and BPM can be done independently, but when combined, the whole is much greater than the sum of the parts. SOA provides the framework for creating services that will be consumed (assembled and orchestrated) by BPM; in turn, BPM puts a human face on SOA which makes it easier for the business to understand.

I’m off to the airport, so this conference is over for me. The blogging, however, goes on…

Gartner BPM summit day 1: Sinur and Melenovsky

The conference opened with the two key faces of Gartner’s BPM vision — Jim Sinur and Michael Melenovsky — giving a brief welcome talk that focussed on a BPM maturity model, or what they are calling BPM3. There was only one slide for their presentation (if you don’t count the cover slide) and it hasn’t been published for the conference attendees, so I’ll rely on my sketchy notes and somewhat imperfect memory to give an overview of the model:

  • Level 0: Acknowledge operational inefficiencies, with potential for the use of some business intelligence technology to measure and monitor business activities. I maintain that there is something lower than this, or maybe a redefinition of level 0 is required, wherein the organization is in complete denial about their operational inefficiencies. In CMM (the Capability Maturity Model for software development processes), for example, level 0 is equivalent to having no maturity around the processes; level 1 is the “initial” stage where an organization realizes that they’re really in a lot of trouble and need to do something about it.
  • Level 1: Process aware, using business process analysis techniques and tools to model and analyze business processes. Think Visio with some human intelligence behind it, or a more robust tool such as those from Proforma, iGrafx or IDS Scheer.
  • Level 2: Process control, the domain of the BPMS, where process models and rules can now be executed, and some optimization can be done on the processes. They admitted that this is the level on which the conference focusses, since few organizations have moved very far beyond this point. Indeed, almost every one of my customers using BPM is somewhere in this range, although many of them are (foolishly) neglecting the optimization potential that this brings.
  • Level 3: Enterprise process management, where BPM moves beyond departmental systems and becomes part of the corporate infrastructure, which typically also opens up the potential for processes that include trading partners and customers. This is a concept that I’ve been discussing extensively with my customers lately, namely, the importance of having BPM (and BRE and BI) as infrastructure components, not just embedded within departmental applications, because it’s going to be nearly impossible to realize any sort of SOA vision without these basic building blocks available.
  • Level 4: Enterprise performance management, which starts to look at the bigger picture of corporate performance management (which is what Gartner used to call this — are they changing CPM to EPM??) and how processes tie into that. I think that this is a critical step that organizations have to be considering now: CPM is a great early warning indicator for performance issues, but also provides a huge leap forward in areas such as maintaining compliance. I just don’t understand why Cognos and other vendors in this space aren’t at this conference talking about this.
  • Level 5: Competitive differentiation, where the business is sufficiently agile, due to control over its processes, that new products and services can be easily created and deployed. Personally, I believe that competitive differentiation is a measure of how well you’re doing right from level 1 on up, rather than a separate level itself: it’s an indicator, not a goal per se.

That’s it for now, I’m off to lunch. At this rate, I’ll catch up on all the sessions by sometime next week. 🙂

SOA webinars this week

Two worthwhile SOA webinars from ebizQ in the past two days. Yesterday, it was Developing an SOA Ecosystem with Steve Craggs from the Integration Consortium. Craggs, who also runs a U.K.-based consulting company, Saint Consulting, spoke for a solid 40 minutes, starting with a quick but thorough definition of SOA, moving on to the concept and components of an SOA ecosystem as it should exist in your organization, and finishing up with some tips for success in building an ecosystem that can support hundreds of services. And I loved two of his quotes near the end: “SOA is not just about technology — it’s a transformation of the business” and “SOA ecosystems ensure sustainable advantage”, very business-oriented views of SOA benefits that are often lost in the more frequent technical discussions about SOA.

He talked about how SOA has changed from a focus purely on web services (the little blue bit in his ecosystem diagram) to a focus on all the other things that need to exist in order to make SOA successful.

My only argument with his taxonomy of the ecosystem is that he puts orchestration as part of the mediation services, with BPM floating somewhere out there as a consumer of the SOA ecosystem, almost as part of the application layer. However, I consider orchestration to be part of the larger definition of BPM, and hence believe that BPM belongs as part of the SOA ecosystem itself and, in fact, includes most of the mediation services that he lists.

My favourite bit was where he referred to web services as “an answer looking for a problem”, and went on to list why just using web services over HTTP is problematic in building an SOA, and why it led to the development of the SOA ecosystem concept. He also gave some compelling reasons why the “best of breed” approach is still the best route in assembling your SOA ecosystem today, and the value of an enterprise service bus and the service registry. None of this is rocket science — in fact, his bit on the importance of correct granularity for services is really just an SOA twist on the age-old programmers’ dilemma of granularity — but it is a nicely packaged talk, delivered in a way that’s understandable to an SOA newbie. Definitely worth a listen.

Today, Frank Kenney from Gartner gave a short but impassioned talk on Policy-Driven SOA, followed by a fairly informative session from Sean Fitts, chief architect at Amberpoint, the webinar sponsor.

Under the heading “SOA does not come for free”, Kenney talked about everything that’s required to get SOA in place: infrastructure, methodology, services creation, testing, etc., but stated that it’s worth the price — an argument that I seem to be making to customers over and over again.

He feels that you need a certain type of infrastructure that allows you to govern what’s going on in SOA. It’s a different view than Craggs’s SOA ecosystem described above — kind of a slice across the lower half of Craggs’s diagram, or maybe a third dimension.

Kenney talked about services, processes and policies as assets that can be used for competitive differentiation, which really lines up with Craggs’s statement about ensuring sustainable advantage. What they’re both saying is that the old cowboy web services methods aren’t enough any more: if you really expect to reap the potential benefits of SOA, you need to start putting some discipline in place.

He finished up by stating that governance is a business issue, and that no one vendor can sell governance, but they can help you to achieve governance.

Replays are available at the links. I’d love to see these available as downloadable video for my iPod, by the way.