TUCON: Predictive Trade Lifecycle Management

I switched over to the architecture stream to see the session on trade lifecycle management using BusinessWorks and iProcess, jointly presented by Cognizant and TIBCO. Current-day trading systems are under a great deal of stress because of increased volume of trades, more complex cross-border trades, and greater compliance requirements.

When trades fail, for a variety of reasons, there is a risk of increased costs to both the buyer and seller, and failed trade management is a key process that bridges between systems and people, potentially in different companies and locations. If the likelihood of a trade failure could be predicted ahead of time — based on some combination of securities, counterparties and other factors — those most likely to fail can be escalated for remediation before the trade hits the value date, avoiding some percentage of failed trades.

The TIBCO Event Fabric platform can be used to do some of the complex event processing required to do this; in fact, failed trade management is an ideal candidate for CEP since the underlying reasons for failure have been studied extensively and are fairly well-understood (albeit complex). Adding BPM into the mix allows the predicted failed trade situation to be pumped into BPM for exception processing.
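If the failure predictors are that well understood, even a simple rule-based scorer captures the idea of flagging trades for escalation before the value date. A minimal sketch — the factor names and weights here are invented for illustration, not TIBCO's actual CEP rules:

```python
# Hypothetical rule-based scoring of settlement-failure risk; real factors
# and weights would come from the historical failure analysis described above.
RISK_RULES = [
    (lambda t: t["cross_border"], 0.3),                   # cross-border trades fail more often
    (lambda t: t["counterparty_fail_rate"] > 0.05, 0.4),  # counterparty with a poor track record
    (lambda t: t["security_type"] == "illiquid", 0.2),    # hard-to-settle security
]

def failure_risk(trade):
    """Sum the weights of all rules that this trade triggers."""
    return sum(weight for rule, weight in RISK_RULES if rule(trade))

def escalate(trades, threshold=0.5):
    """Pick out high-risk trades for remediation before the value date."""
    return [t for t in trades if failure_risk(t) >= threshold]
```

A real CEP engine would evaluate rules like these continuously over the event stream rather than in a batch loop, but the escalation logic is the same shape.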

The only thing that surprised me was that they’re not doing automated detection of the problem scenarios, but are relying on experienced users to identify which combinations of parameters are likely to result in failed trades.

TUCON: Using BPM to Prioritize Service Creation

Immediately after the Spotfire-BPM session, I was up to talk about using BPM to drive top-down service discovery and definition. I would have posted my slides right away, but one of the audience members pointed out that the arrows in the two diagrams should be bidirectional (I begged forgiveness on the grounds that I’m an engineer, not a graphic artist), so I fixed that up before posting to Slideshare:

My notes that I jotted down before the presentation included the following:

  • SOA should be business focused (even owned by the business): a top-down approach to service definition provides better alignment of services with business needs.
  • The key is to create business-granular services corresponding to business functions: a business abstraction of SOA. This requires business-IT collaboration.
  • Build thin applications/processes and fat services to enable agile business processes. Fat services may have multiple operations for different requirements, e.g., retrieving/updating just the customer name versus the full customer record in an underlying system.
  • Shared business semantics are key to identifying reusable business services: ensure that business analysts creating the process models are using the same terminology.
  • Seek services that have the greatest business value.
  • Use cases can be used to identify candidates for services, as can boundary crossings in activity diagrams.
  • Process decomposition can help identify reusable services, but it’s not possible to decompose and reengineer every process: look for ineffective processes with high strategic value as targets for decomposition.
  • Build the SOA roadmap based on business value.
  • SOA isn’t (just) about creating services, it’s about building business processes and applications from services.
  • Services should be loosely-coupled and location-independent.

There were some interesting questions arising from this, one being when to put service orchestration in the services layer (i.e., have one service call another) and when to put it in the process layer (i.e., have a process call the services). I see two facets to this: is this a business-level service, and do you want transparency into the service orchestration from the process level? If it’s not a business-level service, then you don’t want business analysts having to learn enough about it to use it in a process. You can still do orchestration of technical services into a business service using BPM, but do that as a subprocess, then expose the subprocess to the business analyst; or push that down to the service level. If you’re orchestrating business-level services into coarser business-level services, then the decision whether to do this at the service or process level is about transparency: do you want the service orchestration to be visible at the process level for monitoring and process tracing?
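To make the subprocess option concrete, here’s a minimal sketch of technical services being orchestrated into a single coarse-grained business-level operation; all service names are hypothetical:

```python
# Technical services: fine-grained operations on the underlying system,
# not something a business analyst should need to understand.
def _fetch_customer_record(customer_id):
    return {"id": customer_id, "name": "Acme Corp", "address": "1 Main St"}

def _write_customer_record(record):
    # A real implementation would update the system of record here.
    return record

# Business-level service: orchestrates the technical calls so that the
# process layer sees one step, with no visibility into the plumbing.
def update_customer_address(customer_id, new_address):
    record = _fetch_customer_record(customer_id)
    record["address"] = new_address
    return _write_customer_record(record)
```

Whether this composition lives in the service layer or as a BPM subprocess is exactly the transparency decision discussed above.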

This was the first time that I’ve given this presentation, but it was so easy because it came directly out of my experiences. Regardless, it’s good to have that behind me so that I can focus on the afternoon sessions.

TUCON: BPM with Spotfire Analytics

Lars Bauerle and Brendan Gibson of TIBCO showed us how Spotfire analytics are being integrated with data from iProcess to identify process improvements. I hadn’t seen Spotfire in any detail before the demo that I saw on Tuesday, and it’s a very impressive visualization and analysis tool; today, they showed iProcess process runtime data copied and pasted from Excel into Spotfire, but it’s not clear that they’ve done a real integration between the iProcess process statistics and Spotfire. Regardless, once you get the data in there, it’s very easy to do aggregations on the fly then drill into the results, comparisons of portions of the data set, and filtering by any attributes. You can also define KPIs and create dashboard-style interfaces. Authoring and heavy-duty analysis are done using an installed desktop application with (I believe) a local in-memory engine, but lightweight analysis can be done using a zero-install web client, with all analysis done on the server.

In addition to local data, it’s possible to link directly from enterprise databases into the Spotfire client, which effectively gives the Spotfire user the ability to do queries to bring data into the in-memory engine for visualization and analysis — in other words, there don’t appear to be any technical barriers to establishing a link to the statistics in an iProcess engine. They showed a model of data flowing from the iProcess server to a data mart, which would then be connected to Spotfire; realistically, you’re not going to let your analytics hit your production process engine directly, so this makes sense, although there can be latency issues with this model. It’s not clear if they provide any templates for doing this and for some standard process analytics.

They did a demo of some preconfigured analytics pages with process data, such as cases in progress and missed SLAs, showing what this could look like for a business manager or knowledge worker. Gibson did refer to "when you refresh the data from the database" which indicates that this is not real-time data, although it could be reasonably low latency depending on the link between iProcess and the data mart, and client refresh frequency.

Then, the demo gods reared their heads and Spotfire froze, and hosed IE with it. Obviously, someone forgot to do the animal sacrifice this morning…

They went to questions while rebooting, and we found out that it’s not possible to stream data in real-time to Spotfire (as I suspected from the earlier comments); it needs to load data from a data source into its own in-memory engine on a periodic basis. In other words, you’re not going to use this as a real-time monitoring dashboard, but as an advanced visualization and analytics tool.

Since this uses an in-memory engine for analytics, there are limitations based on the physical memory of the machine doing the processing, but Spotfire does some smart things in terms of caching to disk, and swapping levels of aggregation in and out as required. However, at some point you’re going to have to consider loading a subset of your process history data via a database view.

There was a question about data security, for example, if a person should only be able to drill down on their own region’s data; this is done in Spotfire by setting permissions on the queries underlying the analysis, including row-level security.
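The same two ideas — loading a manageable subset into memory, and restricting rows to the user’s own region — can be sketched in plain Python (the column names and data are made up; Spotfire itself does this through permissions on the underlying queries rather than application code):

```python
# Hypothetical process-history rows; in practice these would come from a
# data mart view sitting between the iProcess engine and the analytics tool.
history = [
    {"case_id": 1, "region": "UK", "sla_missed": True,  "duration_hrs": 48},
    {"case_id": 2, "region": "UK", "sla_missed": False, "duration_hrs": 12},
    {"case_id": 3, "region": "DE", "sla_missed": True,  "duration_hrs": 72},
    {"case_id": 4, "region": "DE", "sla_missed": True,  "duration_hrs": 30},
]

def load_for_user(rows, user_region):
    """Row-level security: filter to the user's region before any analysis."""
    return [r for r in rows if r["region"] == user_region]

uk_rows = load_for_user(history, "UK")
missed_rate = sum(r["sla_missed"] for r in uk_rows) / len(uk_rows)
```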

iProcess Analytics is being positioned as being for preconfigured reporting on your process data, whereas Spotfire is positioned for ad hoc analysis and integration with other data sets.

Spotfire could add huge value to iProcess data, but it appears that they don’t quite have the whole story put together yet; I’m looking forward to seeing how this progresses, some real world case studies when customers start to use it, and the reality of what you need to do to preprocess the ocean of process data before loading it into Spotfire for analysis.

TUCON: Centralized BPM Platform at HBOS

The last session of the day was a bit of a tough choice: I was thinking about heading over to see the session on in-process analytics through the integration of Spotfire and BusinessEvents, but decided in favor of hearing Richard Frost of HBOS (a UK-based financial services organization) discuss their centralized BPM platform and center of excellence strategy. Since they were created from a merger of Halifax and Bank of Scotland, and are made up of a number of brands, there’s quite a bit of vertical IT within the individual organizations. They’ve been moving some of this into shared services (what they call Group IT), including a business process layer based on TIBCO’s iProcess.

They had some significant drivers for BPM: allowing for growth while containing costs, and codifying processes and knowledge to reduce the impact of employee turnover. They had a variety of process types to manage as well, from straight-through processing with integration to their existing systems to high-touch human-centric and collaborative processes, so they needed a product that could handle both well. They deployed BPM in a number of stages:

  • Digitizing, with human workflow and case management based on scanned documents
  • Automation
  • Optimization, through automation and separation of process logic from operational systems

As they roll this out, the benefits from automation have been most apparent and are used in future business cases, and implementation costs are expected to decrease through reusability.

Instead of each division deploying their own BPM, they are moving to a centralized platform for a number of reasons:

  • Shared processes, such as complaints handling
  • Shared platform for cost savings
  • Shared resources
  • Best practices and governance
  • Architecture simplification

On this common software and hardware platform, each division has their own unique services, processes, rules and parameters; they’re now building a common services layer that will be reusable across divisions as well as consolidating onto the same physical hardware and software platform. They’ve had to determine ownership of each layer — which is owned by the divisions, shared services application development, and shared services technology — as well as governance of these layers by a business-led user group, an IT-led process certification board and a joint business-IT change approvals board.

They see the business opportunity for BPM as removing the IT problems from what the business has to consider by providing a common platform, allowing them to focus on business and process improvement. Frost showed a chart that mapped process types (simple, regular, complex) against solutions (manual work distribution and handling, imaging and workflow with minimal integration, BPM with application integration) in order to identify the key processes to consider for BPM: although the conventional wisdom is to go for the simple processes that can be fully automated with BPM and application integration, he also feels that there are huge benefits in looking at the complex processes that require a lot of human knowledge work. They also use this as a guideline for both simplifying processes and pushing for a greater degree of automation.

In an example of one of their insurance arrears processes, they’ve removed 60% of the human effort by automating most of the steps involved, while improving both service times and consistency.

His recommendations:

  • Understand your organizational model, recognizing where you are in your process efforts and aligning your BPM and SOA strategies
  • Don’t obsess on software selection, or the divisions will just do their own thing instead of waiting for the common platform
  • It will be hard work and will take a significant amount of time — HBOS has spent two years from their first TIBCO pilot to where they are today with a shared platform
  • Reviewing and optimizing processes is crucial so that you’re automating the right processes
  • Needs a combined effort of a business push and an IT pull

An interesting message here is that although we all want 3-month delivery cycles for BPM projects, creating a shared BPM platform across multiple divisions takes a lot longer. A roadmap that allows divisional installations of the enterprise-standard platform in the interim, to be converged on the shared platform at a later date, is essential to allow progress on BPM applications within divisions while the shared platform is being developed.

TUCON: BPM Product Update

Roger King and Justin Brunt of TIBCO gave us an update of what’s happened lately with their BPM product, and what’s coming up.

In the past year, Business Studio has added a lot of new features:

  • Support for BPMN 1.0 and XPDL 2.0
  • In-line service binding and mapping, through direct connections to BusinessWorks, web services, Java, databases, email and scripts
  • Direct deployment to the iProcess engine
  • Management of models using any source control system that supports Eclipse, or using their packaged Subversion option
  • Visual Forms Editor for creating forms directly in Business Studio using platform-independent models at design time and platform-specific models for run time: General Interface now, and other platforms to follow. Forms can be created from a step definition with a default layout based on the exposed parameters, then the forms editor can be used to add other UI widgets.
  • In-line subprocesses and a number of other modeling niceties.

The iProcess Workspace (end-user browser client) has been simplified and updated using an Outlook visual paradigm, based on General Interface. This is supported on IE 6 and 7 (no mention of Firefox). It’s also possible to use GI Builder to create your own BPM client, since the components are provided for easy inclusion, allowing iProcess functionality to be embedded into web pages or as portlets, with no knowledge of the iProcess APIs.

The iProcess Suite has a number of other improvements, including generic data plugins and direct deployment from Business Studio, plus support for 64-bit Windows and SUSE Linux. There have also been repackaging and installation improvements. As we heard this morning, there’s also event-driven real-time worklist management, where a user can be alerted when something in a queue changes rather than having to poll it manually, as well as updated LDAP authentication.

iProcess also has a new version of its web services plugin providing improved inbound and outbound web services security (at the transport layer with SSL and digital signatures and at the message layer through signatures, encryption and tokens), plus enhanced authentication.

The big thing in my mind is that Business Studio 3.0 now contains all key iProcess Modeler features so that it’s no longer necessary to use iProcess Modeler as an intermediate step in moving processes from Business Studio to the iProcess execution engine: Business Studio is the new BPM IDE. At TUCON last year, I said that this definitely needed to happen, and I’m very happy to see that it has since it represents a significant advance into full model-driven development for TIBCO’s BPM.

Their vision for BPM going forward is that the complexity of process models can be pushed down into the infrastructure, freeing the business process modeling/design tools from the technical details that have made process modeling a technical rather than a business role over the past years. This will allow business people to do what the BPM vendors have always told us that they could do: design executable process models without having to be technical experts. King feels that the key to this is service and data virtualization, since data is BPM’s "dirty secret": synchronizing data within a business process with other systems is one of the key drivers for having a technical person create the process models instead of a business person. Virtualizing the location, ownership, form and transport of the data means that you don’t need to worry about a business analyst doing something inappropriate with data in the course of process modeling.

The idea is that BPM suites will become model-driven composite application development and deployment platforms (wait! isn’t that what they’re supposed to be already?), with more latitude for business sandboxes and mashups for prototyping and building situational applications.

They’re working on breaking off the front end of the process engine to allow the creation of a single enterprise "work cloud" that can be used for any source of information or work coming at someone: sort of like event processing, but at a higher semantic level.

In addition to all the event-driven goodies, they’re also focused on covering the entire domain of process patterns (as in the full academic set of process patterns), so that any process could be modeled and executed using TIBCO’s BPM. We’ll also see some enhanced resource and organizational modeling, plus scheduling, capability requirements, SLAs and more models corresponding to real-world work.

TUCON: Architect’s Guide to SOA and BPM

I enjoyed Paul Brown’s seminar in Toronto a few weeks back, so I attended his session today on planning and architecture for SOA and BPM: how to define the services that we need and rationalize our data architecture in the face of managing end-to-end processes that span functional silos? Although many organizations have systems within those functional silos, the lines of communication — both person-to-person and system-to-system — always cross those silos in any real business process.

A lot of new skills are required in order to adopt SOA and BPM across the enterprise, from high-level executive support to a worker-level understanding of how this changes their day-to-day work. To make all of this work, there needs to be a total architecture perspective, including business processes, people, information and systems all coalescing around a common purpose. Business needs to re-engage with IT — in many organizations, they’ve been scared away for a long time — in order to get that business-IT collaboration happening.

Brown covered some of the same ground about separating out services, processes and presentation as he did in the seminar, which I won’t repeat here; check out the link above for more details.

He went on to discuss the TIBCO BPM/SOA execution model. First, develop the execution strategy for the entire program:

  • Develop vision and program roadmap
  • Define and implement organization and governance
  • Define and implement technical infrastructure and standards

Then, move on to solutions and operations for each project:

  • Analyze process and develop project roadmap
  • Design, build and deploy business process
  • Operate the business

This last point highlights the importance of setting and measuring goals for the project; you don’t know whether your project was successful until it’s been in operation a while and some measurements have been taken.

He had some pointers for how to get started with BPM and SOA:

  • Focus on business processes first: they’re the source of business value, and the glue that binds the people and systems together.
  • Separate service access mediation (access control, security, routing, distribution) from services.
  • Acknowledge different types of processes, both unmanaged and managed/orchestrated.
  • Separate processes and presentation.
  • Embrace total architecture with a cross-functional architecture team.

He finished up with some case studies of organizations that have taken an architectural approach to rolling out SOA and BPM, and how this has made IT departments much more responsive to new business requirements. One organization found that they wanted more IT involvement in business processes in order to better align the business processes with the underlying services. For services that will be used across multiple systems, it’s critical to have an enterprise architecture group review these for reusability.

His final summary: keep the business process focus, since processes are the source of business value; BPM and SOA provide opportunities for improving business processes; and the major challenges are organizational, not technical.

TUCON: Design for People, Build for Constant Change

Connie Moore kicked off the Process Improvement track with the Forrester message "design for people, build for change" and dynamic business applications to a packed room. Check out my coverage of her keynote from the Forrester technology leadership conference last year for some background to this theme.

She discussed how methods of working are changing to put the worker at the center, with access to their information, processes, functions and other components as required: the modern information worker decides what he needs to complete any given task. In order to accommodate this, workers need dynamic applications that provide a highly-contextual dashboard/portal interface that might include client information, a calendar of events related to that client’s data, what-if tools for financial analysis, tools such as online enrolment for selling additional products to the client, and other information that’s related to what’s happening right now, not static information.

She sees BPM as going mainstream, and dragged out the hockey-stick growth predictions that all the big analysts love; I’m still seeing a lot of niche and departmental applications of BPM and think that these growth projections may only be met if the analysts continue to change the boundaries of what is considered to be BPM.

She covered several of the reasons for deploying BPM, and walked through some best practices for getting started:

  • Start with a major process that is causing pain: there will be less resistance to change, and easier support and funding. Typically, these are customer-facing, high-volume processes with lots of steps and handoffs. I’m also a big fan of this approach, since no one ever justified enterprise-wide deployment of BPM by doing a proof of concept with managing expense reports.
  • Look for quick hits, using an incremental approach and targeting 3-month release phases. I’m also completely behind this idea, and always recommend getting something simpler into production sooner, then adding on functionality and fine-tuning processes incrementally. I’ve found that BPM implementations lend themselves particularly well to Agile methodologies.
  • Design for real-world processes by doing effective process discovery: avoid interviewing the managers and reading the out-of-date procedures documentation in favor of talking with the people who really know how the process currently works and where the pain points are that need fixing. You don’t want to get too granular here, but use some process modeling tools to sketch things out and identify subprocesses and services. I’ll be expanding on this topic tomorrow in my breakout session, Using BPM to Prioritize Service Creation.
  • Link BPM and SOA. 71% of large companies surveyed by Forrester said that SOA was very important to their BPM efforts: the availability of services is what makes it possible to create and modify processes quickly and easily.
  • Keep the financials in mind. Link projects to the line of business rather than infrastructure, and don’t burden the first project with the infrastructure cost. Measure the results and ROI to use for future project justifications. For ROI calculations, she listed conservative estimates of saving 30-50% of clerical workers’ time, and 20-35% for knowledge workers, with transaction-focused processes seeing even greater benefit.
  • Develop a competency center from the start, including a cross-functional and collocated team of developers and business analysts, strong involvement from the vendor, and judicious use of systems integrators for specific, targeted parts of the project. Forrester has seen a strong correlation between the existence of a competency center and measurable benefits in BPM projects.
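Forrester’s conservative savings estimates translate into a simple back-of-the-envelope ROI calculation; the headcounts and salaries below are invented purely for illustration:

```python
def annual_savings(headcount, avg_salary, time_saved_fraction):
    """Annual labour cost recovered if BPM frees up this fraction of time."""
    return headcount * avg_salary * time_saved_fraction

# Hypothetical department, using the low end of the cited ranges:
# 30% time savings for clerical workers, 20% for knowledge workers.
clerical_savings = annual_savings(50, 40_000, 0.30)    # ~600,000 per year
knowledge_savings = annual_savings(20, 80_000, 0.20)   # ~320,000 per year
total_savings = clerical_savings + knowledge_savings   # ~920,000 per year
```

Numbers like these are what let you link the project to the line of business rather than to infrastructure cost, as recommended above.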

She recently interviewed a financial services client of TIBCO’s, and they shared a few of their lessons learned:

  • Reengineer the process first, then pick the tool
  • Set the tools aside and focus on the process
  • Be prepared for staffing challenges
  • A competency center is critical

This was really a whirlwind tour of Forrester’s view of BPM, much too much information for a 50-minute presentation but lots of good stuff in here.

Architecture & Process: Doug Reynolds

Doug Reynolds of AgilityPlus Solutions presented on critical success factors in a BPM implementation. I’ve known Doug a long time — back in 2000 when he was at Meta Software and I was at FileNet — and have had a lot of discussions with him over the years about the BPM implementations that we’ve seen, both the successes and the failures.

He talked about how BPM is similar to other performance improvement initiatives, but that there are some key differences, too. Any successful BPM project has several facets: solution, project management and change management. Breaking the solution component down further, it includes people, process and technology. He feels that process is the place to start with any solution, since people and technology will need to meet the needs of the process.

In order to talk about success factors, it’s important to look at why projects fail. With IT projects, we have a lot of failures to choose from for examination, since various analyst sources report failure levels of up to 70% in IT projects. In many cases, projects fail because of poor requirements or ill-defined scope; in BPM projects, the business process analysis drives the requirements, which in turn drive the solution, highlighting the critical nature of business process analysis.

He highlighted eight signs of a healthy BPM implementation, using the word semiotic as a mnemonic:

  • Stability. The system must be stable so that the business is able to rely on its availability.
  • Exploitation. You need to exploit the technology — put it to work and continually improve your usage of it — in order to get the benefit. Buying a system that was very successful for someone else doesn’t automatically confer their level of success onto you.
  • Management and leadership. You need executive sponsorship with vision and direction, but also have to consider the impact on middle management, who are heavily affected by changing processes in terms of how they manage their workforce.
  • Inertia. You need to actively change the way people work, or they’ll keep doing things the old way with the new system.
  • Ownership. Ownership of the solution needs to stay with the business, not transfer to IT, in order to have the focus stay on benefit rather than capability.
  • Transparency. Some aspects of work may appear to be less transparent — e.g., you can’t tell how much work there is to do by walking around and looking at the piles of paper — but the metrics gathered by a BPMS actually provide much more information than manual methods. This “Big Brother” view of individual performance can be threatening to some people, and their perceptions may need to be managed.
  • Integration. Integration with other systems can be a huge contributor to the benefits by facilitating automation and decoupling process from data and functional services. However, this can be too complex and cause project delays if too much integration is attempted at once. I completely agree with this, and usually advocate getting something simpler in production sooner, then adding in the integration bits as you go along.
  • Change management. Change management is key to bringing people, process and technology together successfully, and needs to be active throughout a project, not as a last-minute task just before deployment.

Doug encouraged interaction throughout the presentation by asking us to identify which two of these eight are the most foundational, and eventually identified his two foundational picks as exploitation and inertia: using the system the best way possible, and ensuring that change happens, are two things that he often sees missing in less-than-successful implementations, and are required before the rest of the things can occur.

Architecture & Process: Robert Pillar

The first breakout session of the day was on connecting BPM, SOA and EA for enterprise transformation, with Robert Pillar of Microsoft. He talked about how compliance is the key driver for the coalition of BPM, SOA and EA, but that the coalition starts with holistic collaboration. There are barriers to this:

  • Organizational barriers: IT organizations and silos between EA, SOA and BPM groups
  • Cultural barriers: lack of understanding the business value, lack of understanding the concepts, and old-style mentality
  • Political barriers: resistance to change
  • Collaboration barriers: resistance to meetings and collaboration

Risks and benefits must be measured.

At this point, someone in the audience spoke up and said “we understand all this, can you just skip ahead to any solutions to these issues that you have to present?” Incredibly rude, and really put the speaker on the spot, but he had a point.

He had a summary slide on why to choose SOA:

  • It offers a focus on business processes and goals: it supports a customer-centric view of the business, and allows design of solutions that keep requirement changes (agility) in mind
  • It offers an iterative and incremental approach following EA and BPM initiatives: make change happen over time, and allow employees to learn about the concept of services
  • It offers a means to reap the benefits of existing investments in technology: reuse IT resources, and focus on business problems without being entangled in the technology

He sees EA and BPM as leading us to SOA, which is a valid point: if you do EA and BPM, you’ll definitely start to do SOA. However, I see many organizations starting with SOA in the absence of either EA or BPM.