Vivek Ranadivé Opening Keynote at TUCON

We’re done with the analyst day (although I swear that my handler had me RFID-chipped, since she found me with no problem in the large auditorium at the keynote this morning 😉 ), and on to the general conference.

TIBCO skipped their user conference last year, as did many other technology companies, and there are some significant product announcements that they’ve been saving up for us. We started out with Vivek Ranadivé giving us a longer version of the address that he gave to the analysts yesterday, with TIBCO’s vision of what they can do for customers in an event-driven world. Although many of us are making fun of them for referring to this as “Enterprise 3.0”, and stating that “Enterprise 2.0” is the client-server era from the 80’s to today (which is not the generally accepted definition of Enterprise 2.0), the message is about the “Two Second Advantage”: being able to make decisions faster in order to serve customers better.

By having everything as an event on the bus, and analyzing those events with in-memory analytics, companies can take advantage of opportunities that they would otherwise miss: they get a view not just into each event, but into what those events mean in the context of their business.
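To make that concrete, here’s a toy sketch of the pattern – my own illustration, with invented names, not TIBCO code – where events come off the bus into a rolling in-memory window per customer, and a pattern across the recent events triggers an action within seconds rather than after a nightly batch run:

```python
# Minimal sketch of in-memory event correlation; all names are hypothetical.
from collections import defaultdict, deque
import time

WINDOW_SECS = 60                 # only correlate events from the last minute
recent = defaultdict(deque)      # customer id -> deque of (timestamp, type)

def on_event(customer_id, event_type, ts=None):
    """Called for every event taken off the bus."""
    ts = ts if ts is not None else time.time()
    window = recent[customer_id]
    window.append((ts, event_type))
    # Evict anything older than the window so the analysis stays memory-bounded.
    while window and ts - window[0][0] > WINDOW_SECS:
        window.popleft()
    # The "meaning in context": three browse events plus a cart event within
    # a minute suggests a live purchase opportunity worth acting on right now.
    types = [etype for _, etype in window]
    if types.count("browse") >= 3 and "add_to_cart" in types:
        return "offer_discount"
    return None

# The same event stream, processed overnight in a batch, would miss the moment.
for evt in ["browse", "browse", "browse", "add_to_cart"]:
    action = on_event("cust-42", evt)
print(action)  # -> offer_discount
```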

TIBCO Products Update

Tom Laffey, EVP of Products and Technology, gave us an update at the analyst session yesterday on their new product releases (embargoed until today), but started with an interesting timeline of their acquisitions. Unlike some companies, who make acquisitions just to remove a competitor from the market, TIBCO appears to have made some thoughtful buys over the years in order to build out a portfolio of infrastructure products. More than just being a Wall Street messaging company with Rendezvous, they have a full stack of mission-critical event processing, messaging, process management, analytics and more that puts them squarely in competition with the big players. Their competition differs across the product segments: IBM is their biggest competitor, but others include Oracle, some small players and, in some cases, even open source. They offer fully-responsive 7×24 support through a series of worldwide support centers, handling more than 40,000 support requests per year.

Unfortunately, this leaves them with more than 200 products: a massive portfolio that makes it difficult for them to explain, and even more difficult for customers to understand. A core part of the portfolio is the “connect” part that we heard about earlier: moving point-to-point integrations onto a service bus, using products such as Rendezvous, EMS, BusinessWorks, all manner of adapters, ActiveMatrix, BusinessConnect, CIM, ActiveSpaces and tibbr. The “automate” part of the platform covers all of their BPM offerings: iProcess, the newly-announced ActiveMatrix BPM, Business Studio and PeopleForms. Laffey claimed up front that iProcess is not being replaced by ActiveMatrix BPM (methinks he doth protest too much), which means that there is likely some functionality overlap. The third part, “optimize”, includes the Spotfire Suite, S+, BusinessEvents and Netrics.

He discussed their cloud strategy, which includes “internal clouds” (which, to many of us, are not really clouds) as well as external clouds such as AWS; the new Silver line of products – CAP, Grid, Integrator, Fabric, Federator and BPM – are deployable in the cloud.

The major product suites are, then:

  • ActiveMatrix (development, governance and integration)
  • ActiveMatrix BPM (BPM)
  • Spotfire (user-driven analytics and visualization)
  • BusinessEvents (CEP)
  • ActiveSpaces (in-memory technologies, datagrid, matching, transactions)
  • Silver (cloud and grid computing)

He dug back into the comparison between iProcess and ActiveMatrix BPM by considering the small number of highly-complex core business processes (such as claims processing) that are the focus for iProcess, versus the large number of tactical or situational small applications with simple workflows that are served by PeopleForms and ActiveMatrix BPM. He gave a quick demo showing this sort of simple application development being completely forms-driven: create a form using a browser-based graphical form designer, then email it to a group of people to gather responses to the questions on the form. Although he referred to this as “simple BPM” and “BPM for the masses”, it’s not clear that there was any process management at all: just an email notification and responses gathered via a web form. Obviously, I need to see a lot more about this.

TIBCO’s Recent Acquisitions: DataSynapse, Foresight, Netrics and Spotfire

No rest for the wicked: at the analyst lunch, we had sessions on four of TIBCO’s recent acquisitions while we were eating:

DataSynapse

This is a significant part of TIBCO’s cloud and grid strategy, with a stack of four key products:

  • Grid Server, which allows multiple servers to be pooled and used as a single resource
  • Fabric Server, which is the platform-as-a-service platform on top of Grid Server
  • Federator, a self-service provisioning portal
  • DataSynapse Analytics, providing metering of the grid

The real meat is in Grid Server, which has been used to create private clouds of over 40,000 connected cores; these can be either internally- or externally-facing, so are being used for customer-facing applications as well as internal ones. They position Grid Server for situations where the application and configuration complexity are just beyond the capabilities of a platform like VMware, and see three main use cases:

  • Dynamic application scalability
  • Server virtualization to improve utilization and reduce deployment times
  • Rolling out new applications quickly
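As a rough illustration of the pooled-resource idea – nothing here reflects the actual DataSynapse API, just the concept – the caller sees a single logical resource while the pool hides how many workers actually service the requests:

```python
# Conceptual sketch of "pool many servers, use them as one resource";
# a thread pool stands in for the grid, and the function for a compute task.
from concurrent.futures import ThreadPoolExecutor, as_completed

def price_portfolio(chunk):
    """Stand-in for a compute-heavy task farmed out to the grid."""
    return sum(position * 1.02 for position in chunk)

# The caller submits work to one logical resource; how many workers ("cores")
# sit behind it can change dynamically, which is the scalability use case.
with ThreadPoolExecutor(max_workers=8) as grid:
    chunks = [[100, 200], [300, 400], [500, 600]]
    futures = [grid.submit(price_portfolio, c) for c in chunks]
    total = sum(f.result() for f in as_completed(futures))
print(total)
```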

Foresight

A recent acquisition, Foresight is used for transaction modernization and cross-industry EDI, although they have some very strong healthcare solutions. They have several products:

  • Gateway/portal for managing healthcare insurance transactions between parties
  • EDISIM, for EDI authoring, testing and compliance
  • HIPAA Validator, for compliance and validation of HIPAA transactions
  • Instream, for routing, acknowledgement, management and translation of messages and events
  • Community Manager, for mass testing and migration

From cloud to EDI was a bit of a retro comparison, although there’s a lot of need for both.

Netrics

Netrics does data matching of (semi-)structured data, such as name matching in databases, in order to clean up data, reduce errors and duplicates, and improve decision-making. They have two products:

  • Matching Engine models human similarity measures for comparing data
  • Machine Learning Engine models human decisions on data

There was an interesting discussion about some of the algorithms that they’re using, which go far beyond the simple soundex-type calculations that are more commonly available.
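They didn’t disclose the algorithms themselves, but to illustrate the difference in kind: soundex collapses a name to a coarse phonetic code that either matches or doesn’t, whereas a similarity measure returns a graded score that can be ranked and tuned. A quick sketch of the graded approach, using a generic edit-based ratio as a stand-in:

```python
# Illustration only: a graded similarity score versus a hard phonetic bucket.
# This is a generic edit-based measure, not Netrics' actual algorithm.
from difflib import SequenceMatcher

def similarity(a, b):
    """Graded similarity in [0, 1], based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

pairs = [("Catherine", "Katherine"),   # one edit apart, yet different soundex codes
         ("Catherine", "Cathy"),       # a nickname that phonetic codes miss
         ("Catherine", "Jonathan")]    # genuinely different

for a, b in pairs:
    print(f"{a} vs {b}: {similarity(a, b):.2f}")

# A matcher can rank candidates by score, and a learning layer can fit the
# cut-off to human match/no-match decisions rather than hard-coding it.
```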

Spotfire

Spotfire is the oldest acquisition of the four presented here (three years ago), and was presented as much to show TIBCO’s model for acquisition and assimilation as to talk about Spotfire’s capabilities.

Spotfire, as I’ve written about previously, provides easy-to-use visual analytics, using in-memory data for near-instantaneous results. Since becoming part of TIBCO, they’ve integrated with other TIBCO products to become the visualization layer for a wide range of process and event-driven applications. Their integration with iProcess BPM was shown back in 2008, and they’ve developed links with the SOA and CEP products as well.

This acquisition shows how TIBCO’s acquisition process works with these smaller companies – different from either the Borg or death-by-1000-cuts methods of their competitors – starting with the fact that they tend to target companies that allow them to leapfrog their competition technologically by buying cool and innovative technology. Once acquired, Spotfire had access to TIBCO’s large base of customers, partners and markets, providing an immediate boost to their sales efforts. As they reorganized, the product group focused on preserving what worked at Spotfire, while optimizing for execution within the larger TIBCO context. Alongside this, the Spotfire product group worked with other TIBCO areas to integrate with other technologies, weaving Spotfire into the TIBCO portfolio.

TIBCO Go-To-Market Strategy and Regional Sales Update

Following the product update (which is embargoed until tomorrow), Ram Menon was up to talk about their go-to-market strategy. TIBCO has really been known as a powerhouse in financial services in the past, but given the meltdown in the financial markets over the past two years, they’ve probably realized that this former cash cow doesn’t always stay on its feet. However, their event-based products can go way beyond that into retail pipeline management (think RFID tags on items for sale), government and many other areas; they just need to figure out how to sell into those markets. They have a number of vertical marketing messages prepped to go, but as a couple of analyst tweets pointed out, it’s a bit of a confusing message when they don’t have the applications to back it up, and the case studies are almost identical to those of IBM’s Smarter Planet, which doesn’t give them a lot of differentiation.

They have a 40-company road show planned, as well as vertical market pushes through their SI partners. In the panel of the regional sales VPs discussing what’s actually happening out there, we saw a chart of the industry verticals where financial services is the biggest single sector, but only around 25% of the total (I think – the slide went by quickly). Discussions on the panel indicated that SOA is their biggest business in the US (basic integration middleware, really, for non-intrusive implementations rather than rip-and-replace), but is still in the early stages in Asia, where messaging is the hot topic. BPM sales in the Americas typically also include SOA infrastructure, indicating that they’re leaning heavily on the value of the stack for BPM sales rather than its standalone capabilities: not sure if that’s intentional positioning, or an artifact of the product, sales force, or both. There is a lot of interest in newer ideas such as in-memory analytics: as one of the panelists put it, the customers “just get it” when you show the value proposition of reducing response time by having information available faster. It will be interesting to see how their vertical marketing efforts line up with the existing market penetration both by industry and product.

All in all, TIBCO’s branding feels a bit behind the times: Enterprise 3.0 is becoming a joke amongst the analysts attending here today (we’re tweeting about staging an intervention), and the “ending r with no preceding vowel” of tibbr is so 2006. The new TIBCO Silver brand covers all of their grid and cloud offerings, but doesn’t really make you think “cloud” when you hear the name. Like Brenda Michelson, however, I like the “Two Second Advantage” message: it’s short, memorable, and actually means something.

TIBCO’s Enterprise 3.0 Vision

Murray Rode, TIBCO’s COO, started the TIBCO analyst day with their vision and strategy. The vision: Enterprise 3.0. Srsly. They seem to have co-opted the Enterprise 1.0/2.0 terms to mean what they want them to mean rather than the more accepted views: they define Enterprise 2.0, for example, as everything from the 80’s to 2009, including client-server. I don’t mean to sound negative, but that’s not what we mean by Enterprise 2.0 these days, and whoever came up with that idea for their branding has just made them sound completely out of touch. Their spectrum runs from Enterprise 1.0 data processing in the 60’s through the 80’s, to Enterprise 2.0 client-server, to Enterprise 3.0 predictive analytics and processing: using in-memory data grids rather than databases, and based more on events than transactions.

Putting aside the silliness of the term Enterprise 3.0, I like their “Two Second Advantage” tagline: when fast processing and analysis of events can make a competitive difference. Their infrastructure platform has three pieces:

  • Connect (SOA), fed by messaging and data grids
  • Analyze and optimize
  • Automate (BPM)

They can use the cloud as a deployment mechanism for scalability, although that’s just an option. In addition to the usual infrastructure platform, however, they’re also following the lead of many other vendors by pushing out vertical solutions.

We’re about to head into the product announcements, which are embargoed until tomorrow, so things might get quiet for a while, although I’m sure that there will be lots of conversation around the whole Enterprise 3.0 term.

Conference Season Begins

It’s been quiet for several months for conferences, but things are heating up again for the next four weeks. Here’s my upcoming schedule:

  • This week, I’m at PegaWorld in Philadelphia, including chairing a workshop on Wednesday morning on case management
  • The week of May 3rd, IBM Impact in Las Vegas
  • The week of May 10th, TIBCO’s TUCON in Las Vegas
  • The week of May 17th, SAP SAPPHIRE in Orlando

If you’re attending any of these events, be sure to look me up. I’ll be blogging from all of them. You can find these, and many other BPM-related events, at the BPM Events calendar. If you have an event to add to the calendar, just let me know.

Disclosure: each of the vendors pays my travel expenses for me to attend their user conference. They do not, however, have any editorial control over what I write while at the conference.

TUCON: Process Plans using iProcess Conductor

The last session of the day — and likely the last one of the conference for me, since I think that the engineering roundtables tomorrow morning are targeted at customers — was Enrique Goizueta of TIBCO discussing a "Lego approach" to creating business processes: dynamic BPM using the iProcess Conductor. Bruce Silver raved about the Conductor after seeing it at the solutions showcase on Tuesday, and it seems to have been a well-kept secret from those of us who watch the BPM space.

Goizueta started by discussing complex processes such as the cross-selling bundling processes seen in telecommunications and financial services, or claims management that may include both property damage and bodily injury exposures. In many cases, there are too many alternatives to realistically model all process possibilities explicitly, or the process is dynamic and specific instances may change during execution. The key is to identify reusable parts of the process and publish them as discrete processes in a process library, then mix them together at runtime as required for the specific situation. Each of these is a fully-functional, self-contained process, but the Conductor packages up a set of these at runtime and manages them as a "plan", presenting this package as a Gantt chart similar to a project plan. As with tasks in a project plan, you can set dependencies within a plan in Conductor, e.g., not starting one process until another is completed, or starting one process two weeks after another process starts. The iProcess engine still executes the processes, but Conductor is a layer above that to allow you to manage and monitor all the processes together in order to manage dependencies and identify the critical path across the set of processes.

TIBCO iProcess Conductor

This is very cool just as it is, but the Conductor also allows you to change a plan while it’s executing, adding and canceling processes on the fly.

He gave us a demo of Conductor for auto insurance claims management, where both vehicle damage and personal injury claims have been made, and these must be completed before processing of the liability claim can begin.
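To make the plan concept concrete, here’s my own sketch of the idea – not Conductor’s API – with a plan holding self-contained processes plus Gantt-style dependencies, using the claims scenario from the demo:

```python
# My own model of the "plan" idea, not the iProcess Conductor API: a plan is
# a set of self-contained processes plus Gantt-style dependencies between them.
from dataclasses import dataclass, field

@dataclass
class ProcessEntry:
    name: str
    duration_days: int
    after: list = field(default_factory=list)  # (predecessor, lag_days) pairs

class Plan:
    def __init__(self):
        self.entries = {}

    def add(self, entry):
        """Processes can be added (or cancelled) while the plan is executing."""
        self.entries[entry.name] = entry

    def start_day(self, name):
        """Earliest start: each dependency says 'start after the predecessor
        ends, plus an optional lag', like finish-to-start project plan links."""
        entry = self.entries[name]
        if not entry.after:
            return 0
        return max(self.start_day(p) + self.entries[p].duration_days + lag
                   for p, lag in entry.after)

plan = Plan()
plan.add(ProcessEntry("vehicle_damage_claim", duration_days=10))
plan.add(ProcessEntry("personal_injury_claim", duration_days=20))
plan.add(ProcessEntry("liability_claim", duration_days=5,
                      after=[("vehicle_damage_claim", 0),
                             ("personal_injury_claim", 0)]))
print(plan.start_day("liability_claim"))  # 20: gated by the longest predecessor
```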

For processes that always run together as single instances, such as a loss adjustment report followed by a vehicle repair claim, I’m not sure why you would represent these as separate processes placed in the plan end-to-end, rather than as subprocesses called by a single process. There are other parts of this where the benefit of using Conductor is more clear, however, such as the ability to dynamically add a second liability claim a week into the process.

As Bruce pointed out, this is really case management, but it’s pretty nice case management. SLAs and critical paths can now be managed across the entire plan as well as for each individual process within it, and there are lots of examples of complex processes that could benefit from this type of dynamic BPM.

Tonight we’re all off to the Exploratorium, where TIBCO is hosting a private party for us to check out the fun and interactive science exhibits. I’m flying back to Toronto tomorrow, which might give me a few hours on the flight to finish up some other blog posts that I’ve been working on, and watch for my coverage of SAPPHIRE next week from Orlando.

TUCON: BPM Health Insurance Case Study

Both Patrick Stamm (CTO) and Kevin Maloney (CIO) of Golden Rule Insurance were on hand to discuss their experiences in building a BPM infrastructure. They started out looking at BPM because of the multiple redundant systems and applications that they have, which is endemic in insurance: multiple ratings engines, multiple policy systems and multiple claims systems due to acquisitions and leapfrogging technologies. They needed to be more responsive and agile to changing business requirements, and increase end-to-end process visibility and management.

As they started looking at enterprise-wide BPM, they had a number of objectives:

  • Improving scalability
  • Improving cycle time and process quality
  • Facilitating self-service on the web
  • Harvesting rules from custom legacy systems
  • Reducing reliance on paper

This presentation focused on their new business process, from application submission through underwriting to issuance of the policy. Not surprisingly, adding BPM to underwriting was one of their significant challenges here; underwriting is often perceived as being as much of an art as a science, and I’ve seen a lot of resistance to introducing BPM into underwriting in many organizations that I’ve worked with.

They wanted to be strategic about how they implemented BPM, and established governance for the entire BPM program early on in the process. This allowed them to take a big-picture approach, and led them to change how they do development by incorporating offshore development for the first time. The architecture of the TIBCO toolset allows them to get a lot of reusability across the different business silos (which still stay separate above the common platform), and the scalability helped them with both business continuity and business growth.

They have a 5-layer logical architecture:

  • UI layer, including General Interface, VB and other UI platforms
  • Services layer, strangely shown above the BPM layer, although it is called directly from the UI layer in some cases as well as from the BPM layer
  • BPM layer, which seems to actually show their queues rather than their business processes, which makes me wonder what the processes actually look like beyond a simple one-step queue
  • EAI layer, including all the adapters
  • Data access layer
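As a quick sketch of how a request flows through those layers – the function names are mine, not Golden Rule’s actual design – note that the services layer is callable from both the BPM layer and, in some cases, directly from the UI, which explains its odd position in the diagram:

```python
# Rough sketch of the five-layer call structure; all names are invented.

def data_access(query):                      # data access layer
    return {"policy": "P-123", "state": "IN"}

def mainframe_adapter(txn):                  # EAI layer: adapters to back ends
    return data_access(txn)

def get_policy_service(policy_id):           # services layer
    return mainframe_adapter({"op": "read", "id": policy_id})

def underwriting_step(work_item):            # BPM layer: one step in a queue
    policy = get_policy_service(work_item["policy_id"])
    return {"work_item": work_item, "policy": policy}

def ui_screen(policy_id):                    # UI layer (General Interface, VB)
    # Some screens call the services layer directly, bypassing BPM entirely,
    # which is why services sit beside rather than strictly below BPM.
    return get_policy_service(policy_id)

print(underwriting_step({"policy_id": "P-123"}))
print(ui_screen("P-123"))
```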

Some of the highlights of their New Business process in BPM:

  • Mainframe integration to eliminate redundant data entry, triggering multiple mainframe transactions from a single BPM interface
  • Integration of business rules to eliminate errors from incorrect riders, saving underwriters the time spent researching which riders are applicable in which state (see the sketch after this list)
  • Integration with third parties, such as MIB (Medical Information Bureau) to automatically retrieve data from these sources rather than having users look it up manually on those parties’ web pages
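The rider validation is essentially a decision table keyed by state; here’s a toy version with invented riders and rules, just to show the shape of the check that replaces the underwriter’s manual research:

```python
# Illustration only: a toy decision table for the rider check; the riders,
# states and availability rules are invented, not Golden Rule's actual data.
ALLOWED_RIDERS = {
    "IN": {"maternity", "dental", "accident"},
    "OH": {"dental", "accident"},
}

def invalid_riders(state, requested_riders):
    """Return riders not available in the given state, so the error is caught
    at data entry instead of by an underwriter researching it later."""
    return set(requested_riders) - ALLOWED_RIDERS.get(state, set())

print(invalid_riders("OH", ["dental", "maternity"]))  # -> {'maternity'}
```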

The results that they’ve seen in less than a year since deployment:

  • New business volume is up over 50% with essentially the same number of staff
  • Applications processed per FTE is up over 30%
  • Cycle time is significantly reduced, as much as 30% in some cases
  • Better quality and consistency, with several error types eliminated
  • Improved visibility into business processes through better and more timely metrics and reporting

Their lessons learned:

  • Implementation partner selection is key: they’ve been happy with TIBCO as a product partner, but they had a bit of a rocky time with their first TIBCO integration partner and started over four months later. They still did the implementation in 11 months total, so really seven months from the point of restart.
  • You need to develop internal expertise in the tool and technology.
  • The first project should not be mission critical, and there must be a contingency plan. Funny, they didn’t consider New Business to be mission critical, but in reality, reverting to paper is an easy fallback in that case.
  • Don’t underestimate the impact that BPM will have on operational management and work culture.

This sounds like a fairly standard insurance implementation (I’ve done a few of these), but I like how they’re moving into the use of rules, and see the introduction of rules as having a significant impact on their process efficiency and cycle time.

TUCON: Predictive Trade Lifecycle Management

I switched over to the architecture stream to see the session on trade lifecycle management using BusinessWorks and iProcess, jointly presented by Cognizant and TIBCO. Current-day trading systems are under a great deal of stress because of increased volume of trades, more complex cross-border trades, and greater compliance requirements.

When trades fail, for a variety of reasons, there is a risk of increased costs to both the buyer and seller, and failed trade management is a key process that bridges between systems and people, potentially in different companies and locations. If the likelihood of a trade failure could be predicted ahead of time — based on some combination of securities, counterparties and other factors — those most likely to fail can be escalated for remediation before the trade hits the value date, avoiding some percentage of failed trades.

The TIBCO Event Fabric platform can be used for the complex event processing required here; in fact, failed trade management is an ideal candidate for CEP, since the underlying reasons for failure have been studied extensively and are fairly well understood (albeit complex). Adding BPM into the mix allows a predicted failed trade situation to be pumped into BPM for exception processing.

The only thing that surprised me is that they’re not doing automated detection of the problem scenarios, but relying on experienced users to identify which combinations of parameters are likely to result in failed trades.
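The overall pattern is straightforward to sketch: score each trade against the user-identified risk factors, and open a remediation case in BPM when the score crosses a threshold. The factors, weights and threshold below are invented for illustration – in the actual solution, the correlation runs in TIBCO’s CEP engine and experienced users supply the parameter combinations:

```python
# Sketch of the predict-then-escalate pattern; the factors, weights and
# threshold are invented, and would come from experienced users in practice.
RISK_WEIGHTS = {
    "cross_border": 0.3,
    "counterparty_fail_history": 0.4,
    "illiquid_security": 0.2,
    "manual_settlement": 0.1,
}
ESCALATE_THRESHOLD = 0.5

def failure_score(trade):
    return sum(w for factor, w in RISK_WEIGHTS.items() if trade.get(factor))

def on_trade(trade):
    """Escalate likely failures before the value date, not after the fail."""
    if failure_score(trade) >= ESCALATE_THRESHOLD:
        return "open_remediation_case"   # hand off to a BPM exception process
    return "straight_through"

trade = {"id": "T-1", "cross_border": True, "counterparty_fail_history": True}
print(on_trade(trade))  # -> open_remediation_case (score 0.7)
```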

TUCON: Using BPM to Prioritize Service Creation

Immediately after the Spotfire-BPM session, I was up to talk about using BPM to drive top-down service discovery and definition. I would have posted my slides right away, but one of the audience members pointed out that the arrows in the two diagrams should be bidirectional (I begged forgiveness on the grounds that I’m an engineer, not a graphic artist), so I fixed that up before posting to Slideshare.

My notes that I jotted down before the presentation included the following:

  • SOA should be business focused (even owned by the business): a top-down approach to service definition provides better alignment of services with business needs.
  • The key is to create business-granular services corresponding to business functions: a business abstraction of SOA. This requires business-IT collaboration.
  • Build thin applications/processes and fat services to enable agile business processes. Fat services may have multiple operations for different requirements, e.g., retrieving/updating just the customer name versus the full customer record in an underlying system (see the sketch after this list).
  • Shared business semantics are key to identifying reusable business services: ensure that business analysts creating the process models are using the same terminology.
  • Seek services that have the greatest business value.
  • Use cases can be used to identify candidates for services, as can boundary crossings in activity diagrams.
  • Process decomposition can help identify reusable services, but it’s not possible to decompose and reengineer every process: look for ineffective processes with high strategic value as targets for decomposition.
  • Build the SOA roadmap based on business value.
  • SOA isn’t (just) about creating services, it’s about building business processes and applications from services.
  • Services should be loosely-coupled and location-independent.
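To illustrate the thin-process/fat-service point from the list above (the names here are hypothetical, not from any TIBCO product): one business-granular service exposes several operations sized to what each calling process actually needs, so the processes stay thin:

```python
# Sketch of a "fat" business-granular service with operations of varying size.
class CustomerService:
    def __init__(self, backend):
        self._backend = backend        # underlying system of record

    def get_name(self, customer_id):
        # Lightweight operation for processes that only display a name.
        return self._backend[customer_id]["name"]

    def get_record(self, customer_id):
        # Full-record operation for processes that edit customer details.
        return dict(self._backend[customer_id])

    def update_name(self, customer_id, name):
        self._backend[customer_id]["name"] = name

backend = {"C-1": {"name": "A. Jones", "address": "12 Main St"}}
svc = CustomerService(backend)
print(svc.get_name("C-1"))     # a thin process needs only this
print(svc.get_record("C-1"))   # a richer process asks for the fat operation
```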

There were some interesting questions arising from this, one being when to put service orchestration in the services layer (i.e., have one service call another) and when to put it in the process layer (i.e., have a process call the services). I see two facets to this: is this a business-level service, and do you want transparency into the service orchestration from the process level? If it’s not a business-level service, then you don’t want business analysts having to learn enough about it to use it in a process. You can still do orchestration of technical services into a business service using BPM, but do that as a subprocess, then expose the subprocess to the business analyst; or push that down to the service level. If you’re orchestrating business-level services into coarser business-level services, then the decision whether to do this at the service or process level is about transparency: do you want the service orchestration to be visible at the process level for monitoring and process tracing?
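The same composition, sketched both ways with invented service names – the only real difference is where the individual calls become visible for monitoring and tracing:

```python
# Two placements for the same orchestration; all names are hypothetical.

def _bureau_lookup(customer_id):         # technical service
    return 640

def _internal_history(customer_id):      # technical service
    return {"defaults": []}

# Option 1: compose in the services layer; the process sees one opaque
# business-level call and no internal detail.
def credit_check_service(customer_id):
    score = _bureau_lookup(customer_id)
    history = _internal_history(customer_id)
    return score > 600 and not history["defaults"]

# Option 2: orchestrate at the process level (e.g., in a subprocess exposed to
# the business analyst); each call is now visible for monitoring and tracing.
def credit_check_subprocess(customer_id, trace):
    trace.append("bureau_lookup")
    score = _bureau_lookup(customer_id)
    trace.append("internal_history")
    history = _internal_history(customer_id)
    return score > 600 and not history["defaults"]

trace = []
print(credit_check_service("C-1"))            # True, with no visibility
print(credit_check_subprocess("C-1", trace))  # True
print(trace)                                  # ['bureau_lookup', 'internal_history']
```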

This was the first time that I’ve given this presentation, but it was so easy because it came directly out of my experiences. Regardless, it’s good to have that behind me so that I can focus on the afternoon sessions.