Vivek Ranadivé Opening Keynote at TUCON

We’re done with the analyst day (although I swear that my handler had me RFID-chipped, since she found me with no problem in the large auditorium at the keynote this morning 😉 ), and on to the general conference.

TIBCO skipped their user conference last year, as did many other technology companies, and there are some significant product announcements that they’ve been saving up for us. We started out with Vivek Ranadivé giving us a longer version of the address that he gave to the analysts yesterday, with TIBCO’s vision of what they can do for customers in an event-driven world. Although many of us are making fun of them for referring to this as “Enterprise 3.0”, and stating that “Enterprise 2.0” is the client-server era from the 80’s to today (which is not the generally accepted definition of Enterprise 2.0), the message is about the “Two Second Advantage”: being able to make decisions faster in order to serve customers better.

By having everything as an event on the bus, and analyzing it with in-memory analytics, companies can take advantage of opportunities that they would otherwise miss if they didn’t have a view into not just the event, but what those events mean in the context of their business.
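
To make the idea a bit more concrete, here's a toy sketch of my own (not TIBCO code, and the event names and rule are invented) showing how events can be correlated entirely in memory over a short sliding window so that a decision can be made while the opportunity is still live:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 2.0  # the "two second advantage": act while the events are still fresh

class InMemoryCorrelator:
    """Toy in-memory event correlator: no database, just a per-customer sliding window."""
    def __init__(self):
        self.events = defaultdict(deque)  # customer_id -> deque of (timestamp, event_type)

    def on_event(self, customer_id, event_type, ts=None):
        ts = ts if ts is not None else time.time()
        window = self.events[customer_id]
        window.append((ts, event_type))
        # Drop anything older than the window so analysis stays fast and in memory.
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        return self.evaluate(customer_id)

    def evaluate(self, customer_id):
        # Hypothetical business rule: a price check followed by a cart abandonment
        # within the window is an opportunity to offer a discount immediately.
        types = [event_type for _, event_type in self.events[customer_id]]
        if "price_check" in types and "cart_abandoned" in types:
            return "offer_discount"
        return None

# Usage: feed events off the bus as they arrive.
correlator = InMemoryCorrelator()
correlator.on_event("cust-42", "price_check")
action = correlator.on_event("cust-42", "cart_abandoned")
print(action)  # -> offer_discount
```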

TIBCO Products Update

Tom Laffey, EVP of Products and Technology, gave us an update at the analyst session yesterday on their new product releases (embargoed until today), but started with an interesting timeline of their acquisitions. Unlike some companies, who make acquisitions just to remove a competitor from the market, TIBCO appears to have made some thoughtful buys over the years in order to build out a portfolio of infrastructure products. More than just being a Wall Street messaging company with Rendezvous, they have a full stack of mission-critical event processing, messaging, process management, analytics and more that puts them squarely in competition with the big players. Their competition differs across product segments: IBM is their biggest competitor, but they also face Oracle, some smaller players and, in some cases, open source. They offer fully-responsive 7×24 support through a series of worldwide support centers, handling more than 40,000 support requests per year.

Unfortunately, this leaves them with more than 200 products: a massive portfolio that makes it difficult for them to explain, and even more difficult for customers to understand. A core part of the portfolio is the “connect” part that we heard about earlier: moving point-to-point integrations onto a service bus, using products such as Rendezvous, EMS, BusinessWorks, all manner of adapters, ActiveMatrix, BusinessConnect, CIM, ActiveSpaces and tibbr. On the “automate” part of the platform are all of their BPM offerings: iProcess, the newly-announced ActiveMatrix BPM, Business Studio and PeopleForms. Laffey claimed up front that iProcess is not being replaced by ActiveMatrix BPM (methinks he doth protest too much), which means that there is likely some functionality overlap. The third part, “optimize”, includes Spotfire Suite, S+, BusinessEvents and Netrics.

He discussed their cloud strategy, which includes “internal clouds” (which, to many of us, are not really clouds) as well as external clouds such as AWS; the new Silver line of products – CAP, Grid, Integrator, Fabric, Federator and BPM – are deployable in the cloud.

The major product suites are, then:

  • ActiveMatrix (development, governance and integration)
  • ActiveMatrix BPM (BPM)
  • Spotfire (user-driven analytics and visualization)
  • BusinessEvents (CEP)
  • ActiveSpaces (in-memory technologies, datagrid, matching, transactions)
  • Silver (cloud and grid computing)

He dug back into the comparison between iProcess and ActiveMatrix BPM by considering the small number of highly-complex core business processes (such as claims processing) that are the focus for iProcess, versus the large number of tactical or situational small applications with simple workflows that are served by PeopleForms and ActiveMatrix BPM. He gave a quick demo that showed this sort of simple application development being completely forms-driven: create a form using a browser-based graphical form designer, then email it to a group of people to gather responses to the questions on the form. Although he referred to this as “simple BPM” and “BPM for the masses”, it’s not clear that there was any process management at all: just an email notification and gathering responses via a web form. Obviously, I need to see a lot more about this.

TIBCO’s Recent Acquisitions: DataSynapse, Foresight, Netrics and Spotfire

No rest for the wicked: at the analyst lunch, we had sessions on four of TIBCO’s recent acquisitions while we were eating:

DataSynapse

This is a significant part of TIBCO’s cloud and grid strategy, with a stack of four key products:

  • Grid Server, which allows multiple servers to be pooled and used as a single resource
  • Fabric Server, which is the platform-as-a-service platform on top of Grid Server
  • Federator, a self-service provisioning portal
  • DataSynapse Analytics, providing metering of the grid

The real meat is in the Grid Server, which has been used to create private clouds of over 40,000 connected cores; these can be either internal or external-facing, so are being used for customer-facing applications as well as internal ones. They position Grid Server for situations where the application and configuration complexity are just beyond the capabilities of a platform like VMware, and see three main use cases:

  • Dynamic application scalability
  • Server virtualization to improve utilization and reduce deployment times
  • Rolling out new applications quickly

Foresight

A recent acquisition, Foresight is used for transaction modernization and cross-industry EDI, although their healthcare solutions are particularly strong. They have several products:

  • Gateway/portal for managing healthcare insurance transactions between parties
  • EDISIM, for EDI authoring, testing and compliance
  • HIPAA Validator, for compliance and validation of HIPAA transactions
  • Instream, for routing, acknowledgement, management and translation of messages and events
  • Community Manager, for mass testing and migration

Going from cloud to EDI was a bit of a retro shift, although there’s a lot of need for both.

Netrics

Netrics does data matching of (semi-)structured data, such as name matching in databases, in order to clean up data, reduce errors and repeats, and improve decision-making. They have two products:

  • Matching Engine models human similarity measures for comparing data
  • Machine Learning Engine models human decisions on data

There was an interesting discussion about some of the algorithms that they’re using, which go far beyond the simple soundex-type calculations that are more commonly available.
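
To make that contrast concrete – this is my own toy example, not Netrics’ algorithms – here’s how a graded edit-distance similarity catches near-matches that a soundex-style code can miss entirely:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]: 1.0 means identical."""
    a, b = a.lower(), b.lower()
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# A first-letter variation defeats soundex (C623 vs K623), but still scores high here:
print(similarity("Christina", "Kristina"))   # ~0.78
print(similarity("Christina", "Margaret"))   # much lower
```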

Spotfire

Spotfire is the oldest acquisition of the four presented here (three years ago), and was presented as much to illustrate TIBCO’s model for acquisition and assimilation as to talk about Spotfire’s capabilities.

Spotfire, as I’ve written about previously, provides easy-to-use visual analytics, using in-memory data for near-instantaneous results. Since becoming part of TIBCO, they’ve integrated with other TIBCO products to become the visualization layer for a wide range of process and event-driven applications. Their integration with iProcess BPM was shown back in 2008, and they’ve developed links with the SOA and CEP products as well.

This acquisition shows how TIBCO’s acquisition process works with these smaller companies – different from either the Borg or the death-by-1000-cuts methods of their competitors – starting with the fact that they specifically target companies whose cool, innovative technology lets them leapfrog the competition. Once acquired, Spotfire had access to TIBCO’s large base of customers, partners and markets, providing an immediate boost to their sales efforts. As they reorganized, the product group focused on preserving what worked at Spotfire, while optimizing for execution within the larger TIBCO context. Alongside this, the Spotfire product group worked with other TIBCO areas to integrate with other technologies, weaving Spotfire into the TIBCO portfolio.

TIBCO Go-To-Market Strategy and Regional Sales Update

Following the product update (which is embargoed until tomorrow), Ram Menon was up to talk about their go-to-market strategy. TIBCO has long been known as a powerhouse in financial services, but given the meltdown in the financial markets over the past two years, they’ve probably realized that this former cash cow doesn’t always stay on its feet. However, their event-based products can go way beyond that into retail pipeline management (think RFID tags on items for sale), government and many other areas; they just need to figure out how to sell into those markets. They have a number of vertical marketing messages prepped to go, but as a couple of analyst tweets pointed out, it’s a bit of a confusing message when they don’t have the applications to back it up, and the case studies are almost identical to those of IBM’s Smarter Planet, which doesn’t give them a lot of differentiation.

They have a 40-company road show planned, as well as vertical market pushes through their SI partners. In the panel of the regional sales VPs discussing what’s actually happening out there, we saw a chart of the industry verticals where financial services is the biggest single sector, but only around 25% of the total (I think – the slide went by quickly). Discussions on the panel indicated that SOA is their biggest business in the US (basic integration middleware, really, for non-intrusive implementations rather than rip-and-replace), but is still in the early stages in Asia, where messaging is the hot topic. BPM sales in the Americas typically also include SOA infrastructure, indicating that they’re leaning heavily on the value of the stack for BPM sales rather than its standalone capabilities: not sure if that’s intentional positioning, or an artifact of the product, sales force, or both. There is a lot of interest in newer ideas such as in-memory analytics: as one of the panelists put it, the customers “just get it” when you show the value proposition of reducing response time by having information available faster. It will be interesting to see how their vertical marketing efforts line up with the existing market penetration both by industry and product.

All in all, TIBCO’s branding feels a bit behind the times: Enterprise 3.0 is becoming a joke amongst the analysts attending here today (we’re tweeting about staging an intervention), and the “ending r with no preceding vowel” of tibbr is so 2006. The new TIBCOSilver brand covers all of their grid and cloud offerings, but doesn’t really make you think “cloud” when you hear the name. Like Brenda Michelson, however, I like the “Two Second Advantage” message: it’s short, memorable, and actually means something.

TIBCO’s Enterprise 3.0 Vision

Murray Rode, TIBCO’s COO, started the TIBCO analyst day with their vision and strategy. The vision: Enterprise 3.0. Srsly. They seem to have co-opted the Enterprise 1.0/2.0 terms to mean what they want them to mean rather than the more accepted views: they define Enterprise 2.0, for example, as everything from the 80’s to 2009, including client-server. I don’t mean to sound negative, but that’s not what we mean by Enterprise 2.0 these days, and whoever came up with that idea for their branding has just made them sound completely out of touch. Their spectrum runs from Enterprise 1.0 data processing from the 60’s to the 80’s, through Enterprise 2.0 client-server, to Enterprise 3.0 predictive analytics and processing: using in-memory data grids rather than databases, and based more on events than transactions.

Putting aside the silliness of the term Enterprise 3.0, I like their “Two Second Advantage” tagline: when fast processing and analysis of events can make a competitive difference. Their infrastructure platform has three pieces:

  • Connect (SOA), fed by messaging and data grids
  • Analyze and optimize
  • Automate (BPM)

They can use the cloud as a deployment mechanism for scalability, although that’s just an option. In addition to the usual infrastructure platform, however, they’re also following the lead of many other vendors by pushing out vertical solutions.

We’re about to head into the product announcements, which are embargoed until tomorrow, so things might get quiet for a while, although I’m sure that there will be lots of conversation around the whole Enterprise 3.0 term.

Webinar on BPM and Case Management

I’m giving a webinar on Thursday this week, sponsored by Pegasystems, where I’ll talk about BPM and adaptive case management (ACM). Emily Burns from Pega will be there to give an overview of their ACM functionality. You can register for the webinar here.

There’s been so much written on BPM and ACM lately; I’ve been reading an advance copy of Mastering The Unpredictable (a collection of ACM topics by some of the great minds in the industry), tracking the conversations on blogs, seeing the vendor buzz at all the conferences this month, and hearing what my own clients are saying about their needs to do something less structured than the usual BPMN-modeled processes. This has been a great chance for me to bring together a lot of these ideas with many of my earlier thoughts on social/collaborative BPM, which I’ve been writing about since 2006.

Ravin’ About RAVEN Cloud: Generate Process Diagrams From Plain Text

David Ruiz, who founded ViewStar (an early document imaging and workflow package from the 1980’s that I remember well, and which was eventually absorbed by Global360), is now with Ravenflow, which specializes in natural language processing for visual requirements definition.

[Image: RAVEN Cloud architecture]
Their RAVEN engine is a natural language processing engine that they are applying to a few different solutions, one of them being a language-based approach to business process and requirements definition. Think about business analysts writing requirements documents, then the gap between those requirements documents and a process model; RAVEN is intended to analyze those written requirements and provide the visualization. RAVEN Cloud is a cloud-based version of that, built on Azure and Silverlight, designed for business people who need a quick process diagram.

In short, RAVEN Cloud automatically generates a process diagram from plain English text. That’s pretty cool.

The RAVEN services include:

  • Natural language text processing
  • Basic lexicon, which is shared by everyone using a given natural language; although they only support English at this time, changing out this lexicon service would allow support for other languages.
  • Business glossary, which is optional, and specific to an organization or industry.
  • Visualization; RAVEN Cloud generates an XML file that can be pushed to a cloud visualization service such as Visio, or mapped into a standardized format such as BPMN (a toy sketch of the overall text-to-diagram pipeline follows this list).
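
To illustrate the general shape of such a pipeline – a toy of my own, nothing like the actual RAVEN engine, which does real natural language processing against a lexicon rather than pattern matching – here’s a sketch that pulls actor/action/object triples out of simple sentences and emits a minimal process XML:

```python
import re
import xml.etree.ElementTree as ET

TEXT = ("The customer submits the claim. "
        "The adjuster reviews the claim. "
        "The manager approves the payment.")

# Naive "actor verb object" extraction; a real engine uses a lexicon and proper
# NLP rather than a regex, but the output shape is similar in spirit.
STEP = re.compile(r"The (\w+) (\w+) the ([\w ]+?)\.")

root = ET.Element("process", name="claims")
prev_id = None
for i, (actor, verb, obj) in enumerate(STEP.findall(TEXT), 1):
    step_id = f"step{i}"
    ET.SubElement(root, "task", id=step_id, actor=actor, label=f"{verb} {obj}")
    if prev_id:  # connect the steps in the order they were described
        ET.SubElement(root, "flow", source=prev_id, target=step_id)
    prev_id = step_id

print(ET.tostring(root, encoding="unicode"))
```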

[Image: RAVEN Cloud demo – process map generated from text]
I did my Master’s work in pattern recognition and image analysis, and I have a huge soft spot for recognition technology. I was not disappointed by the demo. You start out either with one of the standard text examples or by entering your own text to describe the process; you can use some basic text formatting to help clarify, such as lists, indenting and fonts. Then you click the big red button, wait a few seconds, and voilà: you have a process map. Seriously.

Once you have the map, you can re-orient for horizontal or vertical swimlanes, and can move connection points from one side of an element to another, but you can’t edit the basic topology of the map since that would break the synchronization between the text and the diagram. You can view the process terms and functions used to analyze the text, and highlight the actors, functions and objects that were analyzed in the narrative in order to create the process map.

You can do all of this without even signing up for the service: you only need to sign up if you want to export the process map. Currently, only JPG images are supported for export – useful for process documentation, but not directly useful for process automation – but editable formats are planned for the full release in the fall and later.

Although some purists will believe that you shouldn’t be describing processes, but just drawing them, the reality is that many complex application development projects still involve written requirements that include text descriptions of processes, which are then drawn by the analyst in Visio (or, shudder, PowerPoint), and then redrawn in the BPMS tool by a developer. If you can’t have model-driven development, then this at least replaces the step of the business analyst drawing a process model that has to be redone in another tool (without round-tripping) anyway. For the 50% of BAs who Forrester claims can’t make the cut as process analysts, this could at least help them provide work of value on a process project.

David and I discussed what, to me, seemed to be a natural direction for this: looking at natural language processing to generate rules as well as process models, possibly based on an initiative such as RuleSpeak. I think that there’s a huge potential to take the natural language and parse out both process and rules from the description, which would be a really good starting point for ongoing automation of the process or rules independently, or both.

The public beta launched at the Web 2.0 Expo this week, with a subscription-based service to follow in the fall; by then, they will have beefed up their exporting capabilities, with Visio, BPMN 2.0, UML 2.0/SysML and Office document formats on their roadmap. You can try the beta for free now. They’re also considering the potential for companies to host the solution privately, with that organization’s own process examples instead of the standard ones in RAVEN Cloud; I think that this could (and should) be accomplished using private data on the public cloud version, although I know how touchy some companies get about hosting their own data.

BPM 2010 in North America For The First Time

The past couple of years, I’ve been attending the academic/research BPM conference – BPM 2008 in Milan, BPM 2009 in Ulm – where BPM researchers from corporate research facilities and universities present papers and findings all about BPM. This is BPM of the future, and if you’re interested in where BPM is going, you should be there, too. This year, for the first time, it’s in North America, hosted by the Stevens Institute in Hoboken, NJ, which provides an opportunity to participate for those of us on this side of the pond with little travel budget. Before you look at my coverage from previous years and cringe in horror at the descriptions of papers rife with statistical analysis, keep in mind that this year there will also be an industry track in addition to the educational paper track, showcasing some of the more practical aspects.

If you’re a BPM vendor, you should be sending along people from your architecture and development labs who are thinking about future generations of your product: they will definitely come away with valuable ideas and contacts. You might even find yourself a smart young Ph.D. candidate with research that specifically matches your interests. If you have your own research that you’d like to share, there’s still time to submit a paper for one of the pre-conference workshops.

Vendors, you should also consider sponsoring the conference: this is a prestigious place to have your name associated with BPM, and is likely to have more lasting benefits than sponsoring your standard BPM dog-and-pony show. You can find out more about sponsorship opportunities here. Tell Michael that I sent you. 🙂

Impact Keynote: Agility in an Era of Change

Today’s keynote was focused on customers and how they are improving their processes in order to become more agile, reduce costs and become more competitive in the marketplace. After a talk and intro by Carrie Lee, business news correspondent and WSJ columnist, Beth Smith and Shanker Ramamurthy of IBM hosted Richard Ward of Blue Cross Blue Shield of Michigan, Rick Goldgar of the Texas Education Agency and Justin Snoxall of Visa Europe.

The message from yesterday continued: process is king, and is at the heart of any business improvement. This isn’t just traditional structured process management, but social and contextual capabilities, ad hoc and dynamic tasks, and interactions across the business network. As they pointed out, dynamic processes don’t lead to chaos: they deliver consistent outcomes in goal-oriented knowledge work. First of all, there are usually structured portions of any process, whether that forms the overarching framework from which collaborations are launched, or whether structured subprocesses are spawned from an unstructured dynamic process. Secondly, monitoring and controls still exist, like guardrails around your dynamic process to keep it from running off the road.

The Lombardi products are getting top billing again here today, with Blueprint (now IBM BPM Blueprint, which is a bit of a mouthful) positioned as a key collaborative process discovery and modeling tool. There’s not much new in Blueprint since the Lombardi days except for a bit of branding; in other words, it remains a solid and innovative way for geographically (and temporally) separated participants to collaborate on process discovery. Blueprint has far better capabilities than other online process discovery tools, but they are going to need to address the overlap – whether real or perceived – with the free process discovery tools including IBM BlueWorks, ARISalign, InterstageBPM and others.

Smith gave a brief demo of Blueprint, which is probably a first view for many of the people in the audience based on the tweets that I’m seeing. Ramamurthy stepped in to point out that processes are part of your larger business network: that’s the beauty of tools like Blueprint, which allow people in different companies to collaborate on a hosted web application. And since Lombardi has been touting their support of BPMN 2.0 since last September, it’s no surprise that they can exchange process models between Blueprint and process execution engines – not the full advantages of a completely model-driven environment with a shared repository, but a reasonable bridge between a hosted modeling tool and an on-premise execution tool.
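
For those who haven’t looked at the interchange format, a BPMN 2.0 process definition is just XML along the lines of the fragment below – a minimal hand-rolled example of mine, not Blueprint’s actual export – and the point of the standard is that any engine reading the schema can pick up the flow:

```python
# A minimal, hand-written BPMN 2.0 process definition; the task names are invented.
bpmn = """<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             id="defs1" targetNamespace="http://example.com/claims">
  <process id="claimsReview" isExecutable="true">
    <startEvent id="start"/>
    <userTask id="review" name="Review claim"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="review"/>
    <sequenceFlow id="f2" sourceRef="review" targetRef="end"/>
  </process>
</definitions>"""

# Sanity-check that the fragment is well-formed XML, then print it.
import xml.etree.ElementTree as ET
ET.fromstring(bpmn)
print(bpmn)
```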

As you get into demanding transaction processing applications, however, Smith discussed WebSphere Process Server as their industrial-strength offering for handling high volumes of transactions. What’s unclear is where the Lombardi Edition (formerly TeamWorks) will fit as WPS builds out its human-centric capabilities, creating more of an overlap between these process execution environments. A year ago, I would have said that TeamWorks and WPS fit together with a minimum of overlap; now, there is a more significant overlap, and based on the WPS direction, there will be more in the future. IBM is no longer applying the “departmental” label to Lombardi, but I’m not sure that they really understand how to make these two process execution engines either work together with a minimum of overlap, or merge into a single system. Or maybe they’re just not telling.

It’s not just about process, however: there’s also predictive analytics and using real-time information to monitor and adjust processes, leveraging business rules and process optimization to improve them. They talked about infusing processes with points of agility through the use/integration of rules, collaboration, content and more. As great as this sounds, this isn’t just one product, or a seamlessly-integrated suite: we’re back to the issue that I discussed with Angel Diaz yesterday, where IBM’s checklist for customers to decide which BPM products they need will inevitably end up with multiple selections.

The session ended with the IBM execs and all three customers being interviewed by Carrie Lee; since she’s a skilled interviewer who has obviously done her homework, the discussion had a good flow with a reasonable degree of interaction between the panelists. The need for business-controlled rules was stressed as a way to provide more dynamic control of processes to the business; in general, a more agile approach was seen as a way to reduce implementation time and make the systems more flexible in the face of changing business needs. Ward (from BCBS) said that they had to focus on keeping BPM as a key process improvement methodology, rather than just using TeamWorks as another application development tool, and recommended not going live with a BPMS without metrics for you to understand the benefits. That sounds like good advice for any organization finding themselves going down the rabbit hole of BPMS application development when they really need to focus on their processes.

Using Business Space for Human Workflows

Back to the breakouts for the rest of the afternoon, I attended a presentation and demo by Michael Friess of IBM’s Böblingen R&D lab on using Business Space to build user interfaces for human-centric processes.

Business Space is what I would call a mashup environment, although I think that IBM is avoiding that term because it just isn’t taken seriously in business; in other words, a portal-like composite application development environment where pre-built widgets from disparate sources can quickly be assembled into an application, with a great deal more interaction between the widgets than you would find in a simple portal. Business Space is, in fact, built on the Lotus Mashup Center infrastructure; I think they just prettied it up and gave it a more corporate-sounding name, since it bears a resemblance to the Lotus Mashup Center version that I played with a while back with the FileNet ECM widgets. It’s browser-based and is fairly clean-looking, with easy placement, resizing and configuration of widgets.

Friess considered both “traditional” (predefined structured) and dynamic human BPM, where the dynamic side includes collaboration, allowing the user to organize their own environment, and adaptive case management. Structured BPM typically has fixed user interfaces that have a specific mode of task assignment (get next, personal task list, team task list, or team-based allocation). Business Space, on the other hand, provides a semi-structured framework for BPM user interfaces where the BPM widgets can be assembled under the toolbar-like links to other spaces and pages; the widgets use REST interfaces to back-end IBM services such as WPS, Business Compass, Business Monitor, Business Fabric and ESB, as well as any other services available internally or externally via REST. Templates can be used to create pages with standard functionality, such as a vanilla BPM interface, which can then be customized to suit the specific application.

Each widget can be configured for the content (which tasks and properties are visible and editable to the user), the actions available to the user, and the display modes such as list or table view. Even if a specific user isn’t allowed to choose the widgets that appear on the page, they typically will have the ability to customize the view somewhat through built-in (server-side) filtering and sorting.

Once widgets are placed on a page and configured, they are wired together in order to create interactions between the widgets: for example, a task list widget will be wired to a task details widget so that the item selected in the list will be displayed in the details view.
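
Conceptually, the wiring is just publish/subscribe between widgets. Here’s a generic sketch of the idea – my own illustration, not IBM’s actual widget or REST API – with a task list publishing a selection event that a details widget subscribes to:

```python
class WiringBus:
    """Minimal publish/subscribe hub standing in for the mashup's wiring layer."""
    def __init__(self):
        self.subscribers = {}  # event name -> list of callbacks

    def wire(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        for callback in self.subscribers.get(event, []):
            callback(payload)

class TaskListWidget:
    def __init__(self, bus):
        self.bus = bus

    def select_task(self, task_id):
        # The user clicks a row; the widget publishes the selection onto the wiring.
        self.bus.publish("taskSelected", {"taskId": task_id})

class TaskDetailsWidget:
    def __init__(self, bus):
        bus.wire("taskSelected", self.show)

    def show(self, payload):
        # In a real widget this would call a REST service to fetch the task data.
        print(f"Showing details for task {payload['taskId']}")

bus = WiringBus()
task_list = TaskListWidget(bus)
details = TaskDetailsWidget(bus)
task_list.select_task("T-1001")  # -> Showing details for task T-1001
```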

There are a number of BPM widgets available, including task list, task details, escalations list, human workflow diagram (from the process model, which will change to indicate any new collaboration tasks) and even free-form forms; these in turn allow any sort of BPM functionality such as spawning a collaboration task. Care must be taken in constructing the queries that underlie the list-type widgets, although that would be true in any user interface development that presents a list to a user; the only specific consideration here is that the mashup may not be constructed by a developer, but rather by a business analyst, which may require a developer to predefine some views or queries for use by the widgets.

If you’ve seen any mashup environment, this is all going to look pretty familiar, but I consider that a good thing: the ability to build composite applications like this is critical in many situations where full application development can’t be justified, especially for prototype and situational applications, but also to replace the end user computing applications that your business analysts have previously built in Excel or Access. Unfortunately, I think that some professional services types feel that mashup environments and widgets are toys rather than real application development tools; that’s an unfortunate misconception, since these can be every bit as functional and scalable as writing custom Java code, and a lot more agile. You’re probably not going to use mashups and widgets for every user interface in BPM, but it should be a part of your application development toolkit.