TIBCO Corporate and Technology Analyst Briefing at TUCON2012

Murray Rode, COO of TIBCO, started the analyst briefings with an overview of technology trends (as we heard this morning, mobile, cloud, social, events) and business trends (loyalty and cross-selling, cost reduction and efficiency gains, risk management and compliance, metrics and analytics) to create the four themes that they’re discussing at this conference: digital customer experience, big data, social collaboration, and consumerization of IT. TIBCO provides a platform of integrated products and functionality in five main areas:

  • Automation, including messaging, SOA, BPM, MDM, and other middleware
  • Event processing, including events/CEP, rules, in-memory data grid, and log management
  • Analytics, including visual analysis, data discovery, and statistics
  • Cloud, including private/hybrid model, cloud platform apps, and deployment options
  • Social, including enterprise social media and collaboration

A bit disappointing to see BPM relegated to being just a piece of the automation middleware, but important to remember that TIBCO is an integration technology company at heart, and that’s ultimately what BPM is to them.

Taking a look at their corporate performance, they have almost $1B in revenue for FY2011, showing growth of 44% over the past two years, with 4,000 customers and 3,500 employees. They continue to invest 14% of revenue into R&D with a 20% increase in headcount, and significant increases in investment in sales and marketing, which is pushing this growth. Their top verticals are financial services and telecom, and while they still do 50% of their business in the Americas, EMEA is at 40%, with APJ making up the other 10% and showing the largest growth. They have a broad core sales force, but have dedicated sales forces for a few specialized products, including Spotfire, tibbr and Nimbus, as well as for vertical industries.

They continue to extend their technology platform through acquisitions and organic growth across all five areas of the platform functionality. They see the automation components as being “large and stable”, meaning we can’t expect to see a lot of new investment here, while the other four areas are all “increasing”. Not too surprising considering that AMX BPM was a fairly recent and major overhaul of their BPM platform and (hopefully) won’t need major rework for a while, and the other areas all include components that would integrate as part of a BPM deployment.

Matt Quinn then reviewed the technology strategy: extending the number of components in the platform as well as deepening the functionality. We heard about some of this earlier, such as the new messaging appliances and Spotfire 5 release, some recent releases of existing platforms such as ActiveSpaces, ActiveMatrix and Business Events, plus some cloud, mobile and social enhancements that will be announced tomorrow so I can’t tell you about them yet.

We also heard a bit more on the rules modeling that I saw before the sessions this morning: it’s their new BPMN modeling for rules. This uses BPMN 1.2 notation to chain together decision tables and other rule components into decision services, which can then be called directly as tasks within a BPMN process model, or exposed as web services (SOAP only for now, but since ActiveMatrix is now supporting REST/JSON, I’m hopeful that REST-exposed decision services are coming). Sounds a bit weird, but it actually makes sense when you think about how rules are formed into composite decision services.
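To make the composite decision service idea concrete, here’s a minimal sketch in Python. The rules, table names and thresholds are all invented for illustration, not TIBCO’s actual rule language; the point is just that the output of one decision table feeds the next, and the whole chain is exposed as one callable service.

```python
# Hypothetical sketch: decision tables chained into a composite
# decision service. All rules and names here are invented.

def credit_score_table(applicant):
    """First decision table: map a raw score to a risk band."""
    if applicant["score"] >= 720:
        return "low"
    elif applicant["score"] >= 620:
        return "medium"
    return "high"

def limit_table(risk_band, income):
    """Second decision table: map risk band + income to a credit limit."""
    base = {"low": 0.5, "medium": 0.3, "high": 0.1}[risk_band]
    return round(income * base, 2)

def credit_decision_service(applicant):
    """Composite decision service: chain the tables, return one result.
    This is what a BPMN task (or a SOAP/REST endpoint) would invoke."""
    band = credit_score_table(applicant)
    return {"risk": band, "limit": limit_table(band, applicant["income"])}

print(credit_decision_service({"score": 680, "income": 50000}))
# → {'risk': 'medium', 'limit': 15000.0}
```

In a BPMN model, the process would call `credit_decision_service` as a single task, without needing to know how many tables sit behind it.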

There was a lot more information about a lot more products, and then my head exploded.

Like others in the audience, I started getting product fatigue, and just picking out details of products that are relevant to me. This really drove home that the TIBCO product portfolio is big and complex, and this might benefit from having a few separate analyst sessions with some sort of product grouping, although there is so much overlap and integration in product areas that I’m not sure how they would sensibly split it up. Even for my area of coverage, there was just too much information to capture, much less absorb.

We finished up with a panel of the top-level TIBCO execs, the first question of which was about how the sales force can even start to comprehend the entire breadth of the product portfolio in order to be successful selling it. This isn’t a problem unique to TIBCO: broad-based platform vendors such as IBM and Oracle have the same issue. TIBCO’s answer: specialized sales force overlays for specific products and industry verticals, and selling solutions rather than individual products. Both of those work to a certain extent, but often solutions end up being no more than glorified templates developed as sales tools rather than actual solutions, and can lead to more rather than less legacy code.

Because of the broad portfolio, there’s also confusion in the customer base, many of whom see one TIBCO product and have no idea of everything else that TIBCO does. Since TIBCO is not quite a household name like IBM or Oracle, companies don’t necessarily know that TIBCO has other things to offer. One of my banking clients, on hearing that I am at the TIBCO conference this week, emailed “Heard of them as a player in the Cloud Computing space. What’s different or unique about them vs others?” Yes, they play in the cloud. But that’s hardly what you would expect a bank (that uses very little cloud infrastructure, and likely does have some TIBCO products installed somewhere) to think of first when you mention TIBCO.

TIBCO TUCON2012 Day 1 Keynotes, Part 2: Big Honking Data

Back from the mid-morning break, CMO Raj Verma shifted gears from customer experience management to look at one of the other factors introduced in the first part of the session: big data.

Matt Quinn was back to talk about big data: in some ways, this isn’t new, since there has been a lot of data within enterprises for many years. What’s changed is that we now have the tools to deal with it, both in place and in motion, to find the patterns hiding within it through cleansing and transformation. He made a sports analogy: a game is not just about the final score, but about all of the events that happen to make up the entire game; similarly, it is not sufficient any more to just measure outcomes in business transactions, you have to monitor patterns in the event streams and combine that with historical data to make the best possible decisions about what is happening right now. He referred to this combination of event processing and analytics as closing the loop between data in motion and data at rest. TIBCO provides a number of products that combine to handle big data: not just CEP, but ActiveSpaces (the in-memory data grid) to enable realtime processing, Spotfire for visual analytics, and integration with Hadoop.
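The “closing the loop” idea can be sketched in a few lines: compute a baseline from data at rest, then score the in-motion event stream against it. This is my own toy illustration (the numbers and threshold are invented), not TIBCO code.

```python
# Illustrative sketch of closing the loop between data at rest and
# data in motion: a baseline from historical data scores live events.
from statistics import mean, stdev

historical = [102, 98, 101, 97, 103, 99, 100, 100]  # data at rest
baseline, spread = mean(historical), stdev(historical)

def flag_anomalies(stream, threshold=3.0):
    """Flag in-motion events that deviate from the historical baseline
    by more than `threshold` standard deviations."""
    return [e for e in stream if abs(e - baseline) > threshold * spread]

print(flag_anomalies([101, 99, 140, 100]))  # → [140]
```

A CEP engine does the same thing continuously and at scale, with the historical baseline living in something like an in-memory data grid rather than a list.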

We saw a demo of LogLogic, recently acquired by TIBCO, which provides analytics and event detection on server logs. This might sound like a bit of a boring topic, but I’m totally on board with this: too many companies just turn off logging on their servers because it generates too many events that they just can’t do anything with, and it impacts performance since logging is done on the operational server. LogLogic’s appliance can collect enormous amounts of log data, detect unusual events based on various rules, and integrate with Spotfire for visualization of potential security threats.

Mark Lorion, CMO for TIBCO Spotfire, came up to announce Spotfire 5, with a complete overhaul to the analytics engine, and including the industry’s first enterprise runtime for the R statistical language, providing 10 times the performance of the open source R project for predictive analytics. Self-service predictive analytics, ftw. They are also going beyond in-memory, integrating with Teradata, Oracle and Microsoft SQL Server for in-database analysis. With Teradata horsepower behind it – today’s announcement of Spotfire being optimized for in-database computation on Teradata – you can now do near-realtime exploration and visualization of some shocking amounts of data. Brad Hopper gave us a great Spotfire demo, not something that most TUCON attendees are used to seeing on the main stage.

Rob Friel, CEO of PerkinElmer, took the stage to talk about how they are using big data and analytics in their scientific innovations in life sciences: screening patient data, environmental samples, human genomes, and drug trials to detect patterns that can improve quality of life in some way. They screened 31 million babies born last year (one in four around the globe) through the standard heel-prick blood test, and detected 18,000 with otherwise undiagnosed disorders that could be cured or treated. Their instrumentation is key in acquiring all the data, but once it’s there, tools such as Spotfire empower their scientists to discover and act on what they find in the data. Just as MGM Grand is delivering unique experiences to each customer, PerkinElmer is trying to enable personalized health monitoring and care for each patient.

To wrap up the big data section, Denny Page, TIBCO’s VP of Engineering, came on stage with his new hardware babies: an FTL message switch and an EMS appliance, both to be available by the end of November 2012.

For the final part of the day 1 keynotes, we heard from an innovators’ panel of Scott McNealy (founder of Sun Microsystems, now chairman of Wayin), Tom Siebel (founder of Siebel Systems, now at C3 Energy where they are using TIBCO for energy usage analytics), Vivek Ranadivé, and KR Sridhar (CEO of Bloom Energy), chaired by David Kirkpatrick. Interesting and wide-ranging discussion about big data, analytics, sentiment analysis, enterprise social media, making data actionable, the internet of things and how a low barrier to platform exit drives innovation. The panel thinks that the best things in tech are yet to come, and I’m in agreement, although those who are paranoid about the impact of big data on their privacy should be very, very afraid.

I’ll be blogging from the analyst event for the rest of the day: we have corporate and technology briefings from the TIBCO execs plus some 1:1 sessions. No pool time for me today!

TIBCO TUCON2012 Day 1 Keynotes, Part 1

The keynotes started with TIBCO’s CEO, Vivek Ranadivé, talking about the forces driving change: a massive explosion of data (big data), the emergence of mobility, the emergence of platforms, the rise of Asia (he referenced the Gangnam Style video, although did not actually do the dance), and how math is trumping science (e.g., the detection and exploitation of patterns). The ability to harness these forces and produce extreme value is a competitive differentiator, and is working for companies like Apple and Amazon.

Raj Verma, TIBCO’s CMO, was up next, continuing the message of how fast things are changing: more iPhones were sold over the past few days than babies were born worldwide, and Amazon added more computing capacity last night than they had in total in 2001. He (re)introduced their concept of the two-second advantage – the right information a little bit before an event is worth infinitely more than any amount of information after the event – enabled by an event-enabled enterprise (or E3, supported by, of course, TIBCO infrastructure). Regardless of whether or not you use TIBCO products, this is a key point: if you’re going to exploit the massive amounts of data being generated today in order to produce extreme value, you’re going to need to be an event-enabled enterprise, responding to events rather than just measuring outcomes after the fact.

He discussed the intersection of four forces: cloud, big data, social collaboration and mobility. This is not a unique message – every vendor, analyst and consultant is talking about this – but he dug into some of these in detail: mobile, for example, is no longer discretionary, even (or maybe especially) in countries where food and resources are scarce. The four of these together all overlap in the consumerization of IT, and are reshaping enterprise IT. A key corporate change driven by these is customer experience management: becoming the brand that customers think of first when the product class is mentioned, and turning customers into fans. Digital marketing, properly done, turns your business into a social network, and turns customer management into fan management.

Matt Quinn, CTO, continued the idea of turning customers into fans, and solidifying customer loyalty. To do this, he introduced TIBCO’s “billion dollar backend” with its platform components of automation, event processing, analytics, cloud and social, and hosted a series of speakers on the subject of customer experience management.

We then heard from a customer, Chris Nordling, EVP of Operations and CIO of MGM Resorts and CityCenter, who use TIBCO for their MLife customer experience management/loyalty program. Their vision is to track everything about you from your gambling wins/losses to your preferences in restaurants and entertainment, and use that to build personalized experiences on the fly. By capturing the flow of big data and responding to events in realtime, the technology provides their marketing team with the ability to provide a zero-friction offer to each customer individually before they even know that they want something: offering reduced entertainment tickets just as you’re finishing a big losing streak at the blackjack tables, for example. It’s a bit creepy, but at the same time, has the potential to provide a better customer experience. Just a bit of insight into what they’re spending that outrageous $25/day resort fee on.

Quinn came back to have a discussion with one of their “loyalty scientists” (really??) about Loyalty Lab, TIBCO’s platform/service for loyalty management, which is all about analyzing events and data in realtime, and providing “audience of one” service and offerings. Traditional loyalty programs were transaction-based, but today’s loyalty programs are much more about providing a more holistic view of the customer. This can include not just events that happen in a company’s own systems, but include external social media information, such as the customer’s tweets. I know all about that.

Another customer, Rick Welts of the Golden State Warriors (who, ironically, play at Oracle Arena) talked about not just customer loyalty management, but the Moneyball-style analytics that they apply to players on a very granular scale: each play of each game is captured and analyzed to maximize performance. They’re also using their mobile app for a variety of customer service initiatives, from on-premise seat upgrades to ordering food directly from your seat in the stadium.

Mid-morning break, and I’ll continue afterwards.

As an aside, I’m not usually wide awake enough to get much out of the breakfast-in-the-showcase walkabout, but this morning prior to the opening sessions, I did have a chance to see the new TIBCO decision services integrated into BPM, also available as standalone services. Looked cool, more on that later.

CASCON Workshop: Accelerate Service Integration In Your BPM and SOA Applications

I’m attending a workshop at the first morning of CASCON, the conference on software research hosted by IBM Canada. There’s quite a bit of good work done at the IBM Toronto software lab, and this annual conference gives them a chance to engage the academic and corporate community to present this research.

The focus of this workshop is service integration, including enabling new services from existing applications and creating new services by composing from existing services. Hacking together a few services into a solution is fairly simple, but your results may not be all that predictable; industrial-strength service integration is a bit more complex, and is concerned with everything from reusability to service level agreements. As Allen Chan of IBM put it when introducing the session: “How do we enable mere mortals to create a service integration solution with predictable results and enterprise-level reliability?”

The first presentation was by Mannie Kagan, an IBMer who is working with TD Bank on their service strategy and implementation; he walked us through a real-life example of how to integrate services into a complex technology environment that includes legacy systems as well as newer technologies. Based on this, and a large number of other engagements by IBM, they are able to discern patterns in service integration that can greatly aid in implementation. Patterns can appear at many levels of granularity, which they classify as primitive, subflow, flow, distributed flow, and connectivity topology. From there, they have created an ESB framework pattern toolkit, an Eclipse-based toolkit that allows for the creation of exemplars (templates) of service integration that can then be adapted for use in a specific instance.

He discussed two particular patterns that they’ve found to be particularly useful: web service notification (effectively, pub-sub over web services), and SCRUD (search, create, read, update, delete); think of these as some basic building blocks of many of the types of service integrations that you might want to create. This was presented in a specific IBM technology context, as you might imagine: DataPower SOA appliances for processing XML messages and legacy message transformations, and WebSphere Services Registry and Repository (WSRR) for service governance.
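For readers who haven’t met SCRUD before, here’s a minimal sketch of the building block as a plain Python class; a real ESB pattern instance would generate the equivalent service endpoints and message mappings rather than in-process calls.

```python
# Minimal sketch of the SCRUD building block (search, create, read,
# update, delete). An ESB pattern would expose these as service
# operations; this in-memory version just shows the interface shape.

class ScrudService:
    def __init__(self):
        self._store, self._next_id = {}, 1

    def create(self, record):
        rid, self._next_id = self._next_id, self._next_id + 1
        self._store[rid] = dict(record)
        return rid

    def read(self, rid):
        return self._store[rid]

    def update(self, rid, fields):
        self._store[rid].update(fields)

    def delete(self, rid):
        del self._store[rid]

    def search(self, **criteria):
        """Return ids of records matching all given field values."""
        return [rid for rid, rec in self._store.items()
                if all(rec.get(k) == v for k, v in criteria.items())]

svc = ScrudService()
rid = svc.create({"name": "ACME", "region": "EMEA"})
svc.update(rid, {"region": "APJ"})
print(svc.search(region="APJ"))  # → [1]
```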

In his wrapup, he pointed out that not all patterns need to be created at the start, and that patterns can be created as required when there is evidence of reuse potential. Since patterns take more resources to create than a simple service integration, you need to be sure that there will be reuse before it is worth creating a template and adding it to the framework.

Next up was Hans-Arno Jacobsen of University of Toronto discussing their research in managing SLAs across services. He started with a business process example of loan application processing that included automated credit check services, and had an SLA in terms of parameters such as total service subprocess time, service roundtrip time, service cost and service uptime. They’re looking at how the SLAs can guide the efficient execution of processes, based in a large part on event processing to detect and determine the events within the process (published state transitions). He gave quite a detailed description of content-based routing and publish-subscribe models, which underlie event-driven BPM, and their PADRES ESB stack, which hides the intricacies of the underlying network and system events from the business process execution by creating an overlay of pub-sub brokers that filters and distributes those events. In addition to the usual efficiencies created by the event pub-sub model, this allows (for example) the correlation of network slowdowns with business process delays, so that the root cause of a delay can be understood. Real-time business analytics can also be driven from the pub-sub brokers.
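Content-based routing is easy to sketch: subscribers register predicates over message content, and the broker delivers each event only to subscriptions whose predicate matches. This is a deliberately simplified single-broker illustration (my own, with invented event fields); overlays like PADRES chain many brokers and filter in-network.

```python
# Hedged sketch of content-based routing: delivery is decided by the
# content of each event, not by a topic name. Event fields are invented.

class ContentBroker:
    def __init__(self):
        self._subs = []  # list of (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        self._subs.append((predicate, handler))

    def publish(self, event):
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)

delays = []
broker = ContentBroker()
# Subscribe only to process-delay events over 500 ms
broker.subscribe(lambda e: e["type"] == "delay" and e["ms"] > 500,
                 delays.append)
broker.publish({"type": "delay", "ms": 800, "process": "loan-approval"})
broker.publish({"type": "delay", "ms": 200, "process": "loan-approval"})
print(len(delays))  # → 1
```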

He finished by discussing how business processes can actually be guided by SLAs, that is, runtime use of SLAs rather than just for monitoring processes. If the process can be allocated to multiple resources in a fine-grained manner, then the ESB broker can dynamically determine the assignment of process parts to resources based on how well those resources are meeting their SLAs, or expected performance based on other factors such as location of data or minimization of traffic. He gave an example of optimization based on minimizing traffic by measuring message hops, which takes into account both rate of message hops and distance between execution engines. This requires that the distributed execution engines include engine profiling capabilities that allow an engine to determine not only its own load and capacity, but that of other engines with which it communicates, in order to minimize cost over the entire distributed process. To fine-tune this sort of model, process steps that have a high probability of occurring in sequence can be dynamically bound to the same execution engine. In this situation, they’ve seen a 47% reduction in traffic, and a 50% reduction in cost relative to the static deployment model.
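The co-location idea is simple enough to sketch: steps that frequently follow one another get bound to the same execution engine so the hand-off costs no message hops. The transition probabilities and threshold below are invented for illustration; the real system measures these at runtime via engine profiling.

```python
# Toy sketch of traffic minimization by co-locating sequential steps.
# Transition probabilities and the threshold are invented.

transitions = {("validate", "score"): 0.9,   # P(score follows validate)
               ("score", "approve"): 0.4,
               ("approve", "notify"): 0.8}

def assign_engines(steps, threshold=0.5):
    """Greedily bind consecutive high-affinity steps to one engine;
    start a new engine (accepting a hop) when affinity is low."""
    engines, current = {steps[0]: 0}, 0
    for prev, step in zip(steps, steps[1:]):
        if transitions.get((prev, step), 0) < threshold:
            current += 1  # low affinity: new engine, hop accepted
        engines[step] = current
    return engines

print(assign_engines(["validate", "score", "approve", "notify"]))
# → {'validate': 0, 'score': 0, 'approve': 1, 'notify': 1}
```

Here the validate→score hand-off stays in-engine, while the weaker score→approve affinity justifies crossing to a second engine.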

After a brief break, Ignacio Silva-Lepe from IBM Research presented on federated SOA. SOA today is mostly used in a single domain within an organization, that is, it is fairly siloed in spite of the potential for services to be reused across domains. Whereas a single domain will typically have its own registry and repository, a federated SOA can’t assume that is the case, and must be able to discover and invoke services across multiple registries. This requires a federation manager to establish bridges across domains in order to make the service group shareable, and inject any cross-domain proxies required to invoke services across domains.

It’s not always appropriate to have a designated centralized federation manager, so there is also the need for domain autonomy, where each domain can decide what services to share and specify the services that it wants to reuse. The resulting cross-domain service management approach allows for this domain autonomy, while preserving location transparency, dynamic selection and other properties expected from federated SOA. In order to enable domain autonomy, the domain registry must not only have normal service registry functionality, but also references to required services that may be in other domains (possibly in multiple locations). The registries then need to be able to do a bilateral dissemination and matching of interest and availability information: it’s like internet dating for services.
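The “internet dating for services” bit can be sketched as plain set intersection: each domain registry publishes what it offers and what it needs, and a bilateral match pairs one domain’s requirements with another’s availability. Service names below are invented.

```python
# Sketch of bilateral interest/availability matching between two
# autonomous domain registries. Service names are invented.

domain_a = {"offers": {"credit-check", "kyc"}, "needs": {"fx-rates"}}
domain_b = {"offers": {"fx-rates", "payments"}, "needs": {"credit-check"}}

def match(d1, d2):
    """Return the services each domain can source from the other."""
    return {"d1_gets": d1["needs"] & d2["offers"],
            "d2_gets": d2["needs"] & d1["offers"]}

print(match(domain_a, domain_b))
# → {'d1_gets': {'fx-rates'}, 'd2_gets': {'credit-check'}}
```

The matched pairs are where the federation layer would then inject cross-domain proxies so that invocation stays location-transparent.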

They have quite a bit of work planned for the future, beyond the fairly simple matching of interest to availability: allowing domains to restrict visibility of service specifications to authorized parties without using a centralized authority, for example.

Marsha Chechik, also from University of Toronto, gave a presentation on automated integration determination; like Jacobsen, she collaborates with IBM Research on middleware and SOA research; unlike Jacobsen, however, she is presenting research at a much earlier stage. She started with a general description of integration, where a producer and a consumer share some interface characteristics. She went on to discuss interface characteristics (what already exists) and service exposition characteristics (what we want): the as-is and to-be state of service interfaces. For example, there may be a requirement for idempotence, where multiple “submit” events over an unreliable communications medium would result in only a single result. In order to resolve the differences in characteristics between the as-is and to-be, we can consider typical service interface patterns, such as data aggregation, mapping or choreography, to describe the resolution of any conflicts. The problem, however, is that there are too many patterns, too many choices and too many dependencies. The goal of their research is to identify essential integration characteristics and make a language out of them, identify a methodology for describing aspects of integration, identify the order in which patterns can be determined, identify decision trees for integration pattern determination, and determine cases where integration is impossible.
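The idempotence requirement mentioned above is worth a quick sketch: repeated “submit” messages over an unreliable channel must produce exactly one result, which a service can guarantee by deduplicating on a request id. This is my own illustration, not from the presentation.

```python
# Illustrative sketch of an idempotent submit: duplicates of the same
# request id replay the cached result instead of reprocessing.

class IdempotentSubmit:
    def __init__(self):
        self._seen = {}  # request_id -> cached result

    def submit(self, request_id, payload):
        if request_id in self._seen:       # duplicate: replay the result
            return self._seen[request_id]
        result = f"processed:{payload}"    # the side effect happens once
        self._seen[request_id] = result
        return result

svc = IdempotentSubmit()
first = svc.submit("req-42", "loan-app")
retry = svc.submit("req-42", "loan-app")  # client retried after timeout
print(first == retry, len(svc._seen))     # → True 1
```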

Their first insight was to separate pattern-related concerns between physical and logical characteristics; every service has elements of both. They have a series of questions that begin to form a language for describing the service characteristics, and a classification for the results from those questions. The methodology contains a number of steps:

  1. Determine principal data flow
  2. Determine data integrity data flow, e.g., stateful versus stateless
  3. Determine reliability flow, e.g., mean time between failure
  4. Determine efficiency, e.g., response time
  5. Determine maintainability

Each of these steps determines characteristics and mapping to integration patterns; once a step is completed and decisions made, revisiting it should be minimized while performing later steps.

It’s not always possible to provide a specific characteristic for any particular service; their research is working on generating decision trees for determining if a service requirement can be fulfilled. This results in a pattern decision tree based on types of interactions; this provides a logical view but not any information on how to actually implement them. From there, however, patterns can be mapped to implementation alternatives. They are starting to see the potential for automated determination of integration patterns based on the initial language-constrained questions, but aren’t seeing any hard results yet. It will be interesting to see this research a year from now to see how it progresses, especially if they’re able to bring in some targeted domain knowledge.
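A pattern decision tree of the kind described can be sketched as a nested structure where answers to the characteristic questions walk the tree and each leaf names an integration pattern. The questions and pattern names below are invented for illustration, since the research hasn’t published concrete trees yet.

```python
# Loose sketch of a pattern decision tree: each internal node is a
# question, each leaf names an integration pattern (all invented).

tree = ("stateful?",
        ("ordered?", "choreography", "data-aggregation"),   # yes branch
        ("sync?", "request-reply", "fire-and-forget"))      # no branch

def decide(node, answers):
    """Walk the tree using the answers dict; return the leaf pattern."""
    if isinstance(node, str):
        return node  # leaf: the chosen pattern
    question, yes, no = node
    return decide(yes if answers[question] else no, answers)

print(decide(tree, {"stateful?": True, "ordered?": False}))
# → data-aggregation
```

The next step in their research, mapping each leaf to implementation alternatives, would hang concrete middleware configurations off these leaves.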

Last up in the workshop was Vadim Berestetsky of IBM’s ESB tools development group, presenting on support for patterns in IBM integration offerings. He started with a very brief description of an ESB, and WebSphere Message Broker as an example of an ESB that routes messages from anywhere to anywhere, doing transformations and mapping along the way. He basically walked through the usage of the product for creating and using patterns, and gave a demo (where I could see vestiges of the MQ naming conventions). A pattern specification typically includes some descriptive text and solution diagrams, and provides the ability to create a new instance from this pattern. The result is a service integration/orchestration map with many of the properties already filled in; obviously, if this is close to what you need, it can save you a lot of time, like any other template approach.

In addition to demonstrating pattern usage (instantiation), he also showed pattern creation by specifying the exposed properties, artifacts, points of variability, and (developer) user interface. Looks good, but nothing earth-shattering relative to other service and message broker application development environments.

There was an interesting question that goes to the heart of SOA application development: is there any control over what patterns are created and published to ensure that they are useful as well as unique? The answer, not surprisingly, is no: that sort of governance isn’t enforced in the tool since architects and developers who guide the purchase of this tool don’t want that sort of control over what they do. However, IBM may see very similar patterns being created by multiple customer organizations, and choose to include a general version of that pattern in the product in future. A discussion about using social collaboration to create and approve patterns followed, with Berestetsky hinting that something like that might be in the works.

That’s it for the workshop; we’re off to lunch. Overall, a great review of the research being done in the area of service integration.

This afternoon, there’s the keynote and a panel that I’ll be attending. Tomorrow, I’ll likely pop in for a couple of the technical papers and to view the technology showcase exhibits, then I’m back Wednesday morning for the workshop on practical ontologies, and the women in technology lunch panel. Did I mention that this is a great conference? And it’s free?

TIBCO Now Roadshow: Toronto Edition (Part 2)

We started after the break with Jeremy Westerman, head of BPM product marketing for TIBCO, presenting on AMX BPM. The crowd is a bit slow returning, which I suspect is due more to the availability of Wii Hockey down the hall than to the subject matter. Most telling, Westerman has the longest timeslot of the day, 45 minutes, which shows the importance that TIBCO is placing on marketing efforts for this new generation of their BPM platform. As I mentioned earlier, I’ve had 3+ hours of briefing on AMX BPM recently and think that they’ve done a good job of rearchitecting – not just refactoring – their BPM product to a modern architecture that puts them in a good competitive position, assuming that they can get the customer adoption. He started by talking about managing business processes as strategic assets, and the basics of what it means to move processes into a BPMS, then moved on to the TIBCO BPM products: Business Studio for modeling, the on-premise AMX BPM process execution environment, and the cloud-based Silver BPM process execution environment. This built well on their earlier messages about integration and SOA, since many business processes – especially for the finance-heavy audience here today – are dependent on integrating data and messaging with other enterprise systems. Business-friendly is definitely important for any BPM system, but the processes also have to be able to punch at enterprise weight.

His explanation of work management also covered optimizing people within the process: maximizing utilization while still meeting business commitments through intelligent routing, unified work lists and process/work management visibility. A BPM system allows a geographically distributed group of resources to be treated as a single pool for dynamic, tunable work management, so that the actual organizational model can be used rather than an artificial model imposed by location or other factors. This led into a discussion of workflow patterns, such as separation of duties, which they are starting to build into AMX BPM as I noted in my recent review. He walked through other functionality such as UI creation, analytics and event processing; although I’ve seen most of this before, it was almost certainly new to everyone except the few people in the room who had attended TUCON back in May. The BPM booth was also the busiest one during the break, indicating a strong audience interest; I’m sure that most BPM vendors are seeing this same level of interest as organizations still recovering from the recession look to optimize their processes to cut costs and provide competitive advantage.
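The pooled work-management idea reduces to routing by skill and current load rather than geography. Here’s a hypothetical sketch (worker names, skills and loads all invented) of the kind of routing decision an intelligent work manager makes:

```python
# Hypothetical sketch of pooled work routing: one logical pool across
# locations, with work going to the least-loaded qualified worker.

workers = [{"name": "aya", "skills": {"claims"}, "load": 2},
           {"name": "ben", "skills": {"claims", "fraud"}, "load": 1},
           {"name": "coe", "skills": {"fraud"}, "load": 0}]

def route(work_item):
    """Pick the least-loaded worker whose skills match the item."""
    eligible = [w for w in workers if work_item["skill"] in w["skills"]]
    chosen = min(eligible, key=lambda w: w["load"])
    chosen["load"] += 1
    return chosen["name"]

print(route({"skill": "claims"}))  # → ben
```

A real work manager would layer business commitments (deadlines, separation of duties) on top of this basic skill/load match.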

Ivan Casanova, director of cloud marketing for TIBCO, started with some pretty simple Cloud 101 stuff, then outlined their Silver line of cloud platforms: Silver CAP for developing cloud services, Silver Fabric for migrating existing applications, Silver BPM for process management, and Silver Spotfire for analytics. Some portion of the IT-heavy audience was probably thinking “not in my data centre, dude!”, but eventually every organization is going to have to think about what a cloud platform brings in terms of speed of deployment, scalability, cost and ability to collaborate outside the enterprise. Although he did talk about using Fabric for “private cloud” deployments that leverage cloud utility computing principles for on-premise systems, he didn’t mention the most likely baby step for organizations who are nervous about putting production data in the cloud, which is to use the cloud for development and testing, then deploy on premise. He finished with a valid point about how they have a lot of trust from their customers, and how they’ve built cloud services that suit their enterprise customers’ privacy needs; IBM uses much the same argument about why you want to use a large, established, trusted vendor for your cloud requirements rather than some young upstart.

We then heard from Greg Shevchik, a TIBCO MDM specialist, for a quick review of the discipline of master data management and TIBCO’s Collaborative Information Manager (CIM). CIM manages the master data repositories shared by multiple enterprise systems, and allows other systems – such as AMX BPM – to use data from that single source. It includes a central data repository; governance tools for validation and de-duplication; workflow for managing the data repository; synchronization of data between systems; and reporting on MDM.

Last up for the Toronto TIBCO Now was Al Harrington (who was the mystery man who opened the day), giving us a quick view of the new generation of TIBCO’s CEP product, BusinessEvents. There’s a lot to see here, and I probably need to get a real briefing to do it justice; events are at the heart of so many business processes that CEP and BPM are becoming ever more intertwined.

My battery just hit 7% and we’re after 5pm, so I’ll wrap up here. The TIBCO Now roadshow provides a good overview of their updated technology portfolio and the benefits for customers; check for one coming your way.

TIBCO Product Stack and New Releases

We’re overtime on the general session, 2.75 hours without a break, and Matt Quinn is up to talk about the TIBCO product stack and some of the recent releases as well as upcoming releases:

  • Spotfire 3.1
  • BusinessEvents 4.0, with an improved Eclipse-based development environment including a rule debugger, and a multi-threaded engine
  • BEViews (BusinessEvents Views) for creating real-time customizable dashboards for monitoring high-speed events (as opposed to Spotfire, which can include data from a much broader context)
  • ActiveSpaces Suite for in-memory processing, grid computing and events, with the new AS Transactions and AS Patterns components
  • Silver Suite for cloud deployment, including Fabric, Grid and CAP (Composite Application Platform)
  • PeopleForms, which I saw a brief view of yesterday: a lightweight, forms-based application development environment
  • tibbr, their social microblogging platform; I think that they’re pushing too much of the social aspect here, when I think that their sweet spot is in being able to “follow” and receive messages/events from devices rather than people
  • Silver Analytics
  • ActiveMatrix 3.0, which is an expansion of the lightweight application development platform to make it more enterprise-ready
  • ActiveMatrix BPM, which he called “the next generation of BPM within TIBCO” – I’ll have more on this after an in-depth briefing
  • Silver BPM, the cloud-deployable version of BPM
  • Design Collaborator, which is a web-based design discovery tool that will be available in 2011: this appears to be their version of an online process discovery tool, although with more of a services focus than just processes; seems late to be introducing this functionality to the market

I heard much of this yesterday from Tom Laffey during the analyst session, but this was a good refresher since it’s a pretty big set of updates.

TIBCO: Now FTL!

We had a brief comment from Tom Laffey in the general session about TIBCO’s new ultra low latency messaging platform to be released by year end, which breaks the microsecond barrier. They’re calling it FTL, which makes my inner (or not so inner) geek giggle with happiness: for sci-fi fans, that’s the acronym for “Faster Than Light” spaceship drives. I love it when technology companies tip a nod to the geeks who use and write about their products, while remaining on topic.

It’s also new (for TIBCO) since it provides content-based routing and structured data support, which are, apparently, just as important as a cool name.
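Content-based routing means the messaging layer inspects the structured payload and decides delivery from predicates on its fields, rather than from a fixed subject or topic name. A minimal sketch of the idea (generic, not FTL’s API):

```python
def route(message, routes):
    """Content-based router: return every destination whose
    predicate matches the structured message payload.
    Generic illustration -- not FTL's API."""
    return [dest for predicate, dest in routes if predicate(message)]

# Hypothetical routing table for a trading feed
routes = [
    (lambda m: m.get("symbol") == "IBM", "ibm-desk"),
    (lambda m: m.get("qty", 0) > 1000, "block-trades"),
]
```

One message can match several routes, so a large block trade in IBM would fan out to both destinations.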

Deutsche Bank’s Wolfgang Gaertner at TUCON

The third keynote speaker this morning was Wolfgang Gaertner, CIO of Deutsche Bank: we’ve moved from international crime-fighting to the somewhat more mundane – but every bit as international and essential – world of banking. Their biggest challenge over the past few years has been to reduce the paper flow that was slowing the communication between their processing centers, reduce processing time, and improve customer service levels: all of which they have achieved. They’ve used TIBCO to integrate their multiple legacy systems, especially those from mergers and acquisitions such as they had with Berliner Bank, where they wanted to maintain the customer brand but integrate the back-end systems to allow for greater efficiency and governance.

They’re using BPM to manage some of the processes, such as special account opening and exception handling, and are finding that the new technology drives new opportunities: as other areas in the bank see what can be done with integration and BPM, they want to have that for their applications as well. They’re also planning to rip out their core legacy systems and replace them with SAP, and use TIBCO for integration and workflow: TIBCO is a big enabler here, since Deutsche Bank now has sufficient experience with TIBCO products to understand how it can be used to help with this technology transformation.

TIBCO Products Update

Tom Laffey, EVP of Products and Technology, gave us an update at the analyst session yesterday on their new product releases (embargoed until today), but started with an interesting timeline of their acquisitions. Unlike some companies, who make acquisitions just to remove a competitor from the market, TIBCO appears to have made some thoughtful buys over the years in order to build out a portfolio of infrastructure products. More than just being a Wall Street messaging company with Rendezvous, they have a full stack of mission-critical event processing, messaging, process management, analytics and more that puts them squarely in competition with the big players. Their competition differs across the product segments: IBM is their biggest competitor, but others include Oracle, some small players, and even open source in some cases. They offer fully-responsive 7×24 support through a series of worldwide support centers, handling more than 40,000 support requests per year.

Unfortunately, this leaves them with more than 200 products: a massive portfolio that makes it difficult for them to explain, and even more difficult for customers to understand. A core part of the portfolio is the “connect” part that we heard about earlier: moving point-to-point integrations onto a service bus, using products such as Rendezvous, EMS, BusinessWorks, all manner of adapters, ActiveMatrix, BusinessConnect, CIM, ActiveSpaces and tibbr. The “automate” part of the platform covers all of their BPM offerings: iProcess, the newly-announced ActiveMatrix BPM, Business Studio and PeopleForms. Laffey claimed up front that iProcess is not being replaced by ActiveMatrix BPM (methinks he doth protest too much), which means that there is likely some functionality overlap. The third part, “optimize”, includes Spotfire Suite, S+, BusinessEvents and Netrics.

He discussed their cloud strategy, which includes “internal clouds” (which, to many of us, are not really clouds) as well as external clouds such as AWS; the new Silver line of products – CAP, Grid, Integrator, Fabric, Federator and BPM – are deployable in the cloud.

The major product suites are, then:

  • ActiveMatrix (development, governance and integration)
  • ActiveMatrix BPM (BPM)
  • Spotfire (user-driven analytics and visualization)
  • BusinessEvents (CEP)
  • ActiveSpaces (in-memory technologies, datagrid, matching, transactions)
  • Silver (cloud and grid computing)

He dug back into the comparison between iProcess and ActiveMatrix BPM by considering the small number of highly-complex core business processes (such as claims processing) that are the focus for iProcess, versus the large number of tactical or situational small applications with simple workflows that are served by PeopleForms and ActiveMatrix BPM. He gave a quick demo that showed this sort of simple application development being completely forms-driven: create a form using a browser-based graphical form designer, then email it to a group of people to gather responses to the questions on the form. Although he referred to this as “simple BPM” and “BPM for the masses”, it’s not clear that there was any process management at all: just an email notification and gathering responses via a web form. Obviously, I need to see a lot more about this.
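To make my skepticism concrete: the entire “gather responses via a web form” flow from the demo amounts to something like the sketch below, with no routing, state transitions or escalation – in other words, no process. The class and method names are mine, not PeopleForms’.

```python
class Form:
    """Toy 'email a form, collect responses' flow, as in the demo.
    Hypothetical names -- not PeopleForms' API."""
    def __init__(self, questions):
        self.questions = questions
        self.responses = {}  # respondent -> answers

    def submit(self, respondent, answers):
        # The only 'process logic': reject incomplete submissions
        missing = [q for q in self.questions if q not in answers]
        if missing:
            raise ValueError("unanswered: " + ", ".join(missing))
        self.responses[respondent] = answers

    def summary(self):
        # Collate all answers per question
        return {q: [r[q] for r in self.responses.values()] for q in self.questions}
```

Useful, certainly, but calling it BPM sets the bar for “process” awfully low.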

WebSphere BPM Product Portfolio Technical Update

The keynote sessions this morning were typical “big conference”: too much loud music, comedians and irrelevant speakers for my taste, although the brief addresses by Steve Mills and Craig Hayman as well as this morning’s press release showed that process is definitely high on IBM’s mind. The breakout session that I attended following that, however, contained more of the specifics about what’s happening with IBM WebSphere BPM. This is a portfolio of products – in some cases, not yet really integrated – including Process Server and Lombardi.

Some of the new features:

  • A whole bunch of infrastructure stuff such as clustering for simple/POC environments
  • WS CloudBurst Appliance supports Process Server Hypervisor Edition for fast, repeatable deployments
  • Database configuration tools to help simplify creation and configuration of databases, rather than requiring back and forth with a DBA as was required with previous versions
  • Business Space has some enhancements, and is being positioned as the “Web 2.0 interface into BPM” (a message that they should probably pass on to GBS)
  • A number of new and updated widgets for Business Space and Lotus Mashups
  • UI integration between Business Space and WS Portal
  • Webform Server removes the need for a client form viewer on each desktop in order to interact with Lotus Forms – this is huge in cases where forms are used as a UI for BPM participant tasks
  • Version migration tools
  • BPMN 2.0 support, using different levels/subclasses of the language in different tools
  • Enhancements to WS Business Modeler (including the BPMN 2.0 support), including team support, and new constructs including case and compensation
  • Parallel routing tasks in WPS (amazing that they existed this long without that, but an artifact of the BPEL base)
  • Improved monitoring support in WS Business Monitor for ad hoc human tasks
  • Work baskets for human workflow in WPS, allowing for runtime reallocation of tasks – I’m definitely interested in more details on this
  • The ability to add business categories to tasks in WPS to allow for easier searching and sorting of human tasks; these can be assigned at design time or runtime
  • Instance migration to move long-running process instances to a new process schema
  • A lot of technical implementation enhancements, such as new WESB primitives and improvements to the developer environment, that likely meant a lot to the WebSphere experts in the room (which I’m not)
  • Allowing Business Monitor to better monitor BPEL processes
  • Industry accelerators (previously known as industry content packs) that include capability models, process flows, service interfaces, business vocabulary, data models, dashboards and solution templates – note that these are across seven different products, not some sort of all-in-one solution
  • WAS and BPM performance enhancements enabling scalability
  • WS Lombardi Edition: not sure what’s really new here except for the bluewashing

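Of the items in the list above, work baskets with runtime reallocation are the one I most want detail on; the concept itself is simple, something like this sketch (my own illustration, not WebSphere Process Server’s API):

```python
class WorkBasket:
    """Sketch of a work basket with runtime reallocation.
    Illustrative only -- not WPS's API."""
    def __init__(self, name):
        self.name = name
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)

    def reallocate(self, task, other):
        # Runtime reallocation: move a live task to another basket
        # without redeploying the process model
        self.tasks.remove(task)
        other.add(task)
```

The interesting questions are the ones the sketch dodges: who is authorized to reallocate, and how in-flight task state survives the move.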
I’m still fighting with the attendee site to get a copy of the presentation, so I’m sure that I’ve missed things here, but I have some roundtable and one-on-one sessions later today and tomorrow that should clarify things further. Looking at the breakout sessions for the rest of the day, I’m definitely going to have to clone myself in order to attend everything that looks interesting.

In terms of the WPS enhancements, many of the things that we saw in this session seem to be starting to bring WebSphere BPM level with other full BPM suites: it’s definitely expanding beyond being just a BPEL-based orchestration tool to include full support for human tasks and long-running processes. The question lurking in my mind, of course, is what happens to FileNet P8 BPM and WS Lombardi (formerly TeamWorks) as mainstream BPM engines if WPS can do it all in the future? Given that my recommendation at the time of the FileNet acquisition was to rip out BPM and move it over to the WebSphere portfolio, and the spirited response that I had recently to a post about customers not wanting 3 BPMSs, I definitely believe that more BPM product consolidation is required in this portfolio.