TUCON: BPM with Spotfire Analytics

Lars Bauerle and Brendan Gibson of TIBCO showed us how Spotfire analytics are being integrated with data from iProcess to identify process improvement opportunities. I hadn’t seen Spotfire in any detail before the demo that I saw on Tuesday, and it’s a very impressive visualization and analysis tool; today, they showed iProcess runtime data copied and pasted from Excel into Spotfire, but it’s not clear that they’ve done a real integration between the iProcess process statistics and Spotfire. Regardless, once you get the data in there, it’s very easy to do aggregations on the fly and drill into the results, compare portions of the data set, and filter by any attribute. You can also define KPIs and create dashboard-style interfaces. Authoring and heavy-duty analysis are done using an installed desktop application with (I believe) a local in-memory engine, while lightweight analysis can be done using a zero-install web client with all analysis done on the server.

In addition to local data, it’s possible to link directly from enterprise databases into the Spotfire client, which effectively gives the Spotfire user the ability to run queries that bring data into the in-memory engine for visualization and analysis — in other words, there don’t appear to be any technical barriers to establishing a link to the statistics in an iProcess engine. They showed a model of data flowing from the iProcess server to a data mart, which would then be connected to Spotfire; realistically, you’re not going to let your analytics hit your production process engine directly, so this makes sense, although there can be latency issues with this model. It’s not clear if they provide any templates for doing this, or for standard process analytics.
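To make the data mart model concrete — this is my own sketch, not anything TIBCO showed, and all table, column and connection names are invented — a scheduled extract job could periodically copy summarized case statistics from the engine’s database into a mart table that Spotfire then queries:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical nightly extract: copy summarized iProcess case statistics
// into a data mart table for Spotfire to query. Table, column and
// connection names are invented; the real iProcess schema will differ.
public class ProcessStatsExtract {
    public static void main(String[] args) throws Exception {
        try (Connection mart = DriverManager.getConnection(
                "jdbc:oracle:thin:@marthost:1521:MART", "etl_user", "secret");
             Statement stmt = mart.createStatement()) {
            // Read from the engine's statistics tables over a database link,
            // so analytics queries never hit the production engine directly.
            stmt.executeUpdate(
                "INSERT INTO mart_case_stats (proc_name, case_count, avg_duration_hrs, load_time) " +
                "SELECT proc_name, COUNT(*), AVG(duration_hrs), SYSDATE " +
                "FROM iprocess_cases@engine_link " +
                "GROUP BY proc_name");
        }
    }
}
```

The latency issue mentioned above falls directly out of this design: the data in Spotfire is only as fresh as the last extract run.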

They did a demo of some preconfigured analytics pages with process data, such as cases in progress and missed SLAs, showing what this could look like for a business manager or knowledge worker. Gibson did refer to "when you refresh the data from the database", which indicates that this is not real-time data, although it could be reasonably low latency depending on the link between iProcess and the data mart, and the client refresh frequency.

Then, the demo gods reared their heads and Spotfire froze, and hosed IE with it. Obviously, someone forgot to do the animal sacrifice this morning…

They went to questions while rebooting, and we found out that it’s not possible to stream data in real-time to Spotfire (as I suspected from the earlier comments); it needs to load data from a data source into its own in-memory engine on a periodic basis. In other words, you’re not going to use this as a real-time monitoring dashboard, but as an advanced visualization and analytics tool.

Since this uses an in-memory engine for analytics, there are limitations based on the physical memory of the machine doing the processing, but Spotfire does some smart things in terms of caching to disk, and swapping levels of aggregation in and out as required. However, at some point you’re going to have to consider loading a subset of your process history data via a database view.
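As an illustration of that last point — again my sketch, with invented names — a database view can pre-aggregate the history so that Spotfire’s in-memory engine loads one row per process per day rather than every audit-trail record:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical: define an aggregated view over process history so that
// only summarized rows are loaded into Spotfire's in-memory engine.
public class CreateHistoryView {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:oracle:thin:@marthost:1521:MART", "etl_user", "secret");
             Statement stmt = db.createStatement()) {
            stmt.executeUpdate(
                "CREATE OR REPLACE VIEW v_case_history_daily AS " +
                "SELECT proc_name, TRUNC(completed_at) AS day, " +
                "       COUNT(*) AS cases_closed, AVG(duration_hrs) AS avg_duration " +
                "FROM case_history " +
                "GROUP BY proc_name, TRUNC(completed_at)");
        }
    }
}
```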

There was a question about data security — for example, whether a person should only be able to drill down on their own region’s data; this is handled in Spotfire by setting permissions on the queries underlying the analysis, including row-level security.
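I don’t know how Spotfire implements this internally, but conceptually the row-level restriction amounts to binding a filter from the user’s profile into the underlying query, along these lines (all names invented):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Conceptual row-level security: the query feeding the analysis is
// parameterized by the authenticated user's region, so a regional manager
// can only drill into their own region's cases. Names are invented.
public class RegionScopedQuery {
    public static ResultSet casesForUser(Connection db, String userRegion)
            throws SQLException {
        PreparedStatement ps = db.prepareStatement(
            "SELECT case_id, proc_name, status, duration_hrs " +
            "FROM mart_case_stats WHERE region = ?");
        ps.setString(1, userRegion); // bound from the user's profile, not typed by the user
        return ps.executeQuery();
    }
}
```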

iProcess Analytics is positioned for preconfigured reporting on your process data, whereas Spotfire is positioned for ad hoc analysis and integration with other data sets.

Spotfire could add huge value to iProcess data, but it appears that they don’t quite have the whole story put together yet; I’m looking forward to seeing how this progresses, some real world case studies when customers start to use it, and the reality of what you need to do to preprocess the ocean of process data before loading it into Spotfire for analysis.

TUCON: Keynote Day 2

Tom Laffey was back hosting the keynote, dressed in a cycling shirt from Team TIBCO, one of the best US women’s pro cycling teams. He was joined briefly by a member of the team who also happens to hold a Ph.D. in biology; like any geeky engineer, Laffey giggled nervously in the presence of an attractive, brainy woman in form-fitting cycling gear, although I suspect that some of the nervousness was due to the pair of cycling shorts that she was handing him to try on. 🙂

Having covered the product announcements yesterday, this morning’s keynote moved to a customer focus, starting with Simon Post, CTO of Carphone Warehouse, discussing how they improved the processes within their IT department. He made an excellent point: there is no "ERP for IT", that is, no packaged software for running an IT business, which forces large IT groups to roll their own process improvement efforts instead. They have the capability to do it, but that’s not the point: IT departments are there to provide services to the business, not to spend time building systems for themselves unless no packaged software exists or they need custom capability for a competitive advantage. Carphone Warehouse uses TIBCO products extensively for their IT processes and systems: iProcess and BusinessEvents for the process layer, BusinessWorks for system orchestration, and EMS for messaging. They haven’t stopped at IT processes, however; they’re building out their service-oriented architecture and rolling out services across the enterprise, facilitating reuse and reducing costs as they set up new locations in several countries.

I ducked out after that to review notes for my presentation, coming up at 11:30, since I want to take the time to see the Spotfire+BPM session that’s on just before mine.

TUCON: Centralized BPM Platform at HBOS

The last session of the day was a bit of a tough choice: I was thinking about heading over to see the session on in-process analytics through the integration of Spotfire and BusinessEvents, but decided in favor of hearing Richard Frost of HBOS (a UK-based financial services organization) discuss their centralized BPM platform and center of excellence strategy. Since they were created from a merger of Halifax and Bank of Scotland, and are made up of a number of brands, there’s quite a bit of vertical IT within the individual organizations. They’ve been moving some of this into shared services (what they call Group IT), including a business process layer based on TIBCO’s iProcess.

They had some significant drivers for BPM: allowing for growth while containing costs, and codifying processes and knowledge to reduce the impact of employee turnover. They also had a variety of process types to manage, from straight-through processing with integration to their existing systems, to high-touch human-centric and collaborative processes, so they needed a product that could handle both well. They deployed BPM in a number of stages:

  • Digitizing, with human workflow and case management based on scanned documents
  • Automation
  • Optimization, through automation and separation of process logic from operational systems

As they roll this out, the benefits from automation have been the most apparent and are used in future business cases, and implementation costs are expected to decrease through reusability.

Instead of each division deploying their own BPM, they are moving to a centralized platform for a number of reasons:

  • Shared processes, such as complaints handling
  • Shared platform for cost savings
  • Shared resources
  • Best practices and governance
  • Architecture simplification

On this common software and hardware platform, each division has their own unique services, processes, rules and parameters; they’re now building a common services layer that will be reusable across divisions, as well as consolidating onto the same physical hardware and software platform. They’ve had to determine ownership of each layer — whether it’s owned by the divisions, by shared services application development, or by shared services technology — as well as governance of these layers by a business-led user group, an IT-led process certification board and a joint business-IT change approvals board.

They see the business opportunity for BPM as removing the IT problems from what the business has to consider by providing a common platform, allowing the business to focus on business and process improvement. Frost showed a chart that mapped process types (simple, regular, complex) against solutions (manual work distribution and handling, imaging and workflow with minimal integration, BPM with application integration) in order to identify the key processes to consider for BPM: although the conventional wisdom is to go for the simple processes that can be fully automated with BPM and application integration, he also feels that there are huge benefits in looking at the complex processes that require a lot of human knowledge work. They also use this as a guideline both for simplifying processes and for pushing for a greater degree of automation.

In an example of one of their insurance arrears processes, they’ve removed 60% of the human effort by automating most of the steps involved, while improving both service times and consistency.

His recommendations:

  • Understand your organizational model, recognizing where you are in your process efforts and aligning your BPM and SOA strategies
  • Don’t obsess over software selection, or the divisions will just do their own thing instead of waiting for the common platform
  • It will be hard work and will take a significant amount of time — HBOS has spent two years from their first TIBCO pilot to where they are today with a shared platform
  • Reviewing and optimizing processes is crucial so that you’re automating the right processes
  • It needs a combined effort of a business push and an IT pull

An interesting message here is that although we all want 3-month delivery cycles for BPM projects, creating a shared BPM platform across multiple divisions takes a lot longer. A roadmap that allows divisional installations of the enterprise-standard platform in the interim, to be converged on the shared platform at a later date, is essential to allow progress on BPM applications within divisions while the shared platform is being developed.

TUCON: BPM Product Update

Roger King and Justin Brunt of TIBCO gave us an update of what’s happened lately with their BPM product, and what’s coming up.

In the past year, Business Studio has added a lot of new features:

  • Support for BPMN 1.0 and XPDL 2.0
  • In-line service binding and mapping, through direct connections to BusinessWorks, web services, Java, databases, email and scripts
  • Direct deployment to the iProcess engine
  • Management of models using any source control system that supports Eclipse, or using their packaged Subversion option
  • Visual Forms Editor for creating forms directly in Business Studio using platform-independent models at design time and platform-specific models for run time: General Interface now, and other platforms to follow. Forms can be created from a step definition with a default layout based on the exposed parameters, then the forms editor can be used to add other UI widgets.
  • In-line subprocesses and a number of other modeling niceties.

The iProcess Workspace (end-user browser client) has been simplified and updated using an Outlook visual paradigm, based on General Interface. This is supported on IE 6 and 7 (no mention of Firefox). It’s also possible to use GI Builder to create your own BPM client, since the components are provided for easy inclusion, allowing iProcess functionality to be embedded into web pages or as portlets, with no knowledge of the iProcess APIs.

The iProcess Suite has a number of other improvements, including generic data plugins and direct deployment from Business Studio, plus support for 64-bit Windows and SUSE Linux. There have also been repackaging and installation improvements. As we heard this morning, there’s also event-driven real-time worklist management, where a user can be alerted when something in a queue changes rather than having to poll it manually, plus updated LDAP authentication.
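To show what event-driven worklist management means for a client (this is purely illustrative — it is not the iProcess API, which I haven’t seen): instead of polling the queue on a timer, the client registers a callback and gets pushed changes as they happen:

```java
// Purely illustrative -- NOT the actual iProcess API. An event-driven
// worklist client registers a callback and is notified of queue changes,
// instead of polling the queue on a schedule.
public interface WorkQueueListener {
    void onItemAdded(String queueName, String workItemId);
    void onItemCompleted(String queueName, String workItemId);
}

class AlertingListener implements WorkQueueListener {
    @Override
    public void onItemAdded(String queueName, String workItemId) {
        // e.g., refresh the user's worklist view or raise a desktop alert
        System.out.println("New work item " + workItemId + " in " + queueName);
    }

    @Override
    public void onItemCompleted(String queueName, String workItemId) {
        System.out.println("Work item " + workItemId + " done in " + queueName);
    }
}
```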

iProcess also has a new version of its web services plugin providing improved inbound and outbound web services security (at the transport layer with SSL and digital signatures and at the message layer through signatures, encryption and tokens), plus enhanced authentication.

The big thing in my mind is that Business Studio 3.0 now contains all the key iProcess Modeler features, so that it’s no longer necessary to use iProcess Modeler as an intermediate step in moving processes from Business Studio to the iProcess execution engine: Business Studio is the new BPM IDE. At TUCON last year, I said that this definitely needed to happen, and I’m very happy to see that it has, since it represents a significant advance into full model-driven development for TIBCO’s BPM.

Their vision for BPM going forward is that the complexity of process models can be pushed down into the infrastructure, freeing the business process modeling/design tools from the technical details that have made process modeling a technical rather than a business role over the past few years. This will allow business people to do what the BPM vendors have always told us that they could do: design executable process models without having to be a technical expert. King feels that the key to this is service and data virtualization, since data is BPM’s "dirty secret": synchronizing data within a business process with other systems is one of the key drivers for having a technical person create the process models instead of a business person. Virtualizing the location, ownership, form and transport of the data means that you don’t need to worry about a business analyst doing something inappropriate with data in the course of process modeling.

The idea is that BPM suites will become model-driven composite application development and deployment platforms (wait! isn’t that what they’re supposed to be already?), with more latitude for business sandboxes and mashups for prototyping and building situational applications.

They’re working on breaking off the front end of the process engine to allow the creation of a single enterprise "work cloud" that can be used for any source of information or work coming at someone: sort of like event processing, but at a higher semantic level.

In addition to all the event-driven goodies, they’re also focused on covering the entire domain of process patterns (as in the full academic set of process patterns), so that any process could be modeled and executed using TIBCO’s BPM. We’ll also see some enhanced resource and organizational modeling, plus scheduling, capability requirements, SLAs and more models corresponding to real-world work.

TUCON: Merck’s SAP Integration Strategy

Daniel Freed of Merck discussed their SAP implementation, and how their integration strategy uses TIBCO to integrate with non-SAP systems. As with Connie Moore’s presentation this morning, the room was packed (I’m sitting on the floor, with others standing around the perimeter of the room), and I have to believe that TIBCO completely underestimated attendees’ interest in BPM, since we’re in a room that is half the size (or less) of those for some of the other streams. Of course, this presentation is really about application integration rather than BPM…

They have four main integration scenarios:

  • Master data replication (since each system expects to maintain its own data, but SAP is typically the true master data source), both event-driven publish-subscribe and batch point-to-point; see the pub-sub sketch after this list.
  • Cross-system business process, using event-driven publish-subscribe and event-driven point-to-point.
  • Analytical extraction/consolidation with batch point-to-point from operational systems to the data warehouse.
  • Business to business, with event-driven point-to-point as well as event-driven publish-subscribe and batch point-to-point.
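Since TIBCO EMS is a JMS provider, the event-driven publish-subscribe scenario can be sketched with the standard JMS API; the JNDI names, credentials and topic below are my own assumptions for illustration:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Sketch of event-driven publish-subscribe over a JMS provider such as
// TIBCO EMS: the master system publishes a master-data change once, and
// every subscribing system receives its own copy -- no point-to-point
// chains. JNDI names, credentials and topic name are assumptions.
public class MasterDataPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // jndi.properties points at the EMS server
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Topic topic = (Topic) ctx.lookup("masterdata.material.changed");

        Connection conn = factory.createConnection("app_user", "secret");
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(topic);

        TextMessage msg = session.createTextMessage("<MaterialChanged id=\"MAT-1234\"/>");
        producer.send(msg); // all subscribers (CRM, payroll, ...) get a copy
        conn.close();
    }
}
```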

They have some basic principles for integration:

  • Architect for loosely coupled connectivity, in order to increase flexibility and improve BPM; the key implications are that they needed to move from point-to-point integrations to a hub-and-spoke architecture, publish from the source to all targets rather than chaining from one system to another, and use canonical data models (illustrated after this list).
  • Leverage industry standards and best practices
  • Build and use shared services
  • Architect for "real-time business" first
  • Proactively engage the business in considering new opportunities enabled by new integration capabilities
  • Architect to insulate Merck from external complexity
  • Design for end-to-end monitoring
  • Leverage integration technology to minimize application remediation (i.e., changes to SAP) required to support integration requirements
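The canonical data model principle is easy to illustrate (this example is mine, not Merck’s, and the names are invented apart from the standard SAP customer master fields): each source system maps its native record into one shared canonical shape before publishing, so every consumer parses a single format:

```java
// Illustration of a canonical data model (my example, not Merck's):
// each source maps its native record to one shared shape before
// publishing, so consumers only ever parse a single format.
public class CanonicalCustomer {
    public final String customerId;  // globally unique, cross-referenced ID
    public final String name;
    public final String countryCode; // ISO 3166-1 alpha-2

    public CanonicalCustomer(String customerId, String name, String countryCode) {
        this.customerId = customerId;
        this.name = name;
        this.countryCode = countryCode;
    }

    // Hypothetical mapper from SAP customer master fields (KUNNR/NAME1/LAND1).
    public static CanonicalCustomer fromSap(String kunnr, String name1, String land1) {
        return new CanonicalCustomer("CUST-" + kunnr, name1, land1);
    }
}
```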

SAP, of course, isn’t just one monolithic system: Merck is using multiple SAP components (ECC, GTS, SCM, etc.) that have out-of-the-box integration provided by SAP through Process Integrator (PI), and Merck doesn’t plan to switch out PI for TIBCO. Instead, PI bridges to TIBCO’s bus, then all other applications (CRM, payroll, etc.) connect to TIBCO.

Gowri Chelliah of HCL (the TIBCO partner involved in the project) then discussed some of the common services that they developed for the Merck project, including auditing, error handling, cross-referencing, monitoring, and B2B services. He covered the error handling, monitoring, cross-reference and B2B services in more detail, showing the specific components, adapters and technologies used for each.

Freed came back up to discuss their key success factors:

  • Organizational
    • Creation of shared services
    • Leverage global sourcing model
  • Strategy
    • Integration strategy updated for SAP
    • Buy-in from business on integration strategy
  • Program management
    • High visibility into the development process
  • Process
    • Comprehensive on-boarding process for quick ramp-up
    • Factory approach to integration — de-skill certain tasks and roles to leverage less experienced and/or offshore resources
    • Thorough and well-documented unit testing
    • Blogs and wiki for knowledge dissemination and sharing within the team, since it was spread over 5 cities
  • Governance
    • Architecture team responsible for consistency and reuse
  • Architecture
    • Defined integration patterns and criteria for applicability
    • Enhanced common services and frameworks
    • Architecture defined to support multiple versions of services and canonical data models
  • Implementation
    • Development templates for integration patterns
    • Canonical data models designed early

In short, they’ve done a pretty massive integration project with SAP at the heart of their systems, and use TIBCO (and its bridge to SAP’s PI) to move towards a primarily event-driven publish-subscribe integration with all other systems.

TUCON: Architect’s Guide to SOA and BPM

I enjoyed Paul Brown’s seminar in Toronto a few weeks back, so I attended his session today on planning and architecture for SOA and BPM: how do we define the services that we need, and rationalize our data architecture, in the face of managing end-to-end processes that span functional silos? Although many organizations have systems within those functional silos, the lines of communication — both person-to-person and system-to-system — always cross those silos in any real business process.

A lot of new skills are required in order to adopt SOA and BPM across the enterprise, along with everything from high-level executive support to a worker-level understanding of how this changes day-to-day work. To make all of this work, there needs to be a total architecture perspective, including business processes, people, information and systems all coalescing around a common purpose. Business needs to re-engage with IT — in many organizations, they’ve been scared away for a long time — in order to get that business-IT collaboration happening.

Brown covered some of the same ground on separating out services, processes and presentation as he did in the seminar, which I won’t repeat here; check out the link above for more details.

He went on to discuss the TIBCO BPM/SOA execution model. First, develop the execution strategy for the entire program:

  • Develop vision and program roadmap
  • Define and implement organization and governance
  • Define and implement technical infrastructure and standards

Then, move on to solutions and operations for each project:

  • Analyze process and develop project roadmap
  • Design, build and deploy business process
  • Operate the business

This last point highlights the importance of setting and measuring goals for the project; you don’t know whether your project was successful until it’s been in operation a while and some measurements have been taken.

He had some pointers for how to get started with BPM and SOA:

  • Focus on business processes first: they’re the source of business value, and the glue that binds the people and systems together.
  • Separate service access mediation (access control, security, routing, distribution) from services; see the sketch after this list.
  • Acknowledge different types of processes, both unmanaged and managed/orchestrated.
  • Separate processes and presentation.
  • Embrace total architecture with a cross-functional architecture team
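My own reading of the mediation point, as a minimal sketch: keep access control (and routing, distribution and so on) in a thin proxy in front of the service, so the service itself stays pure business logic:

```java
import java.util.Set;

// Minimal sketch (mine, not Brown's) of separating service access
// mediation from the service: the proxy enforces access control, while
// the service implementation contains only business logic.
interface QuoteService {
    double getQuote(String productId);
}

class QuoteServiceImpl implements QuoteService {
    @Override
    public double getQuote(String productId) {
        return 42.0; // pure business logic; no security or routing concerns
    }
}

class MediatedQuoteService implements QuoteService {
    private final QuoteService target;
    private final Set<String> allowedCallers;
    private final String caller;

    MediatedQuoteService(QuoteService target, Set<String> allowedCallers, String caller) {
        this.target = target;
        this.allowedCallers = allowedCallers;
        this.caller = caller;
    }

    @Override
    public double getQuote(String productId) {
        if (!allowedCallers.contains(caller)) { // access control, outside the service
            throw new SecurityException("caller not authorized: " + caller);
        }
        return target.getQuote(productId); // routing/distribution would also live here
    }
}
```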

He finished up with some case studies of organizations that have taken an architectural approach to rolling out SOA and BPM, and how this has made IT departments much more responsive to new business requirements. One organization found that they wanted more IT involvement in business processes in order to better align the business processes with the underlying services. For services that will be used across multiple systems, it’s critical to have an enterprise architecture group review them for reusability.

His final summary: keep the focus on business processes as the source of business value; BPM and SOA provide opportunities for improving business processes; and the major challenges are organizational, not technical.

TUCON: Design for People, Build for Constant Change

Connie Moore kicked off the Process Improvement track with the Forrester message "design for people, build for change" and dynamic business applications to a packed room. Check out my coverage of her keynote from the Forrester technology leadership conference last year for some background to this theme.

She discussed how methods of working are changing to put the worker at the center, with access to their information, processes, functions and other components as required: the modern information worker decides what he needs to complete any given task. In order to accommodate this, workers need dynamic applications that provide a highly-contextual dashboard/portal interface that might include client information, a calendar of events related to that client’s data, what-if tools for financial analysis, tools such as online enrolment for selling additional products to the client, and other information that’s related to what’s happening right now, not static information.

She sees BPM as going mainstream, and dragged out the hockey-stick growth predictions that all the big analysts love; I’m still seeing a lot of niche and departmental applications of BPM and think that these growth projections may only be met if the analysts continue to change the boundaries of what is considered to be BPM.

She covered several of the reasons for deploying BPM, and walked through some best practices for getting started:

  • Start with a major process that is causing pain: there will be less resistance to change, and easier support and funding. Typically, these are customer-facing, high-volume processes with lots of steps and handoffs. I’m also a big fan of this approach, since no one ever justified enterprise-wide deployment of BPM by doing a proof of concept with managing expense reports.
  • Look for quick hits, using an incremental approach and targeting 3-month release phases. I’m also completely behind this idea, and always recommend getting something simpler into production sooner, then adding on functionality and fine-tuning processes incrementally. I’ve found that BPM implementations lend themselves particularly well to Agile methodologies.
  • Design for real-world processes by doing effective process discovery: avoid interviewing the managers and reading the out-of-date procedures documentation in favor of talking with the people who really know how the process currently works and where the pain points are that need fixing. You don’t want to get too granular here, but use some process modeling tools to sketch things out and identify subprocesses and services. I’ll be expanding on this topic tomorrow in my breakout session, Using BPM to Prioritize Service Creation.
  • Link BPM and SOA. 71% of large companies surveyed by Forrester said that SOA was very important to their BPM efforts: the availability of services is what makes it possible to create and modify processes quickly and easily.
  • Keep the financials in mind. Link projects to the line of business rather than infrastructure, and don’t burden the first project with the infrastructure cost. Measure the results and ROI to use for future project justifications. For ROI calculations, she listed conservative estimates of saving 30-50% of clerical workers’ time, and 20-35% for knowledge workers, with transaction-focused processes seeing even greater benefit.
  • Develop a competency center from the start, including a cross-functional and collocated team of developers and business analysts, strong involvement from the vendor, and judicious use of systems integrators for specific, targeted parts of the project. Forrester has seen a strong correlation between the existence of a competency center and measurable benefits in BPM projects.

She recently interviewed a financial services client of TIBCO’s, and they shared a few of their lessons learned:

  • Reengineer the process first, then pick the tool
  • Set the tools aside and focus on the process
  • Be prepared for staffing challenges
  • A competency center is critical

This was really a whirlwind tour of Forrester’s view of BPM, much too much information for a 50-minute presentation but lots of good stuff in here.

TUCON: Product Announcements, including a Messaging Appliance

I decided to break this out into a separate post although it’s all the same keynote, since this is getting a bit long and this post has all the product goodies in it, including TIBCO’s first-ever hardware release in the form of a messaging appliance.

Matt Quinn, TIBCO’s SVP of engineering and technology strategies, discussed product directions and their focus areas for 2008/2009:

  • Event-driven computing everywhere
  • Invest in neutrality and enterprise breadth
  • Continue to simplify and streamline user experience

A bit of this was covered in the GTM strategy yesterday, but he provided much more information on specific products:

  • TIBCO ONE, covering all user interfaces from design time to run time, is now under the auspices of a single user experience team. They’ll be adopting Microsoft’s Silverlight as an alternative to their own General Interface, and some functionality will soon be available on Silverlight as well as GI.
  • He announced Spotfire version 2.1 for creating interactive business mashups, and discussed the Spotfire Operations Analytics that Ahlberg showed us yesterday.
  • BusinessEvents version 3 is in the final testing stages, providing a fully distributed and fully clustered rules, CEP and streaming engine, apparently a first in the market.
  • Over the next year, a new product called ActiveSpaces will be introduced for distributed data and state management caching; it’s designed to handle massive volumes in real time.
  • iProcess version 11 is now in final testing, with real-time worklist management based on events, plus improvements to installation and LDAP support.
  • Business Studio version 3 is also being released, now fully based on Eclipse as part of the TIBCO ONE initiative, and now (wait for it) with the full functionality of the older process modeler, including Eclipse-based forms design.
  • ActiveMatrix version 2 and BusinessWorks 5.6 have been shipping since December, with improved capabilities such as multiple projects per BW engine, and AMX support for BW. In the future, the BW user interface will become part of TIBCO ONE, new testing, development and performance tuning tools will be added, and new deployment options such as clustering will be supported.

There were other announcements about enterprise messaging, managed file transfer, service performance management and CIM; it all went by pretty fast but there will be more information over the next two days.

The big finale was the announcement of a messaging appliance, which they showed on stage to spontaneous applause: a dedicated piece of hardware with Rendezvous embedded within it for incredible performance characteristics, allowing multiple services to be replaced by this single appliance. The appliance doesn’t replace existing Rendezvous installations, but is intended to work with them. They’ll be shipping the first units in September; press release here.
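For a sense of what Rendezvous messaging looks like from application code, here’s a minimal publish sketch using the tibrvj Java API as I remember it — treat the exact signatures as assumptions and check the product documentation; the subject and fields are invented:

```java
import com.tibco.tibrv.Tibrv;
import com.tibco.tibrv.TibrvMsg;
import com.tibco.tibrv.TibrvRvdTransport;

// Minimal Rendezvous publish, from memory of the tibrvj API -- treat
// signatures as assumptions. The appliance replaces daemon infrastructure
// on the network; application code like this should be unchanged.
public class RvPublish {
    public static void main(String[] args) throws Exception {
        Tibrv.open(Tibrv.IMPL_NATIVE);
        TibrvRvdTransport transport =
                new TibrvRvdTransport("7500", null, "tcp:7500"); // service, network, daemon
        TibrvMsg msg = new TibrvMsg();
        msg.setSendSubject("DEMO.PRICE.UPDATE"); // invented subject
        msg.update("SYMBOL", "TIBX");
        msg.update("PRICE", 8.25);
        transport.send(msg);
        Tibrv.close();
    }
}
```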

TUCON: Keynote

Before I start, I have to make a comment about the analyst dinner last night. I usually have a hard-and-fast rule about not blogging anything that happens when I have a drink in my hand, but I want to shout out to Heidi Bartlett for organizing the analyst summit yesterday and arranging for an amazing dinner last night. Not only were there excellent food and wines, but I sat with Christopher Ahlberg (Spotfire) and Bruce Silver, so I had excellent conversation about analytics and BPM as well.

The keynote session was hosted by Tom Laffey, EVP of products and technology — an engineer in the sea of sales and marketing people that’s typical for these events. After a brief intro, we heard from Vivek Ranadivé reprising his message from yesterday of Enterprise 1.0 being the mainframe era (1960-1980), Enterprise 2.0 being the database era (1980-2000) and Enterprise 3.0 being the event-driven era (2000-2020). Someone really needs to get the message to him, or whoever in marketing writes his stuff, that Enterprise 2.0 has a specific meaning in the current vernacular, and this isn’t it.

He does have a good message, which is that we’ve moved from having some of the information available in some places some of the time, to having all information available in real-time, on demand, wherever we want it. In reality, we’re not there yet — my bank, one of the largest in Canada, still can’t post my banking transactions to the web in less than 24 hours — but infrastructure like TIBCO’s is going to help make it happen by providing the mechanisms to tie systems together and perform complex event processing. He sees the transition from the last 10 years to the next 10 years as being from static to dynamic, from database to SOA, from ERP to BPM, and from BI to predictive business. A modern-day event-driven architecture has an event-driven service bus as the backplane, kicking up events to the "event cloud" where they can be consumed and combined by analytics for visualization and analysis, rules to determine what to do with specific combinations of events, and BPM to take action on those decisions.

Ranadivé was followed by Kris Gopalakrishnan, CEO of Infosys (a major TIBCO integration partner), who talked about the changing markets, economies and demographics, and how enterprises need to change in order to respond to and anticipate the new requirements. A rapid consumerization of enterprise IT is happening, with greater demand for richer digital experiences both by internal enterprise workers and external customers. Processes cut across systems and organizational boundaries, and need to be managed explicitly. IT systems need to be exposed as services that are aligned to the business operating model in order to allow IT to respond quickly to business needs. Analytics need to be provided to more people within an enterprise to aid decision making, and there needs to be a convergence of BPM and BI to drive business optimization. He sees that the fundamental problem of information silos still exists: a point of view that I agree with, since I see it in client organizations all the time.

We then heard the customer viewpoint on IT process management using TIBCO’s products from Bob Beauchamp, CEO of BMC Software. The theme of the conference is "building bridges", with lots of pictures of the Golden Gate bridge and other famous bridges as slide backdrops, and analogies about building bridges between systems, and he used the Golden Gate bridge in another analogy about software: the bridge cost $24 million to build, and costs $54 million per year to maintain. That analogy is especially true of custom integration software, where in many cases you either effectively rewrite it constantly to keep up with other changes in your environment, or allow it to fall into disrepair.

In particular, however, he talked about how IT processes are the last to benefit from new technologies, since IT departments are too focused on providing those technologies to (or testing them on 🙂 ) other departments within the enterprise. BMC is using TIBCO ActiveMatrix and some of their own technology to bring functions together in order to enable more efficient IT processes, including service support such as asset configuration, service automation such as auditing and compliance, and service assurance such as predictive analytics and scheduling. He sees this as transformational in how IT is managed, and believes that it will have a huge impact in the years to come.

Next up was Anthony Abbattista, VP of technology solutions at Allstate Insurance. They’re a huge company — 70,000 employees and 17 million customers — and always felt that they were unique enough that they had to build their own systems for everything, a mindset that they’re actively working to change now. With a 7×24 operation that allows customers direct access to their back office systems, they had some unique challenges in replacing their custom legacy systems and point-to-point integration with standardized reusable components that give them greater agility. They’ve completely rearchitected around data hubs, a service bus and a range of new technologies, and are taking advantage of standards to help them move into a new generation of systems and business processes.

TIBCO Analyst Summit: Partner and Channel Strategy

Dean Hidalgo, Director of Industry and Partner Marketing, discussed the partner network and how it ties into their overall strategy. As we heard in the sales strategy sessions, partnering is extremely important in certain regions and will be increasingly so as TIBCO pushes into new geographies that they can’t cover with their own people directly. The usual big system integration companies are here: HP, EDS, Infosys, Wipro, Accenture, Deloitte and CGEY to name a few.

He also discussed their vertical marketing activities, providing tools that allow the sales teams to present vertical value propositions (VVP) — matching the functionality of TIBCO’s products to the vertical business requirements — without having vertical products. Salesware, not software. TIBCO doesn’t back down from the idea that they sell infrastructure, not vertical packaged applications, although these VVPs include "frameworks" (really unsupported templates) of pre-configured rules, policies, KPIs, dashboards and processes that they throw in for a customer to use as a starting point. These VVPs are based on successful real-world implementations, like their predictive customer interaction management that is based on what was actually implemented at one of their large retail banking customers, advanced order fulfillment for telco, dynamic claims management for insurance, predictive STP for securities, point-of-sale monitoring for retail, supply chain optimization for manufacturing and retail, and disruption management for airlines and logistics.

There was a lot of discussion in the room about the value of the VVPs: some analysts felt that this didn’t go far enough, and that TIBCO needs to put out some vertical applications in order to compete, but several of us (including me) feel that this type of vertical marketing is extremely valuable by allowing an infrastructure company to sell to business people without moving out of their sweet spot. If a customer doesn’t look closely at this, however, they might think that these are supported products rather than unsupported templates. This is likely to be exacerbated by their marketing videos that refer to (for example) "TIBCO’s airline disruption management system" — if seen in isolation, or presented by a salesperson who didn’t make that distinction clear, it would be pretty easy to make that mistake.

Abhishek, AVP and head of BPM-EAI practice at Infosys, briefed us on Infosys, their primary verticals and their horizontal technology specializations. They have a significant TIBCO practice, which has allowed them to build reusable frameworks and tools that accelerate their TIBCO-based projects with clients. I’m all for reusability, but I’ve seen some pretty disastrous frameworks built on other products by other systems integrators, and I’m wary about the maintainability and weight of any third-party framework: if you’re looking at something like this, be sure to check out issues like whether it’s productized or considered custom code, the process for upgrading the underlying platform, e.g., TIBCO, and the ability to use the underlying platform features directly for design and administration. In addition to frameworks and systems integration, Infosys is also an engineering partner of TIBCO, developing and supporting various application and technology adapters.

We were supposed to finish at 4:30, but the only thing that ended at 4:30 sharp was our internet connectivity. To be fair, TIBCO provided hard-wired connectivity and power to each table in the analyst briefing room throughout the day, and they did get the internet access turned back on about 10 minutes later so that I could publish this last post before heading to the solutions showcase. The stories that I heard about the hotel’s extortionate cost for wifi don’t bode well for intraday posting the rest of the week, in spite of the BPM product marketing manager’s promise to have someone follow me around with a wireless router. 🙂