BEAParticipate: Tips and Tricks for Successful Deployment

One hour left, and 25% of my battery life. It’s a race to the finish.

Craig Cochrane from BEA’s professional services and Becky Lewis of SAIC finished off the first day with a session on the specific nature of BPM system roll-outs.

Cochrane pointed out some of the critical groundwork to cover in any BPM project: establish goals and key performance indicators, develop strategies for maximizing user adoption, select BPM projects, and prepare and train resources.

He covered several strategies for designing BPM systems, ranging from low-complexity, near out-of-the-box with direct user access to a standard inbox and a minimal amount of integration with other systems; through to fully-orchestrated situations where BPM controls the entire process, requiring significant integration. These often represent different stages in the same BPM project rather than endpoints in different projects: you can think of the low-complexity systems as early versions of what will eventually be a fully-orchestrated system.

Cochrane advocates an iterative development approach: not as extreme as Agile, but breaking development into much smaller building blocks that can be rolled out incrementally, with user feedback adjusting the requirements along the way. It’s more of a mini-waterfall approach (although that’s obviously a taboo word), involving requirements, design, implementation, testing and project management at each stage. As he went on to discuss change management, it was clear that there’s still a lot of the old-style development mindset of use cases and screen mockups at the front end. In reality, we don’t mock up screens any more: we use rapid prototyping tools to create a working prototype, or else we risk delaying development to an unacceptable degree.

Lewis then talked to us about enterprise BPM at SAIC: they have multiple systems that embody parts of business processes (some redundantly due to decentralized IT), but no enterprise-level tool to tie all of them together or enforce consistent roles. They found that the sweet spot for BPM within their organization was processes that are complex, span functional boundaries, and have multiple system interfaces. They did think big and start small: they built security and other framework components that future BPM applications would require, but began with a couple of smaller, low-risk projects. At the same time, they scoped out the high-priority (and higher-risk) projects to take on once some of the internal resources were trained and they’d had a chance to learn about BPM on the starter projects. Their first applications were training request forms plus some of the BPM framework components, and A/P invoice exception handling (now under development).

A big part of their framework vision is around the integration of ALBPM into their existing enterprise portal (built with BEA WebLogic, not ALUI), complete with single sign-on and a common look and feel, and with other technologies such as their Documentum document management system. This required the right balance: they didn’t want to customize so much that they couldn’t easily implement new versions of the core ALBPM product, but they wanted to have a consistency at the presentation layer. They removed some of the standard functionality (like creating custom views) in order to make it easier to support internally.

They also focussed on integrating their centralized Active Directory with ALBPM so that there wasn’t a duplication of effort in maintaining users, groups and roles. Interestingly enough, they created an automated ALBPM process to synchronize Active Directory into the ALBPM users and groups.
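They didn’t share implementation details of that synchronization process, but the core of any directory sync like it is a membership diff between the two stores. Here’s a minimal, purely illustrative sketch in Python; the group names and data shapes are invented, and a real implementation would read from Active Directory via LDAP and write through the BPM engine’s admin API rather than working on in-memory dicts:

```python
# Hypothetical sketch of a directory-sync step: diff Active Directory
# group membership against the BPM engine's own user/group store, then
# apply only the changes. All names and data here are invented.

def diff_membership(ad_groups, bpm_groups):
    """Return (to_add, to_remove) as {group: set_of_users} dicts."""
    to_add, to_remove = {}, {}
    for group, ad_users in ad_groups.items():
        bpm_users = bpm_groups.get(group, set())
        if ad_users - bpm_users:
            to_add[group] = ad_users - bpm_users
        if bpm_users - ad_users:
            to_remove[group] = bpm_users - ad_users
    # groups that exist only on the BPM side should be emptied out
    for group, bpm_users in bpm_groups.items():
        if group not in ad_groups and bpm_users:
            to_remove[group] = set(bpm_users)
    return to_add, to_remove

ad = {"ap-clerks": {"alice", "bob"}, "approvers": {"carol"}}
bpm = {"ap-clerks": {"alice", "dave"}}
adds, removes = diff_membership(ad, bpm)
print(adds)     # users present in AD but missing from the BPM store
print(removes)  # users the BPM store has that AD no longer does
```

Running this as a scheduled BPM process, as SAIC did, has the nice property that the sync itself is visible and auditable like any other process instance.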

A key part of their strategy was to create a BPM knowledge repository, which they did using a wiki to capture key findings, evolving standards and best practices. Although they use a template to provide some level of consistency, they found that a wiki provides much more flexibility for knowledge capture than standard document repositories.

She had some useful summary points, like the one about planning for the first project to take more time than you expect, especially if you’re trying to build part of the big-picture framework at the same time. Still, they completed their first project in three months, which is acceptably fast.

Tonight, we’re all off to the ESPN Zone for dinner and entertainment, although I’m still trying to figure out exactly what the ESPN Zone is. I realize that sports-themed extracurricular events are the price I pay for going to the type of conferences where there’s no lineup in the women’s restrooms.

BEAParticipate: Best Practices for Succeeding with BPM

I’m jumping around between tracks (and hence rooms): I started the afternoon in the ALUI Experience track, then on to the ALBPM Technical/Developer track, and now I’m in the ALBPM Experience track for a discussion of best practices for managing BPM projects with Dan Atwood of BEA (another former Fuego employee) and Karl Djernal of Citigroup. It’s a bit difficult to pick and choose sessions when you’re interested in multiple tracks: this session and the one after it in the same track are both around best practices, although they appear to cover different aspects of BPM implementations and I’d like to sit through both. This one is categorized as “beginner” and the next as “intermediate”, so I’m hoping that someone’s actually checked to ensure that there’s not too much overlap between them. I’d also like to see the next technical track on how BPM and ESB work together, but think that I can probably get a briefing on that directly from BEA as required.

Atwood started the session with seven key practices for BPM success:

  1. Fundamentals of process-based competition: understanding the competitive advantage of being a process-oriented company, and the business case for BPM.
  2. BPM and its value to the corporation: understanding what BPM is and how it differs from other management and technology approaches.
  3. From functional management to process-oriented thinking: how the shift from functional management must permeate through the ranks of middle management in order to disperse the fiefdoms within an organization.
  4. Getting hands-on BPM experience, with the help of mentors.
  5. Foundations for process practitioners: BPM as the capability for implementing and extending other management practices such as Six Sigma.
  6. Business process modelling and methods: learn about process-oriented architectures and development methods, and how they differ from traditional approaches.
  7. Human interactions and their roles within BPM: while system-to-system automation is often a BPM focus, the human-facing parts of the process are critical. In other words, you can’t think of these as being “human-interrupted” processes, as a customer of mine did long ago.

Obviously a big fan of BPM books, Atwood references Peter Fingar, Howard Smith, Andrew Spanyi, John Jeston, Mike Jacka, Paulette Kellerin and Keith Harrison-Broninski, as well as a raft of BPM-related sites (although not, unfortunately, www.column2.com). Also a fan of lists, he finishes up with his top five success factors:

  • Executive sponsorship
  • Correct scoping
  • Start with the end in mind
  • Framework
  • Engage stakeholders

Hmmm, that seems to make 12 best practices in total…

Djernal then discussed the Agile methodology that they used for BPM implementation at Citigroup, starting with a description of Agile and Scrum as the anti-waterfall approach: providing incremental deliveries based on changing, just-in-time requirements, and involving the end users closely during the development cycle to provide feedback on each iteration. Just as important as the delivery mechanisms is the Agile team structure: the team isn’t managed in the traditional sense, but works closely with the customer/end-user to create what they want. There’s a 15-minute team meeting every day, and a delivery (sprint) every 30 days. Many teams vary the sprint length slightly while sticking to the Agile methodology, although increasing it too much risks slipping back to months-long delivery cycles. Initiated by the original prioritized set of product features, the user feedback on each iteration can impact both the features and the priorities. There are basically three roles in Agile: a product owner who represents the stakeholders, the team that implements everything, and the ScrumMaster who provides mentoring on the Agile process and helps to sort out external roadblocks.

The interesting thing is how they brought together BPM and Agile, since I’m convinced that these are two things that belong together. Process diagrams fill in a lot of the documentation gap and are a naturally agile form of creating a functional specification; they form a good basis for communication between the business and IT. Changes in requirements that cause changes to the business process can be done easily in a graphical process modelling environment. In fact, in many BPM environments, the processes can be prototyped and an initial executable version developed in a matter of days without writing any code, which in turn helps to set priorities on the functions that do require coding, such as developing web services wrappers around legacy systems.
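To make “an initial executable version without writing any code” concrete, here’s a toy sketch in Python with entirely made-up structures: the process model is pure data, and a generic interpreter walks it. Real BPM engines work from a graphical model and handle state, work queues and integration, none of which this sketch attempts:

```python
# Toy sketch of an executable process prototype: the "model" is pure
# data (a dict of steps), and a generic interpreter walks it. In a real
# BPMS the model comes from a graphical designer, not hand-written data.

process = {
    "start":  {"kind": "auto",  "action": "receive request",   "next": "review"},
    "review": {"kind": "human", "action": "approve request?",  "next": "notify"},
    "notify": {"kind": "auto",  "action": "send confirmation", "next": None},
}

def run(process, entry="start"):
    """Walk the step graph from the entry point; return the trail taken."""
    trail = []
    step = entry
    while step is not None:
        node = process[step]
        trail.append((step, node["kind"], node["action"]))
        step = node["next"]
    return trail

for name, kind, action in run(process):
    print(f"{name:8s} [{kind}] {action}")
```

Because the model is data rather than code, changing the process is a matter of editing the graph, which is exactly the agility property that makes BPM and Agile such a natural fit.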

They’ve learned some things from their experiences so far:

  • Get training on using the BPM products, and on BPM in general.
  • Use some external resources (like me) to help you get started.
  • Since BPM involves integration, setting up the development, testing and production environments can be time-consuming and require specialized resources.
  • Spend some time up front putting together a good test environment, including automated testing tools.
  • Create a centre of excellence for BPM.
  • Start something small for your first BPM project.

There are a lot of arguments about how Agile can’t really handle large-scale development projects, but it’s my belief that most BPM projects lend themselves well to Agile. The worst disasters that I’ve seen in BPM implementation have been the product of the most non-Agile development processes imaginable, with months of requirements writing followed by many more months of development, all of which resulted in something that didn’t match the users’ requirements and was much too costly to change. As I’ve said many times before, if you can’t get something up and running in BPM within a couple of months, then you’re doing something really wrong.

BEAParticipate: Using SOA Technologies with BPM

Mariano Benitez of BEA (part of the original Fuego team that built what is now ALBPM) and Bhaskar Rayavaram of Bear Stearns (who was with Fuego before joining Bear Stearns) presented a unified view of BPM and SOA.

Benitez started with some pretty basic stuff about how BPM consumes services, either system-level or presentation-level, and how services can be introspected for easy integration. He then discussed ALBPM as producing services, that is, creating services that can be consumed by other applications. This was much more interesting and comprehensive, but overly dense with jargon and acronyms, and obviously dependent on us having attended the session immediately prior in that track (which I didn’t). There are a number of mechanisms for producing services using ALBPM:

  • Web service front-end to a small set of process API (PAPI) functionality, such as instantiating processes, that’s part of Workspace; it appears that all PAPI-based web services use a common WSDL that exposes the methods of PAPI.
  • Process web services, which are similar to the PAPI web services in functionality, but are implemented in the execution engine rather than Workspace. This can only be used to create instances and send notifications, but is designed as part of the process and provides a unique WSDL for each process.
  • Extended web services, which provide a component-level service; obviously I’m missing some key piece of information because I really have no idea what he’s talking about here. 🙂
  • HTML API framework (formerly WAPI), which allows for the creation of simple HTML forms that can be called as services in order to call Workspace operations.
  • JSR168 portlets, to provide portlet functionality to render Workspace operations.
  • And if you really want to beat yourself up, you can create plain Java wrappers for PAPI in order to create custom services, or JMS for asynchronous services.
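For a rough idea of what the PAPI-style web services look like on the wire, here’s a sketch that builds a SOAP request to instantiate a process. The operation name, namespace and argument structure are all invented for illustration; the real names come from whatever WSDL ALBPM generates:

```python
# Hypothetical sketch of a "create instance" call to a PAPI-style web
# service: a generic SOAP envelope naming the target process and its
# input arguments. The operation and API namespace are made up.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
API_NS = "urn:example:papi"  # placeholder, not the real ALBPM namespace

def create_instance_request(process_id, arguments):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{API_NS}}}createInstance")
    ET.SubElement(op, f"{{{API_NS}}}processId").text = process_id
    args = ET.SubElement(op, f"{{{API_NS}}}arguments")
    for name, value in arguments.items():
        arg = ET.SubElement(args, f"{{{API_NS}}}argument", name=name)
        arg.text = str(value)
    return ET.tostring(env, encoding="unicode")

request = create_instance_request("/InvoiceExceptions",
                                  {"invoiceId": "INV-1042"})
print(request)
```

The distinction Benitez drew then comes down to where this envelope is handled: against a shared WSDL in Workspace for the common PAPI services, or against a per-process WSDL in the execution engine for process web services.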

All of this reinforces my impression that BEA’s BPM product focus is still too much on hard-core developers — the same ones that are writing services at the SOA level — and not enough on the business side. Thinking back to this morning’s presentation by PG&E, Abrahamson placed BPM on the IT side of the house, with a process modelling layer as the business side’s participation point. Whatever happened to that lovely zero-code BPM that I saw in Fuego?

Rayavaram talked about how Bear Stearns is using BPM in an SOA environment: how processes identify candidates for service enablement, rather than implementing services then looking for processes that might use them. They’re also accessing Fair Isaac’s Blaze business rules management system via web services calls from the processes. They have a loose coupling of processes and services, with services deployed separately now but with a view to migrating to an ESB and a full event-driven architecture.

BEAParticipate: BPM 101 for Portals

For the first breakout session, I attended BPM 101 for Portals to hear Jesper Joergensen of BEA’s product marketing group and Bob O’Connor of Pratt & Whitney. Jesper started out by giving a brief review of BPM (the usual model/execute/analyze/optimize cycle), since this session is in the portals track and most of the audience is likely much more familiar with portals than with BPM. However, since the description claims that he’s also going to discuss how process and portals can work together, I want to hear their message on this since I’ll be speaking about BPM at a portals conference in two weeks.

O’Connor then told us about how Pratt & Whitney is using portal technology and — soon — ALBPM. They’ve had a customer portal since 2001, but had a lot of business processes that didn’t mesh together very well. In 2002, they added SOA functionality that allowed data to be pulled from multiple systems and presented to the customer, such as all maintenance information for a specific engine based on the serial number. In spite of their advances in their customer portal, however, they still had a number of disparate departments with their own business processes, and no real end-to-end enterprise view of processes. That means that lag time between the separate processes wasn’t necessarily logged as part of the end-to-end cycle time for an engine overhaul, for example, but definitely impacted the customer. Since it was between processes, that time was no one’s responsibility until they started looking at business processes as they span the enterprise, not just within functional silos. Today, they’re doing “manual BPM” for collaboration around engine overhauls, where thousands of process steps and approvals are logged and uploaded so that customers have a near-real-time view of the overhaul process.

For the past year, they’ve been working with ALBPM (although they’re just starting to roll out BPM applications), and see great potential value from combining ALUI and ALBPM to automate the processes using BPM and provide the necessary visibility into those processes via portals. Their initial processes include line maintenance order-to-cash (where any delays in the process severely impact the customer), quality process clinic management, help center routing, overhaul records coordination, employee awards, engine events management, engine wash, and shop processes. Some of these smaller processes took only a day or two to create in ALBPM, while their internal IT had quoted several months and several hundred thousand dollars to do the same thing. They’re pulling data from SAP and other enterprise applications into ALBPM at the start of the process, then feeding back any updates at the end; I would have thought that they’d use web services for at least SAP in order to do interactive updates rather than have to deal with the potential for mis-synchronization between BPM and the back-end systems.
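That mis-synchronization risk can be sketched in a few lines: data snapshotted from a back-end system at process start can go stale before the write-back at process end, and an optimistic version check at least detects the conflict rather than silently overwriting it. Everything here is illustrative (an in-memory dict stands in for the back-end system); real SAP integration would go through its own APIs:

```python
# Sketch of the copy-in/copy-out risk: a record snapshotted at process
# start may be changed in the back end before the process writes back.
# A version (optimistic concurrency) check detects the conflict.
# The "backend" dict is a stand-in for a real system of record.

backend = {"order-7": {"version": 3, "status": "open"}}

def snapshot(key):
    """Copy a record out of the back end at process start."""
    rec = backend[key]
    return {"key": key, "version": rec["version"], "data": dict(rec)}

def write_back(snap, new_status):
    """Write results at process end, refusing if the record moved on."""
    rec = backend[snap["key"]]
    if rec["version"] != snap["version"]:
        return False  # conflict: someone changed the record mid-process
    rec["status"] = new_status
    rec["version"] += 1
    return True

snap = snapshot("order-7")
backend["order-7"]["version"] = 4   # concurrent change in the back end
print(write_back(snap, "closed"))   # False: conflict detected, not lost
```

Interactive web service updates during the process avoid the long window between snapshot and write-back entirely, which is why I would have expected that approach for SAP.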

They’re doing some pretty innovative combinations of technologies to shorten maintenance cycle times, for example, RFID and other sensors to detect any engine problems while a plane is still in the air allow dispatching of maintenance personnel to be at the site when the plane lands. The time to service the engine may be the same, but the down-time for the aircraft is greatly reduced, which shows a commitment to their customers’ concerns.

O’Connor, as a BPM department of one person, is part evangelist and part BPM developer (without having much of an IT background), helping to figure out how BPM can be used across Pratt & Whitney and help implement the solutions.

Although this presentation was really about BPM, I can understand why it’s in the portals track: since Pratt & Whitney was a big portals customer first, this shows how you can successfully add BPM to a portal environment.

BEAParticipate: Product updates

The general session continues with some BEA product and services information from Shane Pearson, VP of Marketing and Product Management, particularly what’s been done in the past year:

  • In AquaLogic User Interaction, there are some new integrations, improved usability, and greater platform support. They also have some new solution suites; since they’re listed under services, I’m not sure how productized these are, or whether they have to come as a professional services offering.
  • In ALBPM, they’ve re-branded and internationalized the Fuego product, and enhanced its integration with other BEA products such as the service bus. They’ve enhanced both the business and developer tools. As with the ALUI products, they now offer strategy workshops, and there are a number of BPM-specific educational offerings such as BPM lifecycle assessment, some of which are available online.
  • In Enterprise Social Computing, the release of the new AL Pages, Ensemble and Pathways products. They also offer management consulting in this area (yes, the phrase “new paradigms” was used); I see this sort of consulting as a huge growth area in the Enterprise 2.0 space, but not necessarily one that can be addressed by product vendors.

Great quote from the presentation: “People use enterprise systems where they provide value, but otherwise work outside enterprise systems for day-to-day work.” For years, I’ve been going into customers and pinpointing deficiencies in their enterprise systems (and usually, therefore, in their business processes) by finding out where they use Excel, Access and paper log files: these are the mechanisms that they create for when the enterprise systems don’t provide sufficient value or actually hinder the process. Or, as was stated as the dilemma of the information worker on a later slide, “Our ability to create information has outstripped our ability to easily and accurately use this information in the context of business”.

They see their three main product foci as enterprise social computing, BPM and SOA, with impacts on people and participation as well as technology. The product portfolio breaks down as follows:

  • Social computing: AquaLogic Pages, AquaLogic Ensemble and AquaLogic Pathways
  • Activity servers: AquaLogic Commerce Services, AquaLogic Interaction Analytics, AquaLogic Interaction Collaboration, AquaLogic Interaction Publisher and AquaLogic Interaction Search
  • Interaction servers: AquaLogic Interaction and WebLogic Mobility Server
  • BPM: AquaLogic BPM (which is a suite including modelling and analytics in addition to the execution engine)

There was a comment about BPM standards such as BPMN and XPDL being supported in the next version — this is something that I’ll want to drill into during a more detailed session or demo. They’re also adding RSS enablement, so that work lists can be consumed with any feed reader tool.
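RSS-enabling a work list is conceptually simple: serialize the user’s task queue as an RSS 2.0 feed that any reader can poll. This sketch shows the shape of it, with invented task fields and URLs, and no claim that this is how ALBPM actually does it:

```python
# Illustrative sketch of exposing a BPM work list as an RSS 2.0 feed:
# each pending task becomes one <item>. Task fields and URLs are
# invented for illustration.
import xml.etree.ElementTree as ET

def worklist_to_rss(tasks, base_url="http://example.com/workspace"):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "My work list"
    ET.SubElement(channel, "link").text = base_url
    ET.SubElement(channel, "description").text = "Pending BPM tasks"
    for task in tasks:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = task["activity"]
        ET.SubElement(item, "link").text = f"{base_url}/task/{task['id']}"
        ET.SubElement(item, "guid").text = str(task["id"])
    return ET.tostring(rss, encoding="unicode")

feed = worklist_to_rss([{"id": 101, "activity": "Approve invoice INV-1042"}])
print(feed)
```

The appeal is that work items then show up wherever the user already lives (a feed reader, a portal) instead of requiring them to log into yet another inbox.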

The focus at the end of the presentation came back to social computing — as I mentioned earlier, BEA is obviously betting a lot on this new market segment. I’ve been reading and writing so much about these technologies that much of this is old hat, but it’s likely pretty new for much of the audience, or at least its application within the enterprise is a new concept. “Users at the center”, “Poised to transform the enterprise”: all the right buzz phrases in place. There were some interesting stats about the use of social computing within organizations, some of which I find hard to believe: 15% using internal blogs? Also, Pearson has a pretty slim grasp of aggregate statistics, since he added up the percentages of organizations using enterprise blogs, wikis, bookmarking and so on, and stated that 80% of organizations are using social computing. Um, maybe not. The 15% who are using internal blogs almost certainly have nearly 100% overlap with the 18% who are using internal bookmarking. The stats shown for consumer social networking participation also look high compared to what I’ve seen recently: 13% of people who are online are creating web pages, blogs and YouTube videos?

There’s a not-surprising chart about age demographics and social networking: I’m a “young boomer”, apparently, and only 12% of us in this age category create content on the web, so I’m obviously an outlier with three blogs, 3000 Flickr photos, a few videos on YouTube, and hundreds of bookmarks on del.icio.us.

It’s obvious that the MySpace generation is driving much of the content creation and, if they ever get jobs, will be the ones forcing the adoption of social computing within the enterprise.

BEAParticipate: Brian Abrahamson

Last up before the morning break was Brian Abrahamson, Director of Enterprise Architecture at PG&E; although I’ve been interested in the portal presentations prior to this, I was relieved to finally get some BPM/SOA content. They started on a huge business transformation strategy two years ago due to various factors such as deregulation and changing legislation that are impacting the competitive landscape in the utility industry, forcing them to become more competitive. Price is certainly a point of competition, but they also have customer service issues such as managing unexpected outages (e.g., what Abrahamson referred to as a “car-pole incident”), installing new residential service, and managing regular maintenance and work orders.

They made an explicit decision to create an SOA layer that would leverage their SAP and Oracle systems in order to provide a more agile development environment. They’ve been using EAI technologies for a number of years to create integration between enterprise applications, but most of the business processes were embedded in these applications rather than being explicitly defined and executed. Their current direction, therefore, is moving from application-centric to process-centric by allowing the construction of composite applications and business processes from the services provided by the enterprise applications. They consider BPM to be a strategic enabler of their future vision.

What they’ve done so far is to expose services from the enterprise applications, and use ALSB as an enterprise service bus. ALBPM then allows those services to be used, via the bus, to create executable business processes. As soon as they started exposing BPM to their internal clients, however, there was an immediate demand for modelling, simulation and analytics; now, they’re planning for a business process modelling layer that allows their business analysts to do all of these with some type of more comprehensive BPA tool, with round-tripping as a key requirement. Above all of these layers is a process architecture and governance layer that, like the modelling layer below it, is business-driven, whereas they see the BPM, ESB and SOA layers as being IT’s domain.

They have realized a couple of key points: from the IT side, SOA provides a service layer that greatly expedites BPM; from the business side, cross-departmental process optimization is key to future growth. They have a business process competency centre that does mainly paper-based and manual modelling and analysis, which is a big driver for getting the business process modelling layer in place in their BPM stack.

They learned some valuable lessons along the way: put SOA principles and practices in place early; get executive sponsorship of BPM initiatives; business process modelling, management and governance is more of a business issue than a technology tool issue; and lastly, the market is still maturing and requires partnering with some key technology partners.

BEAParticipate: Mark Carges

Day 1 of the BEA user conference in Atlanta, and we start out with a morning of general sessions hosted by Ira Pollack, SVP Sales at BEA; the remainder of the 2-1/2 day conference is all breakout sessions. There’s wifi around but I seem to be missing the conference code necessary to get logged on, so posts will be delayed throughout the conference as I’ll be gathering them up to publish at times when I can get internet access. There’s also not a power source in sight, which could mean that the last parts of this are really delayed as I transcribe them from paper. 🙁

BEAParticipate is a new user conference dedicated to portals, BPM and social computing, with tracks for business and developer-focussed audiences. BEA only came onto my radar with its acquisition of Fuego a year or so ago, so I’m not sure what they had in terms of user/developer conferences prior to (or in addition to) this, although I talked last night with a web developer who has been a Plumtree customer for years and transitioned from the Plumtree conference when it was rolled into this one.

We started out with Mark Carges, EVP of BEA (who many years ago helped develop the source code for Tuxedo), with a high-level vision of how these technologies can create new types of agile applications, and how BEA is delivering BPM, SOA and enterprise social computing (Enterprise 2.0). He talked about the difference between traditional and situational applications, the top-most point of which is that traditional ones are built for permanence whereas situational ones are built for change: exactly the point that I made last week in my talk at TUCON. He covered other comparative points, such as tightly- versus loosely-coupled, non-collaborative versus collaborative, homogeneous vertical integration in application silos versus heterogeneous horizontal integration, and application-driven versus business process-driven.

He walked us through a few examples of their customers’ portal applications — purely intranet, customer-facing, and public — and one example of BPM in a customer, before moving on to talk about BEA’s strategy and product development, particularly in Enterprise 2.0. He made the point that enterprise applications are having to learn from the consumer-facing Web 2.0 applications by allowing for different types and degrees of user participation. Instead of just listing consumer Web 2.0 applications, however, Carges made analogies with how the same sort of technology could be used inside an enterprise: Digg-like ranking used for ranking sales tools internally; social bookmarking and implicit connections for internal expert knowledge discovery (much like what IBM is doing with Dogear, which I’m sure that they’ll turn into a commercial product once companies like BEA prove the market for it); mashups for creating a single view of a customer from multiple sources including product, support incidents and account information; and wikis to capture competitive intelligence. This is where their new product suite fits: AquaLogic Pages (to create pages, blogs and wikis), Ensemble (for developers to create mashups) and Pathways (for tagging and bookmarking). All of these mesh with IT governance such as security and versioning, but the content isn’t controlled by IT.

Interesting that the focus of his talk has really been on their new Enterprise 2.0 products rather than portals or BPM; they obviously see this as a strong potential growth area.

Transformation & Innovation conference coming up this month

The Transformation & Innovation conference is running in Washington DC on May 21-24, with several sessions on BPM.

I won’t be there; the dates are sandwiched in between a vacation trip to Nova Scotia and a presentation at the Shared Insights Portals & Collaboration conference in Las Vegas.

TUCON: The Face of BPM

Thursday morning, and it seems like a few of us survived last night’s baseball game (and the after-parties) to make it here for the first session of the day. This will be my last session of the conference, since I have a noon flight in order to get back to Toronto tonight.

Tim Stephenson and Mark Elder from TIBCO talked about Business Studio, carrying on from Tim’s somewhat shortened bit on Business Studio on Tuesday when I took up too much of our joint presentation time. The vision for the new release coming this quarter is that one tool can be used by business analysts, graphical tools developers and operational administrators by allowing for different perspectives, or personas. There are nine key functions, from business process analysis and modelling to WYSIWYG forms design to service implementation.

The idea of the personas within the product is similar to what I’ve seen in the modelling tools of other BPMS vendors: each persona has a different set of functions available and some different views onto the process being modelled. Tim gave some great insight into how they considered the motivations and requirements of each of the types of people that might use the product in order to develop the personas, and showed how they mapped out the user experience flow with the personas overlaid to show the interfaces and overlaps in functionality. This shows very clearly the overlap between the business analyst and developer functionality, which is intentional: who does what in the overlap depends on the skills of the particular people involved.

As we heard in prior sessions, Business Studio provides process modelling using BPMN, plus concept modelling (business domain data modelling) using UML to complement the process model. There’s a strong focus on how BPM can consume web services and BusinessWorks services, because much of the audience is likely developers who use TIBCO’s other products like BusinessWorks to create service wrappers around legacy applications. At one point between sessions yesterday, I had an attendee approach me and thank me for the point that I made in my presentation on Tuesday about how BPM is the killer app for SOA (a point that I stole outright from Ismael Ghalimi — thanks, Ismael!), because it helped him to understand how BPM creates the ROI for SOA: without a consumer of services, the services themselves are difficult to justify.

We saw a (canned) demo of how to create a simple process flow that called a number of services that included a human-facing step, a database call to a stored procedure, a web service call based on introspecting the WSDL and performing some data mapping/transformation, a script task that uses JavaScript to perform some parameter manipulation, and an email task that allows the runtime process instance parameters to be mapped to the email fields. Then, the process definition is exported to XPDL, and imported into the iProcess Modeler in order to get it into the repository that’s shared with the execution engine. Once that’s done, the process is executable: it can be started using the standard interface (which is built in General Interface), and the human-facing steps have a basic form UI auto-generated.
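The XPDL export step is just a serialization of the process model into the WfMC’s XML format so that another tool can import it. This fragment sketches what a minimal XPDL package might look like; the element names follow the XPDL 1.0 schema in spirit, but it’s illustrative rather than a complete, validated document, and the namespace should be checked against the XPDL version you actually target:

```python
# Illustrative sketch of serializing a process definition as XPDL so it
# can be handed off to another tool (as in the Studio-to-Modeler demo).
# Element names are in the spirit of the WfMC schema; this is not a
# complete or validated XPDL document.
import xml.etree.ElementTree as ET

XPDL_NS = "http://www.wfmc.org/2002/XPDL1.0"  # XPDL 1.0; later versions differ
ET.register_namespace("", XPDL_NS)

def to_xpdl(package_id, process_id, activities):
    pkg = ET.Element(f"{{{XPDL_NS}}}Package", Id=package_id)
    procs = ET.SubElement(pkg, f"{{{XPDL_NS}}}WorkflowProcesses")
    proc = ET.SubElement(procs, f"{{{XPDL_NS}}}WorkflowProcess", Id=process_id)
    acts = ET.SubElement(proc, f"{{{XPDL_NS}}}Activities")
    for act_id, name in activities:
        ET.SubElement(acts, f"{{{XPDL_NS}}}Activity", Id=act_id, Name=name)
    return ET.tostring(pkg, encoding="unicode")

xpdl = to_xpdl("demo-pkg", "simple-flow",
               [("a1", "Human review"), ("a2", "Call web service"),
                ("a3", "Send email")])
print(xpdl)
```

The catch, as the demo showed, is that export/import is one-way unless both tools share a model: anything the receiving tool changes has to be round-tripped back somehow.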

It is possible to generate an HTML document that describes a process definition, including a graphical view of the process map and tabular representations of the process description.

As I’ve mentioned in other posts, and in many posts about BPA tools, there’s no shared model between the process modeller and the execution engine, which is a serious issue for process agility and round-tripping unless you do absolutely nothing to the process in the iProcess Modeler except use it as a portal to the execution repository. TIBCO has brought a lot (although not all) of the functionality of the Modeler into Studio, and is working towards a shared model between analysts and developers; they believe that they can remove the need for Modeler altogether over time. There’s no support at this time, however, for deploying directly from Studio, that is, Studio won’t plug directly into the execution engine environment. Other vendors who have gone the route of a downloadable disconnected process modeller or a separate process discovery tool are dealing with the same issue; ultimately, they all need to make this new generation of modelling tools as integrated with the execution environment as those that they’re replacing in order to eliminate the requirement for round-tripping.

TUCON: Architecting for Success

In an afternoon breakout session, Larry Tubbs from AmeriCredit talked about using TIBCO to automate their contract processing workflow, that is, the part between loan origination/approval and the contract administration system. Their business case was similar to what I’ve seen in many other financial and insurance applications: visibility into the processes, appropriate management of the resources, and ever more stringent regulatory requirements. They did a product evaluation and selected TIBCO, using iProcess Suite, BusinessFactor, BusinessWorks, and EMS as the underlying service bus. They implemented really quickly: for their initial release, it was a matter of months from initial design to rollout to five branches handling 1600 cases simultaneously (the system is designed for a peak load of 7000 cases).

Nimish Rawal from TIBCO, who was involved in the implementation, described some details of what they did and the best practices that they used: use iProcess engine for process orchestration and BusinessWorks for integration; put application data in a separate schema (they had 583 instance data fields and 257 metadata fields); create a queue/group structure according to business divisions; and allow the business to control the rules to allow for easy changes to the process flow or any changing regulations. They used master and slave iProcess servers hitting against a common database to distribute the load, and used clustering for high availability although the failover process is not automatic (which surprised me a bit since clustering software or hardware can automate this). They also planned for disaster recovery by distributing nodes between two physical locations and sending archive files from the master to DR site about once every five minutes; again, the failover is not automatic, but that’s less expected in the case of a total site loss.

Rawal also went through the TIBCO professional services engagement model. On the AmeriCredit side, they had four core developers to work with the TIBCO team (which went from five to seven to two), and now the TIBCO people only do mentoring with all development being done by AmeriCredit’s developers.