Ultimus: V8 Technical Deep Dive

Chris Adams is back for a somewhat longer session — I think that he zipped through the previous overview session in about 5 minutes to make up time on the schedule — to give us a lot more detail on the V8 product features. Some of this will only be of interest to Ultimus customers, but I find that it gives some good insight into how the product works and the directions that they’re taking.


First, he discussed what’s already in the released 8.x product:

  • Flobot connectors are now reusable. “Flobots” are the Ultimus connectors to other systems, with about 10 types available out of the box including web services calls (and I now have a very cool Flobot USB key); previously, you had to reconfigure each connector for every use. For example, for the email connector, you had to set up all parameters (ports, authentication, etc.) each place it was used in the process, and change it whenever there was a change to, for example, the recipient. Now, they allow for a reusable connector with some or all of the parameters predefined, so that it can be more easily used in the process.
  • XML data storage replaces the V7 spreadsheet data structure that was previously used (which previously limited each data element to 255 characters, a limit that I sense from the audience was a sore point). My first reaction was “you used to keep your process instance data in a spreadsheet?”; sometimes you only find out about weirdnesses in a product when you hear about their upgrade out of that state.
  • A new Ultimus rules engine replaces event conditions, with a graphical representation of the rules. Rules actions can be related to steps in the process, or call .Net code or web services. Previously, the event conditions were kept in the spreadsheet data structure, and you had to reference the spreadsheet cell address rather than a schema variable name within rules. Now, you can add rules to processes directly in-line using the process parameters in the rule definitions.
  • Native ActiveDirectory support, so that you can (for example) assign a step to a group that exists in AD. You can still use their org chart functionality to create groups directly in Ultimus.
  • Attachments to process instances have been moved off the BPM server, and into SharePoint. You can use another content repository, but they do SharePoint out of the box and feel that it’s the best integrated solution.
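The move from 255-character spreadsheet cells to XML storage means process data can be addressed by schema variable name rather than by cell address, and values are no longer length-capped. A minimal sketch of what that looks like (the element names here are hypothetical, not Ultimus's actual schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical process-instance data as XML: fields are addressed by
# schema variable name, not by a spreadsheet cell address like "B7".
instance_xml = """
<ProcessInstance id="INC-1042">
  <Customer>
    <Name>Acme Corp</Name>
    <Notes>{}</Notes>
  </Customer>
</ProcessInstance>
""".format("x" * 300)  # values are no longer capped at 255 characters

root = ET.fromstring(instance_xml)

# A rule definition can now reference data by name/path instead of cell address.
name = root.findtext("Customer/Name")
notes = root.findtext("Customer/Notes")
print(name, len(notes))
```

This is also what makes the new rules engine's in-line references to process parameters possible: a rule can name `Customer/Name` directly instead of pointing at a spreadsheet cell.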

Coming up in 8.2 in December:

  • BPMN support, although you can still convert back and forth to the Ultimus shapes if you’re more familiar with them. He showed a screenshot that looked pretty rudimentary, but it’s not released yet so I’ll reserve judgement until I see the final version.
  • Increased visibility into process incident history, to be able to step through exactly what happened in any particular process instance, including which rules fired; you can actually play back the history of an instance step by step.
  • Enhanced development environment by adding Ultimus awareness to Microsoft Visual Studio for a single environment.
  • Fully exposed APIs, that is, access to the same APIs that the out of the box system is built on to allow you to build the same functionality into your own custom applications, with any function that you see in a pop-up menu also available through an API.

He showed us some architecture diagrams showing their new open architecture, including the client services for building custom client applications, BI services for custom reporting applications, and Flobots for external connectors.

Forrester Integration-Centric BPM report available

Forrester has released the 2008 version of their Wave report on integration-centric BPM suites; you can find it on Vitria’s site here (registration required).

I won’t reproduce the chart here since that always seems to get me in trouble, but suffice it to say that Software AG (the former webMethods product), IBM (various WebSphere bits), Vitria, TIBCO (ActiveMatrix), Oracle (SOA Suite; the BEA products were not evaluated due to timing relative to the acquisition), SAP (NetWeaver) and Cordys should all be very happy.

Business Process Driven SOA using BPMN and BPEL

I just received a review copy of Matjaz Juric and Kapil Pant’s new book, Business Process Driven SOA using BPMN and BPEL. It’s on my list of recent books that I’ve received to review, and I hope to get to it soon.

According to the authors’ description, you’ll learn the following from this book:

  • Modeling business processes in an SOA-compliant way
  • A detailed understanding of BPMN standard for business process modeling and analysis
  • Automatically translating BPMN into BPEL
  • Executing business processes on SOA platforms
  • Overcome the semantic gap between process models and their execution, and follow the closed-loop business process management life cycle
  • Understand technologies complementary to BPM and SOA such as Business Rules Management and Business Activity Monitoring

I’ll let you know if I learned all of that once I’ve had a chance to read it.

IBM to acquire ILOG

IBM and ILOG announced today that IBM will be acquiring ILOG for €10/share, or about $US340 million in total.

IBM’s goal is to integrate ILOG’s business rules technology into their existing BPM and SOA offerings:

When completed, the acquisition of ILOG will strengthen IBM’s BPM and SOA position by providing customers a full set of rule management tools for complete information and application lifecycle management across a comprehensive platform including IBM’s leading WebSphere application development and management platform.

The funny part is that the IBM press release takes two paragraphs to explain what BPM is and how business rules are used in the context of BPM, indicating just how niche these technologies still are in the broader business scope.

This may not be good news for ILOG’s other BPM partners; one less independent BRMS company means less choice when it comes to putting your processes and rules together.

The people part of SOA

I was going to just link to Mike Kavis’ post on the Top 10 Reasons Why People Are Making SOA Fail, but I wanted to add some of my own comments. By the way, he’s talking primarily about IT people, not business people, in the fail part of the equation.

Number 1 reason: they fail to explain SOA’s business value. Kavis recommends (and I completely agree) starting with business problems first, specifically using BPM as the “killer app” to justify the existence of SOA.

He continues with a number of cultural and organizational issues, such as change management and executive sponsorship, then discusses a few of the flat-out IT failure points: not having the skills to actually do SOA (and not getting the outside help required), trying to do it on the cheap, thinking of SOA as a one-time implementation project rather than an ongoing architecture, and neglecting SOA governance.

His final reason for failure is allowing the vendors to drive the architecture:

[T]he vendors promise flawless integration if you purchase all of your tools within their stack. The reality is, they have purchased so many products from other companies that their stacks do not deliver any better integration than if you bought the tools from a variety of vendors.

In the face of recent acquisitions, this could not be more accurate.

Oracle BEA Strategy Briefing

Not only did Oracle schedule this briefing on Canada Day, the biggest holiday in Canada, but they forced me to download the Real Player plug-in in order to participate. The good part, however, is that it was full streaming audio and video alongside the slides.

Charles Phillips, Oracle President, kicked off with a welcome and some background on Oracle, including their focus on database, middleware and applications, and how middleware is the fastest-growing of these three product pillars. He described how Oracle Fusion middleware is used both by their own applications as well as ISVs and customers implementing their own SOA initiatives.

He outlined their rationale for acquiring BEA: complementary products and architecture, internal expertise, strategic markets such as Asia, and the partner and channel ecosystem. He stated that they will continue to support BEA products under the existing support lifetimes, with no forced migration policies to move off of BEA platforms. They now consider themselves #1 in the middleware market in terms of both size and technology leadership, and Phillips gave a gentle slam to IBM for over-inflating their middleware market size by including everything but the kitchen sink in what they consider to be middleware.

The BEA developer and architect online communities will be merged into the Oracle Technology Network: Dev2Dev will be merged into the Oracle Java Developer community, and Arch2Arch will be broadened to the Oracle community.

Retaining all the BEA development centers, they now have 4,500 middleware developers; most BEA sales, consulting and support staff were also retained and integrated into the Fusion middleware teams.

Next up was Thomas Kurian, SVP of Product Development for Fusion Middleware and BEA product directions, with a more detailed view of the Oracle middleware products and strategy. Their basic philosophy for middleware is that it’s a unified suite rather than a collection of disjoint products, it’s modular from a purchasing and deployment standpoint, and it’s standards-based and open. He started to talk about applications enabled by their products, unifying SOA, process management, business intelligence, content management and Enterprise 2.0.

They’ve grouped middleware products into three categories on their product roadmap (which I have reproduced here directly from Kurian’s slide):

  • Strategic products
    • BEA products being adopted immediately with limited re-design into Oracle Fusion middleware
    • No corresponding Oracle products exist in majority of cases
    • Corresponding Oracle products converge with BEA products with rapid integration over 12-18 months
  • Continue and converge products
    • BEA products being incrementally re-designed to integrate with Oracle Fusion middleware
    • Gradual integration with existing Oracle Fusion middleware technology to broaden features with automated upgrades
    • Continue development and maintenance for at least 9 years
  • Maintenance products
    • Products that BEA had end-of-lifed due to limited adoption prior to the Oracle acquisition
    • Continued maintenance with appropriate fixes for 5 years

For the “continue and converge” category, that is, of course, a bit different than “no forced migration”, but this is to be expected. My issue is with the overlap between the “strategic” category, which can include a convergence of an Oracle and a BEA product, and the “continue and converge” category, which includes products that will be converged into another product. When is a converged product considered “strategic” rather than “continue and converge”? Or is this just the spin they’re putting on things so as not to freak out BEA customers who have invested heavily in a BEA product that is going to be converged into an existing Oracle product?

He went on to discuss how each individual Oracle and BEA product would be handled under this categorization. I’ve skipped the parts on development tools, transaction processing, identity management, systems management and service delivery, and gone right to their plans for the Service-Oriented Architecture products:

Oracle SOA product strategy

  • Strategic:
    • Oracle Data Integrator for data integration and batch ETL
    • Oracle Service Bus, which unifies AquaLogic Service Bus and Oracle Enterprise Service Bus
    • Oracle BPEL Process Manager for service orchestration and composite application infrastructure
    • Oracle Complex Event Processor for in-memory event computation, integrated with WebLogic Event Server
    • Oracle Business Activity Monitoring for dashboards to monitor business events and business process KPIs
  • Continue and converge:
    • BEA WL-Integration will be converged with the Oracle BPEL Process Manager
  • Maintenance:
    • BEA Cyclone
    • BEA RFID Server

Note that the Oracle Service Bus is in the “strategic” category, but is a convergence of AL-SB and Oracle ESB, which means that customers of one of those two products (or maybe both) are not going to be happy.

Kurian stated that Oracle sees four types of business processes — system-centric, human-centric, document-centric and decision-centric (which match the Forrester divisions) — but believes that a single product/engine that can handle all of these is the way to go, since few processes fall purely into one of these four categories. They support BPEL for service orchestration and BPMN for modeling, and their plan is to converge on a single platform that supports both BPEL and BPMN (I assume that he means both service orchestration and human-facing workflow). Given that, here’s their strategy for Business Process Management products:

Oracle BPM product strategy

  • Strategic:
    • Oracle BPA Designer for process modeling and simulation
    • BEA AL-BPM Designer for iterative process modeling
    • Oracle BPM, which will be the convergence of BEA AquaLogic BPM and Oracle BPEL Process Manager in a single runtime engine
    • Oracle Document Capture & Imaging for document capture, imaging and document workflow with ERP integration [emphasis mine]
    • Oracle Business Rules as a declarative rules engine
    • Oracle Business Activity Monitoring [same as in SOA section]
    • Oracle WebCenter as a process portal interface to visualize composite processes

Similar to the ESB categorization, I find the classification of the converged Oracle BPM product (BEA AL-BPM and Oracle BPEL PM) as “strategic” to be at odds with his original definition: it should be in the “continue and converge” category since the products are being converged. This convergence is not, however, unexpected: having two separate BPM platforms would just be asking for trouble. In fact, I would say that having two process modelers is also a recipe for trouble: they should look at how to converge the Oracle BPA Designer and the BEA AL-BPM Designer as well.

In the portals and Enterprise 2.0 product area, Kurian was a bit more up-front about how WebLogic Portal and AquaLogic UI are going to be merged into the corresponding Oracle products:

Oracle portal and Enterprise 2.0 product strategy

  • Strategic:
    • Oracle Universal Content Management for content management repository, security, publishing, imaging, records and archival
    • Oracle WebCenter Framework for portal development and Enterprise 2.0 services
    • Oracle WebCenter Spaces & Suite as a packaged self-service portal environment with social computing services
    • BEA Ensemble for lightweight REST-based portal assembly
    • BEA Pathways for social interaction analytics
  • Continue and converge:
    • BEA WebLogic Portal will be integrated into the WebCenter framework
    • BEA AquaLogic User Interaction (AL-UI) will be integrated into WebCenter Spaces & Suite
  • Maintenance:
    • BEA Commerce Services
    • BEA Collabra

In SOA governance:

  • Strategic:
    • BEA AquaLogic Enterprise Repository to capture, share and manage the change of SOA artifacts throughout their lifecycle
    • Oracle Service Registry for UDDI
    • Oracle Web Services Manager for security and QOS policy management on services
    • EM Service Level Management Pack as a management console for service level response time and availability
    • EM SOA Management Pack as a management console for monitoring, tracing and change managing SOA
  • Maintenance:
    • BEA AquaLogic Services Manager

Kurian discussed the implications of this product strategy on Oracle Applications customers: much of this will be transparent to Oracle Applications, since many of these products form the framework on which the applications are built, but are isolated so that customizations don’t touch them. For those changes that will impact the applications, they’ll be introduced gradually. Of course, some Oracle Apps are already certified with BEA products that are now designated as strategic Oracle products.

Oracle has also simplified their middleware pricing and packaging, with products structured into 12 suites:

Oracle Middleware Suites

He summed up with their key messages:

  • They have a clear, well-defined, integrated product strategy
  • They are protecting and enhancing existing customer investments
  • They are broadening Oracle and BEA investment in middleware
  • There is a broad range of choice for customers

The entire briefing will be available soon for replay on Oracle’s website if you’re interested in seeing the full hour and 45 minutes. There’s more information about the middleware products here, and you can sign up to attend an Oracle BEA welcome event in your city.

Service-enable your CICS apps with Metastorm Integration Manager

To finish up my trilogy of posts on legacy integration, I had a look at the Metastorm Integration Manager (MIM), which takes a very different approach from that of OpenSpan. This is based on a look at the product that I did a couple of months ago, and now that Metastorm’s back in the news with a planned IPO, it seemed like the right time.

A lot of what’s in MIM is rooted in the CommerceQuest acquisition of a few years back. In a nutshell, it leverages IBM WebSphere MQ (what we old timers refer to as “MQSeries”) to provide a process-centric, services-oriented ESB-style architecture for integrating systems, both for batch/file transfer and real-time integration, on multiple platforms.

[Image: MIM for CICS architecture]

Like OpenSpan, the idea is to create web services interfaces or non-SOAP APIs around legacy applications, but instead of wrapping the user interface and running on the client, MIM wraps the apps on the server/host side and stores the resultant services in a registry. If you have CICS applications, MIM for CICS runs natively in the CICS environment and service-enables those apps, as well as allowing access to DB2 databases, VSAM and other application types. The real focus is to allow the creation of web services for custom legacy systems; packaged enterprise applications (e.g., SAP) already have their own web services interface, or there’s a well-developed market of vendors already providing them.

Although Metastorm’s BPM has some native integration capability, MIM is there to go beyond the usual email, web services and database integration, especially for mainframe integration. Metastorm BPM can call the message-driven micro-flows or web services created by MIM in order to invoke functionality on the legacy systems and return the results to the BPM process.

I saw a demo of how to create a service to access a VSAM data set, which took no more than 5 minutes: through the MIM Eclipse-based IDE, you access CICS registry, create a new VSAM service, import the record definition from the COBOL copybook for the particular VSAM file, and optionally create metadata definitions to rename ugly field names. Saving the definition generates the WSDL and makes it immediately available, with standard methods for VSAM access created by default.
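Since saving the definition generates WSDL, the resulting service is callable from any SOAP-capable client. As a hedged sketch (the operation and field names below are hypothetical, not MIM's actual generated contract), here is what constructing a SOAP 1.1 request for such a generated VSAM read operation might look like:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/mim/vsam"  # hypothetical target namespace

def build_vsam_read_request(key_field, key_value):
    """Build a SOAP 1.1 request for a hypothetical generated ReadRecord operation."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    op = ET.SubElement(body, "{%s}ReadRecord" % SVC_NS)
    # The key field corresponds to a field imported from the COBOL copybook.
    ET.SubElement(op, "{%s}%s" % (SVC_NS, key_field)).text = key_value
    return ET.tostring(env, encoding="unicode")

request = build_vsam_read_request("CustomerId", "00042")
print(request)
```

In practice a WSDL-aware client library would generate this envelope from the service contract; the point is that the mainframe VSAM file is now just another SOAP endpoint.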

They also showed how to create a service orchestration process flow — an orchestrated set of service calls that could call any of the services in the MIM registry, including invoking batch jobs and managing FTP. With MIM for CICS, everything in a micro-flow is tracked through its auditing subsystem in CICS, even if it calls services that are not in CICS; the process auditing is very detailed, allowing drilldowns into each step to show what was called when, and what data was passed.

Once created, service definitions can be deployed on any platform that MIM supports (Windows, Z-series, i-series, UNIX), and moved between platforms transparently.

[Image: Inbound FTP process]

We spent a bit of time looking at file transfer, which is still a big part of the integration market and isn’t addressed by messaging. MIM provides a way to control the file transfer in an auditable way, using MQ as the backbone and breaking the file into messages. This actually outperforms FTP (each FTP transfer carries inherent overhead, or chattiness), handles many-to-many transfers more effectively, and allows for file-to-message transfers and vice versa, e.g., file creation from messages driven by a SQL statement.
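The break-a-file-into-messages idea can be sketched in a few lines. This is an illustrative toy, not MIM's actual wire format: a local `queue.Queue` stands in for an MQ queue, and the sequence/total header is an assumption:

```python
import queue

CHUNK = 64 * 1024  # hypothetical message payload size

def file_to_messages(data: bytes, q: queue.Queue) -> int:
    """Break a file's bytes into sequenced messages, MQ-style."""
    total = (len(data) + CHUNK - 1) // CHUNK
    for seq in range(total):
        q.put((seq, total, data[seq * CHUNK:(seq + 1) * CHUNK]))
    return total

def messages_to_file(q: queue.Queue) -> bytes:
    """Reassemble the file on the receiving side, honoring sequence numbers."""
    parts = {}
    total = None
    while total is None or len(parts) < total:
        seq, total, payload = q.get()
        parts[seq] = payload
    return b"".join(parts[i] for i in range(total))

q = queue.Queue()
original = b"abc" * 100_000  # roughly 300 KB test payload
file_to_messages(original, q)
rebuilt = messages_to_file(q)
print(rebuilt == original)
```

Because each chunk is an individually persisted, acknowledged message, the transfer inherits MQ's auditability and restartability, which is exactly what raw FTP lacks.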

A directory monitor watches for inbound files and triggers actions based on file names, extensions and/or contents. A translator such as Mercator or TIBCO might be called to transform data, and the output written to multiple systems in different formats, e.g., XML, text, messages, SQL to database, files.
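The directory-monitor pattern described above is a dispatch table keyed on file attributes. A minimal sketch of one polling pass (handler names and routing rules are hypothetical):

```python
import pathlib
import tempfile

# Hypothetical dispatch table: route inbound files by extension, the way
# a directory monitor hands files off to a translator or loader.
HANDLERS = {
    ".xml": lambda p: ("xml-translator", p.name),
    ".csv": lambda p: ("sql-loader", p.name),
}

def dispatch(directory: pathlib.Path):
    """One polling pass: route each recognized file to its handler."""
    routed = []
    for path in sorted(directory.iterdir()):
        handler = HANDLERS.get(path.suffix)
        if handler:
            routed.append(handler(path))
    return routed

with tempfile.TemporaryDirectory() as d:
    inbox = pathlib.Path(d)
    (inbox / "orders.xml").write_text("<orders/>")
    (inbox / "rates.csv").write_text("a,b\n1,2\n")
    results = dispatch(inbox)
print(results)
```

A production monitor would also match on file name patterns and content, and would move or delete files after dispatch to avoid reprocessing.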

MIM for CICS can also drive 3270 green screens in order to extract data, using tools in the design environment to build screen navigation. This runs natively inside CICS, not on a client workstation, so is more efficient and secure than the usual screen-scraping applications.

In addition to all this, MIM can invoke any program on any platform that it supports, feed information to stdin, and capture output from stdout and stderr in the MIM auditing subsystem.
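The invoke-any-program capability is essentially controlled subprocess execution with captured streams. A sketch of the pattern (the audit-record shape is my own illustration, not MIM's):

```python
import subprocess
import sys

# Invoke a program, feed it stdin, and capture stdout/stderr for auditing,
# analogous to what MIM does on any platform it supports.
proc = subprocess.run(
    [sys.executable, "-c",
     "import sys; data = sys.stdin.read(); "
     "print('got ' + data); print('warn', file=sys.stderr)"],
    input="hello", capture_output=True, text=True,
)

# Everything about the invocation lands in an auditable record.
audit_record = {
    "stdout": proc.stdout.strip(),
    "stderr": proc.stderr.strip(),
    "exit_code": proc.returncode,
}
print(audit_record)
```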

[Image: Exception from MIM in BPM, selecting the next MIM step for reinjection]

On its own, this is a pretty powerful set of integration tools for service-enabling legacy applications, both batch and real-time, but they’ve also integrated this into Metastorm BPM. Of course, any service that you define in MIM can be called from any system that can invoke a web service — which includes all BPM systems — but MIM can launch a Metastorm human-facing process for exception handling from within one of its service orchestration processes by passing a MIM exception to the BPM server. The BPM process can pass it back to MIM at a point in the process if that’s allowed, for example if the user corrects data that will allow the orchestration to proceed, and the BPM user may be given the option to select the step in the MIM process at which to reinject the process.

What happens in Metastorm BPM when it is invoked as an exception handler is not tracked in the MIM process auditor; instead, it captures the before and after of the data that’s passed to BPM, and this would need to be reconciled in some way with the analytics in BPM. This separation of processes — into those managed and audited by BPM and those managed and audited by MIM — is an area where some customers are likely to want more integration between the two products in the future. However, if you consider the services created by MIM as true black boxes from the BPM viewpoint, then there’s nothing wrong with separation at this level. It’s my understanding that MIM calls BPM using a web service call, so really any system that can be called as a web service, including most other BPMS, could be called from MIM for exception handling instead.

OpenSpan: mashing up your legacy applications

Want to mashup your legacy applications? I recently had a chance for some background and a demo of OpenSpan, which is one of the tools that you can consider for bringing those legacy apps into the modern age of composite applications.

A big problem with the existing user environment is that it has multiple disparate applications — Windows, legacy, web, whatever — that operate as non-integrated functional silos. This requires re-keying of data between applications, or copy and paste if the user is particularly sophisticated. I see this all the time with my clients, and this is one of the areas that I’m constantly working with them to find improvements through reducing double keying.

In OpenSpan Studio, the visual design environment, you add a Windows (including terminal emulator or even DOS window) or web application that you want to integrate, then use the interrogation tool to interrogate each individual Windows object (e.g., button) or web page object (e.g., text box, hyperlink) to automatically create interfaces to those objects: a very sophisticated form of screen-scraping, if you will. However, you can also capture the events that occur with those objects, such as a button being clicked, and cause that to invoke actions or transfer data to other objects in other applications. Even bookmarks in MS-Word documents show up as entry/access points in the Studio environment.

[Image: OpenSpan Calculator/Google example]

In a couple of minutes, they built and executed a simple example using the Windows calculator and the Google home page: whenever you hit the M+ button in Calculator, it transferred the contents of the calculator to Google and executed a search on it. This is more than the simple pushing of buttons that you typically find in old-style screen-scraping, however: it actually hooks the message queue of the application to allow it to intercept any UI event, which means that other events in other applications can be triggered based on any detected event in the interrogated application, effectively allowing you to extend the functionality of an existing application without changing it.
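Stripped of the Windows message-queue plumbing, the interception pattern is a publish/subscribe hook: observe an event in one application and trigger actions in another without modifying the source application. A toy sketch in that spirit (event names and wiring are hypothetical):

```python
# A toy event hook in the spirit of OpenSpan's interception: subscribers
# register for UI events on one application and trigger actions in another.
class EventHook:
    def __init__(self):
        self.subscribers = {}

    def on(self, event, action):
        self.subscribers.setdefault(event, []).append(action)

    def dispatch(self, event, payload):
        # The original application still handles its event normally;
        # the hook merely observes it and fans out to subscribers.
        for action in self.subscribers.get(event, []):
            action(payload)

searches = []
hook = EventHook()
# Hypothetical wiring: when the calculator's M+ button fires, run a search.
hook.on("calculator.m_plus", lambda value: searches.append(f"search:{value}"))
hook.dispatch("calculator.m_plus", "42")
print(searches)
```

The real product does this by hooking the application's message queue, so the "events" are actual Windows messages rather than named strings, but the fan-out logic is the same.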

A more complex example showed interrogating the FedEx website to bring rate results back to Excel, or to bring them back and compare with UPS rates in a custom UI form that was built right in the Studio environment as well. You don’t have to build a UI in their environment: you can use Visual Studio or some other application builder instead, or build .Net controls in another environment and consume them in the OpenSpan Studio (which is a .Net environment).

[Image: OpenSpan FedEx vs UPS rates UI]

As you would expect from an integration environment, it can also introspect and integrate any web service based on the WSDL, but it can also encapsulate any solution that you create within OpenSpan and expose it as a service, albeit running on the local desktop: a very cool way to service-enable legacy applications. That means that you can make “web” service wrappers around green-screen or legacy Windows applications, exposing them through a SOAP interface, allowing them to be called from any development/mashup environment.

The integrations and applications defined in Studio are saved to an XML file, which is consumed by the lightweight runtime environment, OpenSpan Integrator; it can launch any applications as required or on startup, and manage the integration between them (which was created in Studio). You can use this to do single sign-on — although not in as secure a fashion as dedicated SSO applications — and can even call services between native desktop applications and virtual environments (e.g., a client service within Citrix). Although the Integrator needs to be running in order to access the integrations via SOAP calls, it can run completely in stealth mode, allowing other applications to call it as if it were a true web service.

You can also integrate databases, which allows you to make database calls based on what’s happening on the screen in order to create analytics that never existed before. As with all other aspects of the integration, these are based only on events that happen in the user interface, but that still has the potential to give you a lot more than what you might have now in your legacy applications.

Everything old is new again

Back in the old days (by this, I mean the 1990’s), when we wanted to integrate BPM with a legacy mainframe system, it was messy. In the worst cases, we wrote explicit screen-scraping code to interact with green screens; sometimes we were lucky enough to be able to hook a message queue or interact directly with the underlying mainframe database. Much of the development was tedious and time-consuming, as well as requiring a lot of manual maintenance when anything changed on the legacy side: sometimes I used to think that the mainframe developers intentionally changed things just to mess up our code. Don’t get me wrong, I’m all for keeping the mainframes in there as super-charged application and database servers, but don’t let the users anywhere near them.

These days, there’s a number of tools around to make integration with legacy mainframe systems easier, and although I don’t write code any more, I have a passing interest since most of my customers are financial services organizations that still have a lot of that legacy stuff lying around.

Strangely enough, the impetus for finally writing about the legacy integration tools that I’ve looked at came because of a conversation that I had recently with Sandra Wade, a senior director of product marketing at Software AG, even though we didn’t talk much about their product’s technical capabilities and I didn’t get a demo. Their announcement back in March was a repackaging of mostly existing applications into their Application Modernization Suite, which has three basic flavors:

  • The web edition is webMethods ApplinX, which allows you to web-enable green-screen applications.
  • The SQL edition uses the webMethods ConnecX adapters to provide unified, SQL-based access across heterogeneous data sources, including non-relational sources.
  • The SOA edition bundles ApplinX, webMethods EntireX, webMethods ESB and CentraSite to provide everything required to service-enable legacy applications, including governance.

Although I saw some of the underlying applications when I attended IntegrationWorld last November, I tend to focus more on briefings and sessions when I’m at a conference, so don’t have a really good feeling for the functionality of ApplinX and EntireX, and how they help to web- and service-enable mainframe applications.

I was going to include all three vendors in a single post, but will follow this one with separate posts about OpenSpan and Metastorm Integration Manager so that it doesn’t get too unwieldy.

TUCON: Using BPM to Prioritize Service Creation

Immediately after the Spotfire-BPM session, I was up to talk about using BPM to drive top-down service discovery and definition. I would have posted my slides right away, but one of the audience members pointed out that the arrows in the two diagrams should be bidirectional (I begged forgiveness on the grounds that I’m an engineer, not a graphic artist), so I fixed that up before posting to Slideshare:

My notes that I jotted down before the presentation included the following:

  • SOA should be business focused (even owned by the business): a top-down approach to service definition provides better alignment of services with business needs.
  • The key is to create business-granular services corresponding to business functions: a business abstraction of SOA. This requires business-IT collaboration.
  • Build thin applications/processes and fat services to enable agile business processes. Fat services may have multiple operations for different requirements, e.g., retrieving/updating just the customer name versus the full customer record in an underlying system.
  • Shared business semantics are key to identifying reusable business services: ensure that business analysts creating the process models are using the same terminology.
  • Seek services that have the greatest business value.
  • Use cases can be used to identify candidates for services, as can boundary crossings in activity diagrams.
  • Process decomposition can help identify reusable services, but it’s not possible to decompose and reengineer every process: look for ineffective processes with high strategic value as targets for decomposition.
  • Build the SOA roadmap based on business value.
  • SOA isn’t (just) about creating services, it’s about building business processes and applications from services.
  • Services should be loosely-coupled and location-independent.
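The "fat services" point deserves a concrete illustration: one business-granular service exposing operations at different granularities, so thin processes can take only what they need. A minimal sketch (the service, operations and data are hypothetical):

```python
# "Fat service" sketch: one business-granular service exposes several
# operations at different granularities (names are hypothetical).
CUSTOMERS = {"C1": {"name": "Acme Corp", "address": "1 Main St", "tier": "gold"}}

class CustomerService:
    """A coarse-grained business service fronting an underlying system."""

    def get_name(self, customer_id):
        # Thin operation: just the name, for process steps that need no more.
        return CUSTOMERS[customer_id]["name"]

    def get_record(self, customer_id):
        # Fat operation: the full record, for richer process steps.
        return dict(CUSTOMERS[customer_id])

svc = CustomerService()
print(svc.get_name("C1"))
print(svc.get_record("C1")["tier"])
```

The business analyst sees one service named in business terms ("Customer"); which operation a process step calls is a matter of how much of the record that step actually needs.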

There were some interesting questions arising from this, one being when to put service orchestration in the services layer (i.e., have one service call another) and when to put it in the process layer (i.e., have a process call the services). I see two facets to this: is this a business-level service, and do you want transparency into the service orchestration from the process level? If it’s not a business-level service, then you don’t want business analysts having to learn enough about it to use it in a process. You can still do orchestration of technical services into a business service using BPM, but do that as a subprocess, then expose the subprocess to the business analyst; or push that down to the service level. If you’re orchestrating business-level services into coarser business-level services, then the decision whether to do this at the service or process level is about transparency: do you want the service orchestration to be visible at the process level for monitoring and process tracing?
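The two placements can be sketched side by side. This is an illustrative toy with hypothetical service names: the same two technical services composed once inside an opaque business service, and once in the process layer where each call is visible to monitoring:

```python
# Two hypothetical technical services.
def validate_address(addr):
    return addr.strip().title()

def geocode(addr):
    return {"addr": addr, "lat": 43.65, "lon": -79.38}

def locate_customer(raw_addr):
    """Service-layer orchestration: one opaque business service.
    The process sees a single call; the composition is a black box."""
    return geocode(validate_address(raw_addr))

def process_locate_customer(raw_addr, trace):
    """Process-layer orchestration: each service call is a visible
    process step, so monitoring and tracing see every hop."""
    addr = validate_address(raw_addr)
    trace.append("validate_address")
    result = geocode(addr)
    trace.append("geocode")
    return result

opaque = locate_customer("  1 main st  ")
trace = []
visible = process_locate_customer("  1 main st  ", trace)
print(opaque == visible, trace)
```

Both routes produce the same result; what differs is whether the intermediate steps appear in the process audit trail, which is exactly the transparency trade-off described above.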

This was the first time that I’ve given this presentation, but it was so easy because it came directly out of my experiences. Regardless, it’s good to have that behind me so that I can focus on the afternoon sessions.