JackBe Enterprise Mashlets

The slide deck said “Proprietary & Confidential”, but I was assured by the presenters that I was welcome to blog about JackBe’s webinar on enterprise mashlets. They’ve done a number of webinars in the past that are available for replay, and also have several videos available at JackBe TV (which would be great if I were able to subscribe to it in iTunes).

Today’s presenters are Deepak Alur (VP Engineering) and Kishore Subramanian (“Chief Electrician”). Deepak started by covering widgets and mashups, and how JackBe advanced those concepts to what they call mashlets: a platform for enterprise mashup widgets. We’re all inundated with widgets these days — everything from badges that we add to our website sidebars to our own customizable dashboard of widgets such as iGoogle — but many of the consumer-oriented widgets provide access to only a single source of information and allow minimal customization. They’re useful points of data visualization that can be easily assembled into a portal environment, but typically don’t interoperate and may be display-only. Enterprise widgets need more than that: they have to live within the enterprise stack with respect to security, access to enterprise services and data, and proper IT management.

A mashup, on the other hand, integrates content from more than one source, but has often been too technical for a user to create. Mashups are gaining acceptance within enterprises, since they provide lightweight methods and platforms for creating situational applications that can be deployed quickly, with very little development effort.

There are a number of reasons to consider widgets and mashups together, since they share several characteristics — using building blocks to quickly assemble new web applications — which drove JackBe to create mashlets. In their definition, mashlets are user-oriented micro-applications that are secure and governed for enterprise use, providing the visualization, or “face”, of a mashup to be embedded in a web page. Unlike simple widgets, they’re context-sensitive and dynamic, allowing multiple mashlets on a single page to interact. Here’s how widgets and mashlets compare on a number of factors:

Factors             | Widgets     | Mashlets
--------------------|-------------|---------------
Consumer/Enterprise | Consumer    | Both
Novelty/Business    | Depends     | Business
Display/Input       | Display     | Both
User/Programmer     | Programmer  | Both, governed
Visual/Non-Visual   | Visual      | Both
Client/Server side  | Client side | Both
Web services/data   | Programmer  | Plug and play
Secure              | Depends     | Enterprise
Ability to embed    | Yes         | Enterprise
Managed             | No          | Enterprise
Shareable           | Ecosystem   | Enterprise

[Image: JackBe mashlet portfolio example]

We saw a number of quick demos of mashlets created in JackBe’s Presto platform. There are some nice built-in features for the mashlets: for example, exposing the code to embed a mashlet within another page (much like what YouTube gives you to embed a video in a web page), and the code to embed it within a MediaWiki wiki, as well as allowing them to run as standalone pages. We saw an example of a stock trading page with multiple mashlets, where entering a trade in one mashlet caused data in the portfolio positions mashlet to update automatically.

Presto is compatible with portal standards, so it can be embedded within a standards-based environment such as Oracle Portal, or in environments such as Netvibes and iGoogle.

[Image: JackBe building a mashup service]

All of the early examples showed using mashlets that had been created by developers, but we then looked at what’s required to actually create a mashlet. This is done in their visual composition tool, Wires (hence the “Chief Electrician” title), where you can drag services onto the workspace and connect them up to create a mashup — visually, somewhat similar to Yahoo Pipes — and save the results as a service that can be published and made available for consumption. The services can be run at any point to check the output, making it easy to debug as you go along. Once that’s done, a mashlet can be created from that mashup service by specifying the visualization form, e.g., a specific chart type or a data grid. Like many über-techies, the JackBe guys casually stated that this could be done by “end users”. Um, I don’t think so. Or, at least, not by most of the end users that I see in my day-to-day practice. But it is pretty easy for anyone with a bit of a technical background or inclination.

[Image: JackBe mashlet embedded in MediaWiki page]

Presto appears to also act as a repository/directory for the mashup services and mashlets, serving these up to whatever pages are consuming them. Mashlets can be hosted on any web server, and once delivered to the browser, they live in the browser until the session ends, communicating with the mashup server via their PrestoConnect connector.

There are a few key differentiators for JackBe’s Presto mashlets:

  • Enterprise security for authentication and authorization
  • Inter-mashlet publish/subscribe to allow mashlets to exchange information (see the sketch after this list)
  • Consumption of a wide range of data and services sources
  • UI framework independence
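
The inter-mashlet publish/subscribe is what makes multiple mashlets on a page interact, as in the stock trading demo. JackBe didn’t show their API during the webinar, so here’s a minimal, purely illustrative sketch of the pattern in Python; every name in it is mine, not theirs.

```python
# Minimal publish/subscribe sketch of how cooperating mashlets might
# exchange information on a page. All names here are hypothetical --
# JackBe did not show their actual API in the webinar.
from collections import defaultdict
from typing import Callable

class EventBus:
    """A toy in-page event bus: mashlets publish and subscribe by topic."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()

# The portfolio-positions mashlet listens for trades...
bus.subscribe("trade.executed", lambda t: print("Refreshing positions:", t))

# ...and the trade-entry mashlet publishes when the user enters one.
bus.publish("trade.executed", {"symbol": "IBM", "qty": 100, "side": "buy"})
```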

This was a full hour with not a lot of time for Q&A; I look forward to seeing more of this at the Enterprise 2.0 conference in Boston in a few weeks.

Service-enable your CICS apps with Metastorm Integration Manager

To finish up my trilogy of posts on legacy integration, I had a look at the Metastorm Integration Manager (MIM), which takes a very different approach from that of OpenSpan. This is based on a look at the product that I did a couple of months ago, and now that Metastorm’s back in the news with a planned IPO, it seemed like the right time.

A lot of what’s in MIM is rooted in the CommerceQuest acquisition of a few years back. In a nutshell, it leverages IBM WebSphere MQ (what we old timers refer to as “MQSeries”) to provide a process-centric, services-oriented ESB-style architecture for integrating systems, both for batch/file transfer and real-time integration, on multiple platforms.

[Image: MIM for CICS architecture]

Like OpenSpan, the idea is to create web services interfaces or non-SOAP APIs around legacy applications, but instead of wrapping the user interface and running on the client, MIM wraps the apps on the server/host side and stores the resultant services in a registry. If you have CICS applications, MIM for CICS runs natively in the CICS environment and service-enables those apps, as well as allowing access to DB2 databases, VSAM files and other application types. The real focus is to allow the creation of web services for custom legacy systems; packaged enterprise applications (e.g., SAP) already have their own web services interfaces, or there’s a well-developed market of vendors already providing them.

Although Metastorm’s BPM has some native integration capability, MIM is there to go beyond the usual email, web services and database integration, especially for mainframe integration. Metastorm BPM can call the message-driven micro-flows or web services created by MIM in order to invoke functionality on the legacy systems and return the results to the BPM process.

I saw a demo of how to create a service to access a VSAM data set, which took no more than 5 minutes: through the MIM Eclipse-based IDE, you access the CICS registry, create a new VSAM service, import the record definition from the COBOL copybook for the particular VSAM file, and optionally create metadata definitions to rename ugly field names. Saving the definition generates the WSDL and makes the service immediately available, with standard methods for VSAM access created by default.
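
To give a sense of what consuming one of those generated services might look like, here’s a hypothetical client call using the Python zeep SOAP library. The WSDL URL, operation and field names are all invented for illustration; MIM defines the real ones when you save the service definition.

```python
# Hypothetical client call to a MIM-generated VSAM service. The WSDL URL,
# operation name and fields are invented for illustration; MIM generates
# the actual WSDL when you save the service definition in the IDE.
from zeep import Client

client = Client("http://mim-server.example.com/services/CustomerVsam?wsdl")

# Read-by-key is the kind of standard method generated by default for VSAM access.
record = client.service.ReadByKey(CustomerId="0042717")
print(record.CustomerName, record.AccountBalance)
```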

They also showed how to create a service orchestration process flow — an orchestrated set of service calls that could call any of the services in the MIM registry, including invoking batch jobs and managing FTP. With MIM for CICS, everything in a micro-flow is tracked through its auditing subsystem in CICS, even if it calls services that are not in CICS; the process auditing is very detailed, allowing drilldowns into each step to show what was called when, and what data was passed.

Once created, service definitions can be deployed on any platform that MIM supports (Windows, zSeries, iSeries, UNIX), and moved between platforms transparently.

[Image: Inbound FTP process]

We spent a bit of time looking at file transfer, which is still a big part of the integration market and isn’t addressed by messaging. MIM provides a way to control file transfer in an auditable way, using MQ as the backbone and breaking the file into messages. This actually outperforms FTP, since each FTP transfer carries inherent overhead (chattiness); it also handles many-to-many transfers more effectively, and allows for file-to-message transfers and vice versa, e.g., creating a file from messages driven by a SQL statement.
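
They didn’t show the internals, but the core idea of breaking a file into messages is easy to sketch. Here’s a rough, self-contained Python illustration that uses an in-process queue as a stand-in for WebSphere MQ; a real implementation would put the chunks on MQ itself (e.g., via a library such as pymqi) and add MIM’s sequencing and audit trail.

```python
# Sketch of the file-to-messages idea: split a file into fixed-size chunks,
# tag each with sequence metadata so the receiver can reassemble and audit
# the transfer. A stand-in queue keeps this self-contained; MIM would use
# WebSphere MQ and its own auditing. Details here are illustrative only.
import hashlib
import queue

CHUNK_SIZE = 64 * 1024  # per-message payload size; MQ messages are bounded

def file_to_messages(path: str, q: queue.Queue) -> None:
    """Split a file into sequenced, checksummed messages for reassembly."""
    seq = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            q.put({
                "file": path,
                "seq": seq,
                "checksum": hashlib.md5(chunk).hexdigest(),
                "payload": chunk,
            })
            seq += 1
    q.put({"file": path, "seq": seq, "eof": True})  # end-of-transfer marker
```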

A directory monitor watches for inbound files and triggers actions based on file names, extensions and/or contents. A translator such as Mercator or TIBCO might be called to transform data, and the output written to multiple systems in different formats, e.g., XML, text, messages, SQL to database, files.
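
MIM’s monitor is configuration-driven rather than coded, but the trigger pattern it provides looks something like this simple polling sketch (the directory, extensions and handlers are mine for illustration):

```python
# Toy directory monitor: poll a drop directory and trigger an action based
# on the file extension, the same trigger pattern MIM's monitor provides
# through configuration. Paths and handlers are invented for illustration.
import time
from pathlib import Path

HANDLERS = {
    ".xml": lambda p: print(f"Routing {p} to the translator"),
    ".csv": lambda p: print(f"Loading {p} into the database via SQL"),
}

def watch(drop_dir: str, interval: float = 5.0) -> None:
    """Poll a drop directory and dispatch newly arrived files."""
    seen: set[Path] = set()
    while True:
        for path in Path(drop_dir).iterdir():
            if path.is_file() and path not in seen:
                seen.add(path)
                handler = HANDLERS.get(path.suffix.lower())
                if handler:
                    handler(path)
        time.sleep(interval)
```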

MIM for CICS can also drive 3270 green screens in order to extract data, using tools in the design environment to build screen navigation. This runs natively inside CICS, not on a client workstation, so is more efficient and secure than the usual screen-scraping applications.

In addition to all this, MIM can invoke any program on any platform that it supports, feed information to stdin, and capture output from stdout and stderr in the MIM auditing subsystem.

[Image: Exception from MIM in BPM - select next MIM step for reinjection]

On its own, this is a pretty powerful set of integration tools for service-enabling legacy applications, both batch and real-time, but they’ve also integrated this into Metastorm BPM. Of course, any service that you define in MIM can be called from any system that can invoke a web service — which includes all BPM systems — but MIM can also launch a Metastorm human-facing process for exception handling from within one of its service orchestration processes by passing a MIM exception to the BPM server. The BPM process can pass it back to MIM at a point in the process if that’s allowed, for example if the user corrects data that will allow the orchestration to proceed, and the BPM user may be given the option to select the step in the MIM process at which to reinject the process.

What happens in Metastorm BPM when it is invoked as an exception handler is not tracked in the MIM process auditor; instead, it captures the before and after of the data that’s passed to BPM, and this would need to be reconciled in some way with the analytics in BPM. This separation of processes — into those managed and audited by BPM and those managed and audited by MIM — is an area where some customers are likely to want more integration between the two products in the future. However, if you consider the services created by MIM as true black boxes from the BPM viewpoint, then there’s nothing wrong with separation at this level. It’s my understanding that MIM calls BPM using a web service call, so really any system that can be called as a web service, including most other BPMS, could be called from MIM for exception handling instead.

What’s on Page 123

James Taylor tagged me in the recent blogging meme, “What’s on Page 123”, where I have to write about the book that I’m currently reading, and quote the 6th to 8th sentences on page 123.

I always have a few books on the go, but just started re-reading Flatland: A Romance of Many Dimensions, by Edwin Abbott. The book barely has 123 pages — my edition ends on page 130 — but here’s the excerpt from that page:

But it occurred to me that a young and docile Hexagon, with a mathematical turn, would be a most suitable pupil. Why therefore not make my first experiment with my little precocious Grandson, whose casual remarks on the meaning of 3³ had met with the approval of the Sphere? Discussing the matter with him, a mere boy, I should be in perfect safety; for he would know nothing of the Proclamation of the Council; whereas I could not feel sure that my Sons–so greatly did their patriotism and reverence for the Circles predominate over mere blind affection–might not feel compelled to hand me over to the Prefect, if they found me seriously maintaining the seditious heresy of the Third Dimension.

I first read this book in late high school or university (yes, I’m a math geek), and re-read it when doing graduate studies in multi-dimensional pattern analysis, since it helped me to think about dimensions beyond those that we perceive. I won’t summarize the whole book — Wikipedia has a good summary, and I recommend that you pick up a copy of Flatland and read it for yourself — but it operates on two levels. First, it’s a mathematical treatise disguised as an allegory: an inhabitant of Flatland, a two-dimensional world, is visited by a Sphere, who attempts to educate him about a three-dimensional world (given that the Flatlander is a Square, this is probably the first true instance of “thinking outside the box” 🙂 ). The subtext of the story, however, is a satire of the class and religious structures of Victorian society at the time the book was written (1884).

I recently recommended Flatland to my other half, who has been writing a story about Sigma, but he just couldn’t get into it; I, however, am enjoying this reading of it as much as I did the first.

I’m bouncing this meme over to Bob McIlree, who is undoubtedly reading something more current about enterprise architecture, and Kate Trgovac, whose blog always introduces me to the coolest stuff and therefore must be reading something interesting. I’m probably supposed to tag five people, but James only tagged me so I figure that I can take some artistic license with this.

OpenSpan: mashing up your legacy applications

Want to mash up your legacy applications? I recently had a chance for some background and a demo of OpenSpan, one of the tools that you can consider for bringing those legacy apps into the modern age of composite applications.

A big problem with the existing user environment is that it has multiple disparate applications — Windows, legacy, web, whatever — that operate as non-integrated functional silos. This requires re-keying of data between applications, or copy and paste if the user is particularly sophisticated. I see this all the time with my clients, and this is one of the areas where I’m constantly working with them to find improvements by reducing double keying.

In OpenSpan Studio, the visual design environment, you add a Windows application (including a terminal emulator or even a DOS window) or web application that you want to integrate, then use the interrogation tool to interrogate each individual Windows object (e.g., button) or web page object (e.g., text box, hyperlink) to automatically create interfaces to those objects: a very sophisticated form of screen-scraping, if you will. However, you can also capture the events that occur with those objects, such as a button being clicked, and cause that to invoke actions or transfer data to other objects in other applications. Even bookmarks in MS-Word documents show up as entry/access points in the Studio environment.

[Image: OpenSpan Calculator/Google example]

In a couple of minutes, they built and executed a simple example using the Windows calculator and the Google home page: whenever you hit the M+ button in Calculator, it transferred the contents of the calculator display to Google and executed a search on it. This is more than the simple pushing of buttons that you typically find in old-style screen-scraping, however; it actually hooks the message queue of the application to allow it to intercept any UI event, which means that other events in other applications can be triggered based on any detected event in the interrogated application: effectively allowing you to extend the functionality of an existing application without changing it.
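
There’s no way to show the actual Windows message-queue hook in a few lines, but the wiring concept (an intercepted UI event bound to an action in another application) reduces to something like this deliberately simplified sketch, with the hook itself faked:

```python
# Conceptual sketch only: OpenSpan hooks the real Windows message queue,
# which is far more involved than this. Here the "hook" is simulated so
# that the wiring -- an intercepted UI event triggering an action in
# another application -- is visible. All names are illustrative.
import webbrowser
from typing import Callable
from urllib.parse import quote_plus

bindings: dict[str, Callable[[str], None]] = {}

def on_event(event_name: str, action: Callable[[str], None]) -> None:
    """Register an action against an intercepted UI event."""
    bindings[event_name] = action

def dispatch(event_name: str, value: str) -> None:
    """Stand-in for the message-queue hook delivering an intercepted event."""
    if event_name in bindings:
        bindings[event_name](value)

# Wire the calculator's M+ click to a Google search on the display value.
on_event("Calculator.MPlus.Click",
         lambda v: webbrowser.open(f"https://www.google.com/search?q={quote_plus(v)}"))

dispatch("Calculator.MPlus.Click", "42")  # simulate the user clicking M+
```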

A more complex example showed interrogating rate quotes on the FedEx website to bring the results back to Excel, or to bring them back and compare them with UPS rates in a custom UI form that was built right in the Studio environment as well. You don’t have to build a UI in their environment: you can use Visual Studio or some other application builder instead, or build .Net controls in another environment and consume them in OpenSpan Studio (which is a .Net environment).

[Image: OpenSpan FedEx vs. UPS rates UI]

As you would expect from an integration environment, it can introspect and integrate any web service based on its WSDL, but it can also encapsulate any solution that you create within OpenSpan and expose it as a service, albeit one running on the local desktop: a very cool way to service-enable legacy applications. That means that you can make “web” service wrappers around green-screen or legacy Windows applications, exposing them through a SOAP interface and allowing them to be called from any development/mashup environment.

The integrations and applications defined in Studio are saved to an XML file, which is consumed by the lightweight runtime environment, OpenSpan Integrator; it can launch any applications as required or on startup, and manage the integration between them (which was created in Studio). You can use this to do single sign-on — although not in as secure a fashion as dedicated SSO applications — and can even call services between native desktop applications and virtual environments (e.g., a client service within Citrix). Although the Integrator needs to be running in order to access the integrations via SOAP calls, it can run completely in stealth mode, allowing other applications to call it as if it were a true web service.
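
To make the “local web service” idea concrete, here’s a bare-bones sketch of the shape of what the Integrator exposes: a desktop-hosted HTTP endpoint that accepts SOAP POSTs. The envelope, operation and port number are invented, since OpenSpan’s actual wire format wasn’t shown.

```python
# Bare-bones illustration of a locally hosted SOAP-style endpoint, roughly
# the shape of what the Integrator exposes on the desktop. The envelope,
# operation and port number are invented; OpenSpan's actual wire format
# was not shown in the demo.
from http.server import BaseHTTPRequestHandler, HTTPServer

RESPONSE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><LookupRateResponse><Rate>12.50</Rate></LookupRateResponse></soap:Body>
</soap:Envelope>"""

class SoapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A real handler would parse the SOAP request body and drive the
        # wrapped legacy application's UI to produce the answer; this one
        # just consumes the request and returns a canned response.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(RESPONSE.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8089), SoapHandler).serve_forever()
```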

You can also integrate databases, which allows you to make database calls based on what’s happening on the screen in order to create analytics that never existed before. As with all other aspects of the integration, these are based only on events that happen in the user interface, but that still has the potential to give you a lot more than what you might have now in your legacy applications.

Media relations, the old-fashioned way

I attend a lot of conferences, and blog about them while I’m there. This is good for me in a couple of ways: it gives me lots of things to write about, hence increases my blog readership and therefore my exposure to potential customers and networking contacts, and I usually learn something by attending conference sessions. It’s also good for the conference organizers, since my blogging becomes publicity for the conferences or for related conferences or products. This symbiotic relationship is why I don’t expect to have to pay for admission to any conference that I’m blogging about, and why vendors not only give me free admission but also cover my travel expenses to attend their user conferences.

Some of the large software companies are starting to treat bloggers as regular members of the press, or as analysts, hence are including them in the expenses-paid press events such as conferences without special requests.

It surprises me, after that, to see the old-fashioned way in which some conferences still view the media credentials process. For example, there was a conference recently in Toronto (where I live, so no travel costs) that had a couple of interesting tracks on SaaS and Web 2.0, although large parts of it weren’t of interest to me. I received the standard attendee invitation, and emailed back to say that I was a blogger and ask for a press pass to the conference. Usually when this happens, the immediate response is “sure”, so I thought that it was weird to receive no response after a week. A friend of a friend recommended a different contact, I emailed again, no response. The friend of a friend then poked them directly, and finally I had a response from the conference organizer with a link to the media registration form on their site: a PDF that I’m supposed to fill out and fax back to them. No, that’s not a typo, I said “fax”. Welcome to 1985.

I then checked out their required press qualifications:

  • Editorial representatives, one of the following:
    • A business card with your name and title from an industry publication
    • The masthead page of a current industry publication with your name listed
    • A copy of a current by-lined article
  • Freelance writers:
    • A letter from the editor of an industry trade publication stating your assignment is to cover the [conference name] Conference for that publication.
  • Web/Internet media representatives:
    • Printed proof of the site demonstrating content to Linux/Open Source and/or Network technologies and/or Storage/Security technologies
    • Proof that the site has subscribers that are qualified and the site is secure.
  • Videographer Reporters & Magazine Producers from recognized broadcast media:
    • Business card with your name and title from a recognized broadcast media organization.
  • Press members w/press cards:
    • A photocopy of your press card

I assume that I fit into the web/internet media representatives category, so checking out the qualifications… printed proof? As in printed on paper? This is starting to sound like a joke. And the second requirement: “proof that the site has subscribers that are qualified” — qualified for what? — “and the site is secure” — secure from what, in what way, or by what standards? Add to that the fact that the part of the conference that I want to cover has nothing to do with Linux/open source, network technologies or storage/security technologies.

I duly sent off an email to the publicist explaining that I’m an analyst and blog about a number of topics, including SaaS and Enterprise 2.0, and pointing her to relevant posts and articles of mine online. Of course, I didn’t fax it in, and I linked to the posts and articles rather than printing them, so I may have risked disqualification for those reasons alone.

Several days later, and only a couple of days before the event was to start, I finally heard back from the publicist:

There are some bloggers who request media badges, but they only blog every now and then and they just use it as a guise so they can attend events like [event name] for free. That’s why providing media badges to bloggers is evaluated on a case-by-case basis.

Um, yeah. I really wanted to blow off two days of billable time in order to not get paid to go to a conference where I would pretend to be a real blogger.

The best part happened a couple of days after the conference, when the organizer (the one who couldn’t be bothered to answer my original emails) called me to complain that he felt my blog posts cast the conference in a negative light — although I had been mostly positive — and wanted me to change them, but was unwilling to post a comment on my blog because, as he said, “some things should just be settled in private”.

I don’t want to pick on this little conference or its organizers, since I see the same thing from much larger conference organizers and from vendors. In the past month, I’ve had a vendor ask me to change a post that I had made about their product but refuse to comment on the post themselves, and another vendor pay my expenses to be at their conference but not let me blog. Vendors and their PR people are taking a lot of heat lately, and for good reason: the new world order of press is about transparency, and many of the big guys aren’t quite comfortable with that yet. There are many exceptions to that — I have to say that SAP’s blogger relations is a stunning example of how to do it right — but there needs to be a lot more open communication in the industry to make things better for the consumers of the technology.

Everything old is new again

Back in the old days (by this, I mean the 1990s), when we wanted to integrate BPM with a legacy mainframe system, it was messy. In the worst cases, we wrote explicit screen-scraping code to interact with green screens; sometimes we were lucky enough to be able to hook a message queue or interact directly with the underlying mainframe database. Much of the development was tedious and time-consuming, as well as requiring a lot of manual maintenance when anything changed on the legacy side: sometimes I used to think that the mainframe developers intentionally changed things just to mess up our code. Don’t get me wrong, I’m all for keeping the mainframes in there as super-charged application and database servers, but don’t let the users anywhere near them.

These days, there are a number of tools around to make integration with legacy mainframe systems easier, and although I don’t write code any more, I have a passing interest since most of my customers are financial services organizations that still have a lot of that legacy stuff lying around.

Strangely enough, the impetus for finally writing about the legacy integration tools that I’ve looked at came because of a conversation that I had recently with Sandra Wade, a senior director of product marketing at Software AG, even though we didn’t talk much about their product’s technical capabilities and I didn’t get a demo. Their announcement back in March was a repackaging of mostly existing applications into their Application Modernization Suite, which has three basic flavors:

  • The web edition is webMethods ApplinX, which allows you to web-enable green-screen applications.
  • The SQL edition uses the webMethods ConnecX adapters to provide unified, SQL-based access across heterogeneous data sources, including non-relational sources.
  • The SOA edition bundles ApplinX, webMethods EntireX, webMethods ESB and CentraSite to provide everything required to service-enable legacy applications, including governance.

Although I saw some of the underlying applications when I attended IntegrationWorld last November, I tend to focus more on briefings and sessions when I’m at a conference, so I don’t have a really good feel for the functionality of ApplinX and EntireX, and how they help to web- and service-enable mainframe applications.

I was going to include all three vendors in a single post, but will follow this one with separate posts about OpenSpan and Metastorm Integration Manager so that it doesn’t get too unwieldy.

TUCON: Keynote Day 2

Tom Laffey was back hosting the keynote, dressed in a cycling shirt from Team TIBCO, one of the best US women’s pro cycling teams. He was joined briefly by a member of the team who also happens to hold a Ph.D. in biology; like any geeky engineer, Laffey giggled nervously in the presence of an attractive, brainy woman in form-fitting cycling gear, although I suspect that some of the nervousness was due to the pair of cycling shorts that she was handing him to try on. 🙂

Having covered the product announcements yesterday, this morning’s keynote moved to a customer focus, starting with Simon Post, CTO of Carphone Warehouse, discussing how they improved the processes within their IT department. He made an excellent point: there is no "ERP for IT", that is, packaged software for running an IT business; this forces large IT groups to roll their own process improvement efforts instead. They have the capability to do it, but that’s not the point: IT departments are there to provide services to the business, not to spend time building systems for themselves unless no packaged software exists or they need custom capability for a competitive advantage. Carphone Warehouse uses TIBCO products extensively for their IT processes and systems: iProcess and BusinessEvents for the process layer, BusinessWorks for system orchestration, and EMS for messaging. They haven’t stopped at IT processes, however; they’re building their service-oriented architecture and rolling out services across the enterprise, facilitating reuse and reducing costs as they set up new locations in several countries.

I ducked out after that to review notes for my presentation, coming up at 11:30, since I want to take the time to see the Spotfire+BPM session that’s on just before mine.

TUCON: Merck’s SAP Integration Strategy

Daniel Freed of Merck discussed their SAP implementation, and how their integration strategy uses TIBCO to integrate with non-SAP systems. As with Connie Moore’s presentation this morning, the room was packed (I’m sitting on the floor, with others standing around the perimeter of the room), and I have to believe that TIBCO completely underestimated attendees’ interest in BPM, since we’re in a room that is half the size (or less) of those for some of the other streams. Of course, this presentation is really about application integration rather than BPM…

They have four main integration scenarios:

  • Master data replication (since each system expects to maintain its own data, but SAP is typically the true master data source), both event-driven publish-subscribe and batch point-to-point.
  • Cross-system business process, using event-driven publish-subscribe and event-driven point-to-point.
  • Analytical extraction/consolidation with batch point-to-point from operational systems to the data warehouse.
  • Business to business, with event-driven point-to-point as well as event-driven publish-subscribe and batch point-to-point.

They have some basic principles for integration:

  • Architect for loosely coupled connectivity, in order to increase flexibility and improve BPM; the key implication is that they needed to move from point-to-point integrations to a hub-and-spoke architecture, publish from the source to all targets rather than chaining from one system to another, and use canonical data models (see the sketch after this list).
  • Leverage industry standards and best practices
  • Build and use shared services
  • Architect for "real-time business" first
  • Proactively engage the business in considering new opportunities enabled by new integration capabilities
  • Architect to insulate Merck from external complexity
  • Design for end-to-end monitoring
  • Leverage integration technology to minimize application remediation (i.e., changes to SAP) required to support integration requirements
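
That first principle is the crux of the whole strategy, and it’s easy to sketch schematically: map the SAP record into a canonical model at the source, then publish once to a hub that fans it out to every subscriber, instead of chaining from one system to the next. The mapping below uses standard SAP material master field names, but everything else is invented for illustration.

```python
# Schematic sketch of the "publish once, canonically" principle: the SAP
# record is mapped into a canonical data model at the source, then fanned
# out to every subscribing target. MATNR/MAKTX/MEINS are standard SAP
# material master fields; all other names are invented for illustration.

def to_canonical(sap_record: dict) -> dict:
    """Map an SAP-specific record into the shared canonical model."""
    return {
        "materialId": sap_record["MATNR"],    # SAP material number
        "description": sap_record["MAKTX"],   # material description
        "unitOfMeasure": sap_record["MEINS"], # base unit of measure
    }

# Targets register against the hub; none of them sees SAP field names.
subscribers = {
    "CRM": lambda m: print("CRM upserts material", m["materialId"]),
    "Payroll": lambda m: print("Payroll ignores", m["materialId"]),
}

def publish(canonical_message: dict) -> None:
    # One publication from the source of record reaches every target; no
    # system re-publishes to the next one down a chain.
    for deliver in subscribers.values():
        deliver(canonical_message)

publish(to_canonical({"MATNR": "M-1001", "MAKTX": "Widget", "MEINS": "EA"}))
```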

SAP, of course, isn’t just one monolithic system: Merck is using multiple SAP components (ECC, GTS, SCM, etc.) that have out-of-the-box integration provided by SAP through Process Integrator (PI), and Merck doesn’t plan to switch out PI for TIBCO. Instead, PI bridges to TIBCO’s bus, then all other applications (CRM, payroll, etc.) connect to TIBCO.

Gowri Chelliah of HCL (the TIBCO partner involved in the project) then discussed some of the common services that they developed for the Merck project, including auditing, error handling, cross-referencing, monitoring, and B2B services. He covered the error handling, monitoring, cross-reference and B2B services in more detail, showing the specific components, adapters and technologies used for each.

Freed came back up to discuss their key success factors:

  • Organizational
    • Creation of shared services
    • Leverage global sourcing model
  • Strategy
    • Integration strategy updated for SAP
    • Buy-in from business on integration strategy
  • Program management
    • High visibility into the development process
  • Process
    • Comprehensive on-boarding process for quick ramp-up
    • Factory approach to integration — de-skill certain tasks and roles to leverage less experienced and/or offshore resources
    • Thorough and well-documented unit testing
    • Blogs and wiki for knowledge dissemination and sharing within the team, since it was spread over 5 cities
  • Governance
    • Architecture team responsible for consistency and reuse
  • Architecture
    • Defined integration patterns and criteria for applicability
    • Enhanced common services and frameworks
    • Architecture defined to support multiple versions of services and canonical data models
  • Implementation
    • Development templates for integration patterns
    • Canonical data models designed early

In short, they’ve done a pretty massive integration project with SAP at the heart of their systems, and use TIBCO (and its bridge to SAP’s PI) to move towards a primarily event-driven publish-subscribe integration with all other systems.