Ultimus: Me on the Future of BPM

Here’s the presentation that I just delivered at the Ultimus user conference:

This was the first time that I’ve given this material in this format, but since it’s a combination of so much that I’m already writing and talking about, it flowed pretty well. I’m writing a paper for publication right now on Enterprise 2.0 and BPM, which will expand on some of these ideas.

PegaWorld: Mashups with IAC

I thought that I should attend one technical/product breakout, since I’ve been covering customer case studies so far, and I wanted to get a closer look at Pega’s composite application development environment for internet applications, Internet Application Composer. I had a briefing on this a few months back, so I have some notes and screenshots from that time that I’ll incorporate here as well.

IAC embeds PRPC application gadgets — like a worklist — on existing web pages, allowing SmartBPM functionality to be added directly and securely. This is like having PRPC on the web: actually exposing Pega functionality as Javascript gadgets, rather than just a back-end system that supports a website. This is not a limited set of pre-made gadgets, but the ability to turn a UI created in PRPC into a gadget.


IAC has three main components:

  • Composite Gadget Manager, a Javascript file that you load on your web page to implement your Pega gadgets. It controls configuration settings, plus the attributes, actions and events for the gadgets.
  • An existing PRPC application, running on a web node (behind the firewall); a web node is a separate PRPC node designated specifically for handling IAC requests, where all functionality is disabled by default unless explicitly enabled for the web applications. This makes it possible to globally restrict certain types of access and functions on that node, rather than having to build that into the user interface.
  • IAC Composite Gateway, which is a servlet (in the DMZ) that manages the HTTP traffic between the Pega gadgets and the SmartBPM application on the web node.

As with most web applications that interact with behind-the-firewall systems, the best practice is to have the web application authenticate the user and pass the trusted session to the Composite Gateway, which in turn passes it to the web node behind the firewall.

Not only can PRPC gadgets be created and exposed, but legacy applications can be wrapped in SmartBPM to expose them to the web more easily. Since the PRPC application controls the view through the gadget as well as the internal view, any changes to the PRPC UI will be reflected both internally and in the gadget, without changing the web page itself: build once, deploy everywhere.

There’s tighter security than in many consumer mashup architectures: parameters are encrypted and obfuscated within URLs, for example.

The interface within the gadgets is rich, e.g., if one parameter is changed, a related parameter may update without a screen refresh. Gadgets on a page can interoperate, so a change within one gadget may cause another gadget’s data to update, such as showing the details for a selected item: this is enabled with some advanced actions and events, and uses JSON.

We looked at the actual HTML required to add a gadget to a web page: there’s a pretty small block of Javascript up front to set the configuration, then a 10-line block in a <div> structure that actually embeds the gadget.
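As a rough illustration of that structure, here’s a hedged sketch; the script filename, configuration properties and div attributes are placeholders that I’ve invented, not Pega’s actual names:

```html
<!-- Hypothetical sketch of embedding an IAC gadget: a small configuration script up front,
     then a div block that hosts the gadget. All names below are invented placeholders. -->
<script type="text/javascript" src="/prgateway/CompositeGadgetManager.js"></script>
<script type="text/javascript">
  var gadgetConfig = {
    gatewayURL: "https://www.example.com/prgateway",  // IAC Composite Gateway servlet in the DMZ
    application: "ClaimsSmartBPMApp",                 // PRPC application on the web node
    gadgets: ["Worklist"]                             // which PRPC UI to expose as a gadget
  };
</script>

<!-- The block that actually embeds the worklist gadget on the page -->
<div id="worklist-gadget" class="pega-gadget">
  Loading worklist...
</div>
```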

IAC was released earlier this year, but a 2.0 version is available as of the end of September that includes HTTPS support, support for load-balanced web nodes and gateways, tracing tools for debugging, samples and more. If you’re a Pega customer, you can access a number of in-depth technical articles about IAC on the Pega Developer Network.

SAP SME Day: Business ByDesign deep dive

Up next is a deep dive on Business ByDesign, the SaaS offering for SMEs. The deep dives so far have been kind of shallow, mostly centered on sales, marketing, pricing and packaging of the products rather than on functionality. We’re also running 45 minutes late, and seem to be getting later with each session.

This session is particularly interesting because of the analogy to SaaS BPM: these are mission-critical business systems, responsible for the day-to-day business processes, and there are some significant issues with customer acceptance of their core processes existing in the cloud.

I hadn’t seen Business ByDesign before — somehow I missed it at SAPPHIRE — so it was interesting to have Rainer Zinow, SVP SME Global Services, give us a demo.

The system is role based, so that functionality is exposed depending on the user’s role. Apparently, there’s some basic document management, but we didn’t see that.

The system is built on an in-memory architecture for both transactions and analytics, using a search engine rather than a database (similar to some ideas that I saw at FASTforward); transactions cause database writes, but client applications are always served from memory.

There are some pretty complete analytics available, where you can drill down into specific items of interest, and even link directly back to the transaction on the ERP side, something that you couldn’t easily do with non-integrated BI.

There’s some lightweight workflow, really just manual routing to a person’s inbox that also allows a work item to be forwarded to someone else.

One of the most interesting parts was exposed when he demonstrated saving the online reports to Excel: the Excel version can be converted to contain formulas that point back to the original data source, which are actually pointers to web services. The reporting implication is that you can save the Excel report, then come back later and update it with point-in-time data simply by refreshing the data source; even better is that this set of web services is available to any environment, not just Excel, allowing you to build mashups or other applications that access the core transactional data.
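As a rough sketch of what consuming one of those reporting web services outside of Excel might look like (the endpoint, parameters and field names here are invented for illustration):

```javascript
// Hypothetical sketch: refresh point-in-time report data from the same web services
// that back the Excel formulas. The endpoint, parameters and field names are invented.
function refreshReport(reportId, asOfDate, callback) {
  var url = "https://byd.example.com/reports/" + reportId + "?asOf=" + asOfDate;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText)); // rows ready to feed a grid, chart or mashup
    }
  };
  xhr.send();
}

// Usage: pull the open-orders report as of a given date; logging stands in for a renderer
refreshReport("open-orders", "2008-10-31", function (rows) {
  console.log(rows.length + " rows as of the requested date");
});
```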

This sort of hybrid model for SaaS is nice, where you can do everything in the on-demand environment, but also be able to download some desktop tools or build mashups that link directly to the online data.

Enterprise Mashups webinar

SnapLogic sponsored a webinar today featuring (Michael) Coté of Redmonk, entitled Enterprise Mashups, RIAs and Cloud Computing.

Coté started out talking about why we want to do all of this, namely, the goal of moving to a connected model, where value is increased through increased connectivity. Mashups bring together information from multiple sources, thereby connecting those systems and services in order to add value.

He considers three aspects of applications:

  • User interface, either a web app or an RIA
  • Business logic in the application code
  • Infrastructure, either on-premise or in the cloud

RIAs (rich internet applications) are user interfaces that mimic rich desktop user interfaces, but are the front-end for internet-connected applications, built using tools such as AJAX, Flex/AIR and Silverlight. These are typically applications (business or leisure) rather than single-purpose functionality such as search.

Moving to the bottom of the stack, cloud computing offers a faster, cheaper and more scalable infrastructure, particularly for starting up a new service when the potential load is unknown.

One of the challenges is in integrating systems when you move to the cloud: cloud to cloud and cloud to on-premise, where the latter has the challenge of integrating across the firewall. Data integration is critical so that you don’t end up with silos of information locked into your applications’ data stores.

Chris Marino of SnapLogic followed up with a few slides about their view of application integration, moving from the bad old days of point-to-point custom systems integration to a utopia that uses SnapLogic as a hub to integrate applications using web standards (HTTP, REST, XML). SnapLogic connectors and servers can be combined for all sorts of data connectivity from cloud to cloud, cloud to client, and cloud to on-premise systems. They provide a graphical tool for designing a data flow between sources, including transformations, or for exposing an enterprise data source directly in a browser for mashing up using other tools. He moved on to a lightning-quick demo showing an interface to Salesforce.com data, allowing the data to be extracted to another system or even just to the browser as input to a mashup.
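To give an idea of the pattern, here’s a hedged sketch of a browser-side mashup pulling two hub-exposed feeds and joining them; the URLs and field names are invented for illustration, not SnapLogic’s actual resource naming:

```javascript
// Hypothetical sketch of the pattern: the hub exposes pipelines as plain HTTP resources,
// so a browser (or any other client) can pull them and mash them up.
function fetchJson(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send();
}

// Join Salesforce opportunities with on-premise order history, keyed by account
fetchJson("http://hub.example.com/feeds/sfdc_opportunities?format=json", function (opps) {
  fetchJson("http://hub.example.com/feeds/erp_orders?format=json", function (orders) {
    var ordersByAccount = {};
    for (var i = 0; i < orders.length; i++) {
      ordersByAccount[orders[i].accountId] = orders[i];
    }
    for (var j = 0; j < opps.length; j++) {
      var order = ordersByAccount[opps[j].accountId] || { total: 0 };
      // In a real mashup this would feed a grid or chart; logging keeps the sketch self-contained
      console.log(opps[j].name + ": pipeline " + opps[j].amount + ", historical orders " + order.total);
    }
  });
});
```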

SnapLogic has both a free, open source community edition and a fully-supported enterprise edition available by paid subscription.

Enterprise 2.0: RSS and Business Processes at Wallem

For the last breakout today, I went to the session featuring Patrick Slesinger of Wallem (a shipping company). I don’t know anything about shipping, but their requirements aren’t different from those of a lot of other organizations: customer involvement in and transparency into business processes, internal decision support, and long-term accessibility to event data. They needed to make their processes mobile and make the right information available anywhere, without using email.

Their solution, using K2 for BPM, Attensa for RSS and SharePoint as a content repository, integrates process-driven applications with managed RSS. The solution uses K2 to manage processes, then pushes the process event log (or some filtered version of it) over to the Attensa feed server, where it can be served up to a web interface or delivered by email. The advantage of using a feed server for this is that it provides complete device/platform independence for consuming the event feed, as well as providing multiple formats for consumption.

An enterprise RSS feed server provides things such as integration with your LDAP directory for defining users and groups, and allows for easy assignment of specific feeds to users and groups. Users can have feeds assigned to them, which they can’t unassign, but they can use the same tool for reading other feeds as well. They can read a specific feed item on one platform, and it’s marked as read everywhere (as you would expect). The system also tracks who reads which feeds, when and for how long, making it possible to track what information is actually being used, and ensure that users are accessing the relevant information before making decisions.

Slesinger showed a demo of the system, showing how tasks that are assigned to a user show up in their feed reader; clicking on the details in the feed item pulls them into a web form to complete the task. There are many BPM products now that allow a feed to be created for any user’s inbox or other queues; his earlier architecture diagram led me to believe that they’re not doing that (if K2 is even capable of it), but extracting events from the K2 event log instead. In the example shown, the captain of a ship was actually participating in a workflow where he received task notification through a feed reader rather than in email or directly through the BPM product’s inbox.
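For a sense of what that looks like outside of a dedicated feed reader, here’s a hedged sketch of consuming such a task feed; the feed URL and entry fields are invented for illustration:

```javascript
// Hypothetical sketch: poll an Atom feed of process events published by the feed server
// and turn each entry into a clickable task link. The feed URL and fields are invented.
function loadTaskFeed(feedUrl, onTask) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", feedUrl, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200 && xhr.responseXML) {
      var entries = xhr.responseXML.getElementsByTagName("entry");
      for (var i = 0; i < entries.length; i++) {
        var title = entries[i].getElementsByTagName("title")[0].textContent;
        var link = entries[i].getElementsByTagName("link")[0].getAttribute("href");
        onTask(title, link); // link points back to the web form that completes the task
      }
    }
  };
  xhr.send();
}

// Usage: list the tasks assigned to one user
loadTaskFeed("https://feeds.example.com/k2-events/captain-smith.atom", function (title, link) {
  console.log(title + ": " + link);
});
```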

The results:

  • Increased visibility into systems and information sources
  • Mobile connected process and feedback loops
  • Alignment of information and process creating knowledge and value
  • Reduced email clutter
  • Understanding what information is required: who, what, when, where, why

Their customers — the ships’ owners — saw huge savings as well: by using timely information and appropriate processes to decide where ships take on fuel and oil, customers save about $400M annually. They’re looking to do more with this in terms of analytics, search, and expanding the mobile RSS capabilities.

I’ve been blogging for a couple of years about how RSS and BPM could work together, and many of the vendors have integrated the functionality, but this is the first real case study that I’ve seen of the two working together on this scale.

Enterprise 2.0: Enterprise Mashups Panel

David Berlind hosted a panel on enterprise mashups, with Michalene Todd of Serena, Nicole Carrier of IBM, Lauren Cooney of Microsoft (recently of IBM) and Charlotte Goldsbery of Denodo. I was supposed to moderate this panel, but when the vendors started treating it like a sponsored panel by switching out participants, and the conference organizers refused to kick in for any of my expenses (in an outrageously biased policy where they pay some speakers’ expenses but not others depending on who you complain to), I decided that it wasn’t worth the hassle and bowed out. David’s a great moderator and knows a lot about mashups, but ultimately, I think that he allowed this panel to be hijacked by the vendors, with the exception of Lauren, who speaks her own mind rather than the Microsoft party line. Serena totally screwed up on this one by bumping Kelly Shaw off the panel — a panel that’s described as being full of “girl uber-geeks” — and replacing her with a non-technical corporate marketing person who was out of her depth, and Denodo didn’t do much better by putting in a self-described salesperson.

There was an interesting discussion about how data is exposed to be consumed by mashups, e.g., ATOM/RSS, and the implications with respect to the security of the underlying data, the ability of mashup platforms to consume that data, and how to appropriately encapsulate data so that a non-technical person creating a mashup can’t do evil things to the underlying data source, like doing a search on a non-indexed field in a large database table. You need to consider the interfaces for accessing the data and services: SOAP, RESTful services, web services, etc.

Realistically, business users still can’t do mashups, in spite of what the vendors tell you: there’s still just too much technical stuff that they need to know in order to do mashups. Although it’s easy to drag and drop things within a graphical environment, that’s not the issue: it’s understanding the data sources and their interactions that’s critical. The real target for many of the mashup platforms, as I’ve stated many times before, is the semi-technical types within business units who are now creating end-user computing applications using Excel, Access and other readily-available tools. I don’t think that’s anything to be ashamed of, and striving for the goal of allowing any business user to do mashups is unrealistic.

I was at a client site recently, and of all the claims adjusters and their managers who I talked with there, I can’t imagine that a single one of them would be inclined to even try to create a mashup or — without intending any insult to them in any way — have the skills to do so. Likely the closest that business users will come to building mashups will be configuring their own personalized portal within an existing framework, similar to iGoogle; a proper mashup framework may also allow the portal widgets/gadgets to interact, such as using selections in one widget as a filter for another on the same page. A lot of the good business applications, the things that are now being handled by MS-Office-based end-user applications, are spreadsheet-like in nature; data visualization is a critical part of mashups, but there’s rarely a Google map involved.

Another issue is whether mashups are ready for prime time: are they really intended to be deployed as production applications, or are they just an easy-to-use prototyping environment? What about underlying data sources that aren’t under your control (like Google Maps) in terms of SLAs and fault tolerance? Although internal systems can also have failures, at least you have some degree of control over your own IT resources in terms of high availability of applications and their data sources, and any critical external services that you use — whether in a mashup or any other type of application — have to come from a company with whom you can nail down a believable SLA.

JackBe Enterprise Mashlets

The slide deck said “Proprietary & Confidential” but I was assured by the presenters that I was welcome to blog about JackBe’s webinar on enterprise mashlets. They’ve done a number of webinars in the past that are available for replay, and also have several videos available at JackBe TV (which would be great if I were able to subscribe to it in iTunes).

Today’s presenters are Deepak Alur (VP Engineering) and Kishore Subramanian (“Chief Electrician”), and Deepak started by covering widgets and mashups, and how JackBe advanced those concepts to what they call mashlets: a platform for enterprise mashup widgets. We’re all inundated with widgets these days — everything from badges that we add to our website sidebars to our own customizable dashboard of widgets such as iGoogle — but many of the consumer-oriented widgets provide access to only a single source of information and allow a minimum of customization. They’re useful points of data visualization that can be easily assembled into a portal environment, but typically don’t interoperate and may be display-only. Enterprise widgets have to have a bit more than that: they need to live within the enterprise stack with respect to security, access to enterprise services and data, and proper IT management.

A mashup, on the other hand, integrates content from more than one source, but has often been too technical for a user to create. Mashups are gaining acceptance within enterprises, since they provide lightweight methods and platforms for creating situational applications that can be deployed quickly, with very little development effort.

There are a number of reasons to consider widgets and mashups together, since they share a number of characteristics — using building blocks to quickly assemble new web applications — which drove JackBe to create mashlets. In their definition, mashlets are user-oriented micro-applications that are secure and governed for enterprise use, providing the visualization, or “face”, of a mashup to be embedded in a web page. Unlike simple widgets, they’re context-sensitive and dynamic, allowing multiple mashlets on a single page to interact. Comparing widgets and mashlets on a number of factors:

Factors               | Widgets     | Mashlets
----------------------|-------------|---------------
Consumer/Enterprise   | Consumer    | Both
Novelty/Business      | Depends     | Business
Display/Input         | Display     | Both
User/Programmer       | Programmer  | Both, governed
Visual/Non-Visual     | Visual      | Both
Client/Server side    | Client side | Both
Web services/data     | Programmer  | Plug and play
Secure                | Depends     | Enterprise
Ability to embed      | Yes         | Enterprise
Managed               | No          | Enterprise
Shareable             | Ecosystem   | Enterprise

We saw a number of quick demos of mashlets created in JackBe’s Presto platform. There are some nice built-in features for the mashlets, for example, exposing the code to embed a mashlet within another page (much like what YouTube gives you to embed a video in a web page), and the code to embed it within a MediaWiki wiki, as well as allowing them to run as standalone pages. We saw an example of a stock trading page with multiple mashlets, where entering a trade in one mashlet caused data in the portfolio positions mashlets to update automatically.

Presto is compatible with portal standards, so can be embedded within a standards-based environment such as Oracle Portal, or in environments such as Netvibes and iGoogle.

All of the early examples showed mashlets that had been created by developers, but we then looked at what’s required to actually create a mashlet. This is done in their visual composition tool, Wires (hence the “Chief Electrician” title), where you can drag services onto the workspace and connect them up to create a mashup — visually, somewhat similar to Yahoo Pipes — and save the results as a service that can be published and made available for consumption. The services can be run at any point to check the output, making it easy to debug as you go along. Once that’s done, a mashlet can be created from that mashup service by specifying the visualization form, e.g., a specific chart type, or a data grid. Like many über-techies, the JackBe guys casually stated that this could be done by “end users”. Um, I don’t think so. Or, at least, not most of the end users that I see in my day-to-day practice. But it is pretty easy for anyone with a bit of a technical background or inclination.

Presto appears to also act as a repository/directory for the mashup services and mashlets, serving these up to whatever pages are consuming them. Mashlets can be hosted on any web server, and once delivered to the browser, they live in the browser until the session ends, communicating with the mashup server via their PrestoConnect connector.

There are a few key differentiators for JackBe’s Presto mashlets:

  • Enterprise security for authentication and authorization
  • Inter-mashlet publish/subscribe to allow mashlets to exchange information
  • Consumption of a wide range of data and services sources
  • UI framework independence

This was a full hour with not a lot of time for Q&A; I look forward to seeing more of this at the Enterprise 2.0 conference in Boston in a few weeks.

Outsourcing the intranet

I’ve told a lot of people about Avenue A|Razorfish and their use of MediaWiki as their intranet platform (discussed here), and there are a lot of people who are downright uncomfortable with the idea of any sort of non-standard intranet platform, such as allowing anyone in the company to edit any page on the intranet, or contribute content to the home page via tagging and feeds.

Imagine, then, how freaked out those people would be to have Facebook as their intranet.

Andrew McAfee discusses a prototype of a Facebook application that he’s seen that provides a secure enterprise overlay for Facebook, allowing for easy but secure social networking within the organization. According to WorkLight, the creators of the application:

WorkBook combines all the capabilities of Facebook with all the controls of a corporate environment, including integration with existing enterprise security services and information sources. With WorkBook, employees can find and stay in touch with corporate colleagues, publish company-related news, create bookmarks to enterprise application data and securely share the bookmarks with authorized colleagues, update on status change and get general company news.

This sort of interaction is critical for any organization, and once you get past a certain size or start to spread geographically, you can’t do it with a bulletin board and a water cooler any more; however, many companies either build their own (usually badly) or use some of the emerging Enterprise 2.0 software to do something inside their firewall. As Facebook becomes more widely used for business purposes, however, why not leverage a platform that pretty much everyone under the age of 40 is already using (and a few of us over that age)? One company, Serena Software, is already doing this, although they appear to be using the naked Facebook platform, so likely aren’t putting any sensitive information on there, even in invitation-only groups.

Personally, I quite like the idea, although I’m a bit of an anarchist when it comes to corporate organizations.

There’s a lot that would have to happen for Facebook to become a company’s intranet (or even a part of it): primarily sorting out issues of data ownership and export. There are enough people putting confidential data into Salesforce.com and other SaaS platforms that I think we can get past the philosophical question of whether or not to store corporate data outside the firewall; it just needs to be proven to be private, secure and exportable.

I also found an interesting post, coincidentally by an analyst at Serena, discussing how business mashups should be human process centric, which was in response to Keith Harrison-Broninski’s post on mashups and process. Although Facebook isn’t a mashup platform in any real sense, one thing that should be considered in using Facebook as a company’s intranet is how much process can — or should — be built into that. You really can’t do a full intranet without some sort of processes, and although WorkBook is targeted only at the social networking part of the intranet, it could easily become the preferred intranet user interface if it were adopted for that purpose.

Update: Facebook launched Friends Lists today, that is, the ability to group your contacts into different lists that can then be used for messaging and invitations. Although it doesn’t (yet) include the ability to assign different privacy settings for each group, it’s a big step on the way to more of a business platform. LinkedIn, you better get that IPO done soon…

BPM Think Tank Day 3: Enterprise 2.0/BPM Mashups Roundtable

I facilitated one of the last roundtables of the conference, on Enterprise 2.0 and BPM mashups.

Mashups (considered a part of Enterprise 2.0) are lightweight integration of web-based services and data, often in ways that the service providers never intended them to be used; personally, I think that as mashup techniques get easier, mashups will become the technology of choice for what’s referred to as “end-user computing”, that is, all the stuff that is created within business units (typically now using Excel or Access) because it’s either too small for IT to take on as a project or they can’t turn it around in a timely manner. I see software-as-a-service BPM and other services as having an impact on the ability to do mashups, since these platforms are often designed with a bit more openness in mind.

I’ve looked a lot in the past at Enterprise 2.0 and BPM, and the features that are (or should be) creeping into BPM under the influence of Enterprise 2.0: RSS, tagging, SaaS, mashups, collaboration, and all sorts of user-created content in general. There are a lot of challenges around this, many of them cultural, since Enterprise 2.0 decentralizes control of IT assets and requires a certain level of user participation.

We spent most of the session talking about BPM mashups, not Enterprise 2.0 in general. At one level, a BPMS can be considered to be a mashup platform, given the right business services available for assembly.

BPM mashups can take several forms:

  • Lightweight assemblies of subprocesses and services
  • User-facing information at a step in the process, e.g., Google Maps mashed up with BPM data and presented to a user in a form in order to complete a task (see the sketch following this list)
  • BPM as a component within a portal, possibly assembled by a user
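Here’s a hedged sketch of that second form, a task form that mashes BPM case data with a map; the /workitems endpoint and the renderMap() helper are invented stand-ins for a real work item API and mapping widget:

```javascript
// Hypothetical sketch of a task-step mashup: pull the work item from the BPMS, show its
// fields in the form, and plot the customer address on a map. The /workitems endpoint
// and the renderMap() mapping widget are invented stand-ins.
function openTaskForm(workItemId) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/bpms/api/workitems/" + workItemId, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var item = JSON.parse(xhr.responseText);
      document.getElementById("customer").value = item.customerName;
      document.getElementById("claim").value = item.claimNumber;
      if (typeof renderMap === "function") {
        renderMap("map-panel", item.customerAddress); // hook for whatever mapping widget you use
      }
    }
  };
  xhr.send();
}
```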

Issues around mashup adoption include IT not trusting something that is user-created, and business analysts not understanding the concept of mashups as well as not yet having easy enough tools to do mashups. There are also issues around discoverability of services (as I discussed the previous week in a Mashup Camp session) and the use of internal versus external services, where both types require some sort of SLA to be included in any sort of production mashup.

By lowering the barrier to entry, mashups can play an important role as application prototypes, or emergent applications that IT wouldn’t have thought to build for the business; IT can learn from what the business creates for itself in order to create more structured applications and processes. This is similar to how a folksonomy can gradually become a taxonomy: allow the users to do it themselves, then observe and detect the patterns. My favourite phrase that someone used at this point was to “intelligently stumble upon the future” and the whole idea of unintended consequences of mashups, although there was some discussion as to whether it was closer to serendipity or Frankenstein. Along this line, we talked about how to keep bad things from happening in mashups, and agreed that the services and data to be mashed up had to be controlled in some way (by IT) so that, for example, someone couldn’t do an unindexed full text search on a multi-million record database.

Without a doubt, mashups enable agility in application development, and BPM stands to benefit from enabling all types of BPM mashups.

There was some discussion around whether business users/analysts were asking for this, and whether they really wanted a full mashup capability, or just some parameterized configuration. I think that they don’t even know what’s possible through mashups, and if they did, they’d want it.

Take the Ajax challenge

I had a chance to talk to Kevin Hakman of TIBCO late last week about their Ajax Challenge (Kevin co-founded General Interface, which was acquired by TIBCO a couple of years back), the goal of which is to build the world’s largest mashup. You have to use General Interface to build it, but more interestingly, you have to use PageBus, a JavaScript client-side message bus that TIBCO just contributed to the OpenAjax Alliance as open source to become part of the OpenAjax Hub 1.0 installation.

The contest is splashy (win an oversized TV! win a video iPod!) but PageBus is the real news here: it provides a message bus for mashups in an attempt to eliminate the spaghetti mess of point-to-point integrations that we’re already starting to see emerge. In the enterprise world, this is why ESBs have become an essential part of any sizable application integration effort: without a message bus, you’re creating a unique integration between each pair of applications. Okay when you have two applications, but not when you have 10. [To be fair, you don’t usually have every application interact with every other application in a complex integration: each one may only interact with a couple of others, but that just shifts the pain point rather than eliminating it.]

Getting back to mashups and the OpenAjax Hub, PageBus exposes the basic functions of messaging — publish, subscribe, unsubscribe — all in less than 5k of JavaScript, so that multiple Ajax components on an HTML page can share data using these standard methods. This allows the development of a mashup to be more easily split up between multiple developers, since each can focus on their specific component and not on the interface between components; it will also allow for easier “no programming required” assembly of components within a PageBus-enabled mashup framework.
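To show the shape of the pattern without claiming to reproduce the actual PageBus API (whose exact signatures I haven’t verified), here’s a minimal page-level publish/subscribe bus of my own:

```javascript
// Minimal sketch of the publish/subscribe pattern that a page-level message bus provides;
// this is illustrative only, not the actual PageBus source or API.
var TinyBus = (function () {
  var subscribers = {};   // topic -> array of callbacks
  return {
    subscribe: function (topic, callback) {
      (subscribers[topic] = subscribers[topic] || []).push(callback);
      return { topic: topic, callback: callback };   // handle used for unsubscribe
    },
    unsubscribe: function (handle) {
      var list = subscribers[handle.topic] || [];
      var i = list.indexOf(handle.callback);
      if (i >= 0) { list.splice(i, 1); }
    },
    publish: function (topic, message) {
      var list = subscribers[topic] || [];
      for (var i = 0; i < list.length; i++) { list[i](message); }
    }
  };
})();

// One component publishes a selected task; another updates itself without knowing who sent it
var sub = TinyBus.subscribe("tasklist.selected", function (task) {
  document.getElementById("task-detail").innerHTML = task.summary;
});
TinyBus.publish("tasklist.selected", { id: 42, summary: "Review claim #1234" });
```

The point is the decoupling: the task list component publishes on a topic, and the detail component reacts, without either one knowing about the other.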

This is a pretty important step in mashup-land: I’m starting to see a lot of things referred to as mashups that are actually portals, where the components don’t intercommunicate, but the fundamental benefit of mashups is that they are an integration, not just components that happen to coexist on the same page.

TIBCO is apparently already using this in their BPM product for things such as task list publication, which means (I think) that you could create a mashup between your iProcess task list and some other component or data source — a real BPM mashup. Although many vendors are starting to provide RSS feeds of task lists/inboxes (I hope that my past year of nagging about this has had some contribution to those efforts), this is the first truly mashup-enabled BPM environment of which I’m aware.

The full OpenAjax Hub specification is about 4-6 weeks away from release, but the project is already on SourceForge. TIBCO will continue to develop the source and contribute to the open source efforts in the future; their press release about PageBus is here.