BPM Centers of Excellence webinar today

Today (March 18th) at noon Eastern, I’ll be doing a live webinar on BPM centers of excellence that will become part of the Appian-sponsored BPM Basics informational site. You can sign up for the webinar here if you want to listen to it live; the live session will include Q&A from the audience, and the version without Q&A will be available for replay on the BPM Basics site.

Gartner warns against shelfware-as-a-service

Gartner’s had a good webinar series lately, including one last month with Alexa Bona on software licensing and pricing (the link goes to a “roll your own webinar” download, with the slides in PDF and the audio in MP3 separately), as part of their series on IT and the economy. As enterprises look to tighten their belts, software licenses are one place to do that, both on-premise and software-as-a-service, but you need flexible terms and conditions in your software contract in order to negotiate a reduction in fees, particularly if there are high switching costs to move to another platform.

For on-premise enterprise software, keep in mind that you don’t own the software, you just have a license to use it. There’s no secondary market for enterprise software: you can’t sell off your Oracle or SAP licenses if you don’t need them any more. Even worse, in many cases, maintenance is from a single source: the original vendor. It’s not that easy to walk away from enterprise software, however, even if you do find a suitable replacement: you’ve probably spent 3-7 times the cost of the licenses on non-reusable external services (customization, training, ongoing services, maintenance), plus the time spent by internal resources and the commitment to build mindshare within the company to support the product. In many cases, changing vendors is not an option and, unfortunately, the vendors know that.

There are a lot of factors in software licensing that can come under dispute:

  • Oracle’s (and many other vendors’) definition of “named user” includes non-human processes that interact with the database, not just the people who are running applications. This became a huge issue a few years back when enterprise systems started being connected in some way to the internet: is the internet gateway process a single user, or do all potential users have to have individual licenses?
  • Virtualization and multi-core issues need to be addressed: hardware partitioning is often not adequately covered in license contracts, and you need to ensure that you’re paying for what you’re actually using, not the maximum potential capacity of the underlying hardware.
  • Make sure that you have the right to change the platform (including hardware or underlying database) without onerous fees.
  • Watch out for license minimums embedded within the contract, or cases where upgrading to a larger server will cost you more even if you don’t have any more users. Minimums are for small organizations that barely meet discounting thresholds, not large enterprises. Vendors should not be actively promoting shelfware by enforcing minimums.

Maintenance fees are also on the increase, since vendors are very reliant on the revenue generated from maintenance in the face of decreasing software sales. Customers who have older, stable versions of a product and don’t generate a lot of support issues feel that costs should be decreasing, especially since many vendors are offshoring support so that it is cheaper for the vendor to supply. Of course, it’s not about what the maintenance actually costs, it’s about what the market will bear. Gartner suggests negotiating maintenance caps, the ability to reduce your maintenance if you use fewer licenses, and the right to switch to a cheaper maintenance offering. Document what you’re entitled to as part of your maintenance, rather than relying on a link to the vendor’s “current maintenance offering”, to ensure that they can’t decrease your benefits. Watch out for what is covered by maintenance upgrades: sometimes the vendor will release what they call a new product, but what the customer sees as just a functional upgrade to their existing product. To get around that, you can try licensing the generic functionality rather than specific products by name (e.g., stating “word processing functionality” rather than “Microsoft Word”).

When polled, 64% of the audience said that they have been approached by a vendor to do a software audit in the past 12 months. In some cases, vendors may be doing this in order to recover license fees if they have lost a sale to the customer and feel that they might find them out of compliance. Be sure to negotiate how the audit is conducted, who pays for it, and what price you pay for additional licenses if you are found to be out of compliance. Many software vendors are finding it a convenient time to conduct license audits in order to bolster revenues, and for the first time ever, I’ve heard radio advertisements urging people to blow the whistle on their employer if they are aware of pirated or misused software licenses – a sort of crowd-sourced software audit.

Software-as-a-service licensing has its pitfalls as well, and they’re quite different from on-premise pricing issues. Many SaaS contracts have minimums or do not allow for reductions in volumes, leading to shelfware-as-a-service – consider it a new business model for wasting your money on software license fees. There is aggressive discounting going on right now – Gartner is seeing some deals at $70/user/month for enterprise-class software – but there may be much higher fees on renewal (when you’re hooked). There are also some often-overlooked fees in SaaS contracts: storage beyond the minimum included in the service (often charged at a rate far above cloud storage on the open market), additional costs for a development and test sandbox, premium maintenance that is more aligned with typical on-premise enterprise software support, non-corporate use (e.g., customers or partners accessing the system), integration, and termination fees, including the right to get your data out of their system. Make sure that you know what the SaaS provider’s privacy and security policies are, especially related to the location of data storage. Most of the Canadian financial services firms that I deal with, for example, will not allow their data to be stored in the United States, and many will not allow it to be stored outside Canada.

Furthermore, SaaS vendor SLAs will only cover their uptime, not your connectivity to them, so there are different points of failure than you would have for on-premise software. You can hardly blame the vendor if your internet connectivity fails, but you need to consider all of the points of failure and establish appropriate SLAs for them.

Bona finished up with some very funny (but true) reinterpretations of clauses in vendor contracts, for example:

  • What the vendor means: “We are going to send you software that you are not licensed to use. If you use this software in error, you will be out of compliance with this contract, and woe to you if we audit.”
  • What they actually wrote: “Licensee shall not access or use any portion of the software not expressly licensed and paid for by the licensee.”
  • What you probably want to change it to: “Licensor shall not ship any software to licensee that licensee is not authorized to use.”

The summary of all this is that it’s not a task for amateurs. Unless you want to just let the vendor have their way with you on a large contract, you should consider engaging professionals to help out with this. Gartner provides this type of service, of course, but there are also high-quality independents (mostly former analysts) such as Vinnie Mirchandani.

Lombardi’s user conference goes online

In an amazing reflection of the economic times, Lombardi announced today that their user conference, Driven, will not be taking place in Austin as originally planned, but will be an online conference. From their email update:

For the last few weeks, we have been talking to customers all over the world about Driven – our annual user conference. The feedback has been consistent – people want to come but many companies are under travel restrictions.

So, we have decided to change the format for Driven this year. Instead of you having to come to Austin, we are going to bring Driven to you. Think of it as Driven without the travel. Actually, we are calling it Driven Online.

I assume that this means that the attendance numbers just weren’t shaping up as expected, and they had to make a tough decision to cancel the onsite conference.

It’s still the week of April 20-24th, but the content is severely restricted: a single one-hour webinar each day at 11am Eastern. It’s live, so there will be a Q&A session at the end of each webinar, but this amount of content doesn’t even begin to come close to what would be at a real conference.

It will be interesting to see what other conferences end up cancelled this year; I can’t believe that this will be the only casualty.

Webinars and podcasts

This seems to be my month for webinars and podcasts. Here’s the line-up:

  • I recorded a webinar for SearchSOA a few weeks ago on a pragmatic approach to using SOA and BPM together, particularly in the area of service discovery and specification. Unfortunately, I can’t find it on their site, so I’m not sure if it’s been published yet. Keep looking.
  • On March 18th, I’ll be doing a live webinar on BPM centers of excellence that will become part of the Appian-sponsored BPM Basics informational site. You can sign up for the webinar here if you want to listen to it live; the live session will include Q&A from the audience, and the version without Q&A will be available for replay on the BPM Basics site.
  • That same week, I’ll be recording a podcast with Savvion’s Dr. Ketabchi on BPM in a down economy. There have been a few other webinars on this topic lately, but right now it’s a very popular message and there’s lots to talk about. This will be published on ITO America, which provides broad coverage of technology issues for higher-level technical management.

The fun part of these three is that not only are they three completely different topics, they’re targeted at three different audiences: the first for developers and other technical people, the second for business and mid-level project team members, and the third for CIOs. Although doing webinars and white papers is a small part of my business, the research, analysis and writing that goes into them really helps to hone my ideas for applicability with my enterprise clients who are implementing BPM.

IBM FileNet P8 BPM V4.5

I’ve had a couple of briefings on the 4.5 release of IBM/FileNet P8 BPM, which was released in November but is likely just starting to hit customer sites, so I thought that it would be good timing for a review post. As a point of disclosure, I worked for FileNet in 2000-1 and have worked with their BPM software extensively in two of my own companies, including my current consulting practice, but I don’t do any work for IBM, only for their customers. That means that I am probably more familiar with their system than with any other BPMS, but they are not compensating me in any way for this post (they don’t even cover analyst/blogger expenses to attend their user conferences, so I don’t attend) nor do they have any editorial control, which means that I will likely manage to say something to annoy IBM management here, as I usually do.

I’ve blogged in the past about the IBM-FileNet acquisition, specifically my comments at the time that the acquisition was announced, at an analyst briefing just after that, and then in a follow-up last June comparing it to the Oracle-BEA acquisition: in brief, I noted the transition in product positioning from a full BPMS product to a document-centric BPMS so as not to compete with WebSphere Process Server. I still think that both IBM and its customers would have been better served by ripping BPM out of the P8 product line and adding it to WebSphere to round out the human-facing capabilities, producing a single BPMS product at IBM. Instead, if a customer wants both human-centric functionality and services orchestration, IBM will be in there selling them two products – each with its own modeling, execution and monitoring environments – rather than one, which is going to be a bit of a hard sell in this economy. They’re working to bring some of that together, but fundamentally it’s still two products to do what many other vendors do with one. There are a few points of integration now – the WebSphere modeler can export FileNet-compliant XPDL, and the WebSphere monitoring tools can monitor the FileNet process engine – and they’ll be doing a bit more cosmetic integration to make it more palatable, but there’s no plan for a unified execution engine. Strangely, the recent Gartner report on BPMS doesn’t bother to distinguish them: it bases its analysis on the combination of WebSphere Dynamic Process Edition and FileNet Active Content Edition, which is a bit bogus (in my opinion).

That being said, the current positioning of FileNet P8 BPM is around “agile ECM”, with active content being a key differentiator. Active content, in the FileNet world, is the ability to capture content events (such as creation and versioning) and trigger activities in response, either launching new process instances in BPM, or making external calls. If you’re proficient with the FileNet BPM design tools, that means that you can create a new process, link it via a workflow subscription to the events occurring on a class of content, and have that process automatically trigger when that event occurs for a document in that class. In my world of back-office transaction processing, where there is still a lot of paper, this could be the creation of a process instance in response to a new scanned document being added to the content repository, all without writing a line of code.
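Even though the product wires all of this up through configuration rather than code, a minimal sketch may help to make the pattern concrete. Note that the names below – ContentEvent, HypotheticalProcessClient, the document class and the workflow – are invented for illustration, not the actual FileNet P8 API:

```python
# Illustrative sketch of the "active content" pattern: a content event (e.g. a
# new scanned document added to a document class) triggers the launch of a
# process instance. All names here are hypothetical stand-ins; in the product
# this wiring is done through a workflow subscription, not hand-written code.

from dataclasses import dataclass

@dataclass
class ContentEvent:
    event_type: str        # e.g. "CREATE" or "CHECKIN"
    document_class: str    # e.g. "ScannedClaimForm"
    document_id: str
    properties: dict       # document metadata captured at scan time

class HypotheticalProcessClient:
    """Stand-in for a process engine client; launch_process is assumed."""
    def launch_process(self, workflow_name: str, parameters: dict) -> str:
        print(f"Launching {workflow_name} with {parameters}")
        return "instance-001"

# Subscription: which document classes map to which workflows.
SUBSCRIPTIONS = {"ScannedClaimForm": "ClaimIntakeProcess"}

def on_content_event(event: ContentEvent, client: HypotheticalProcessClient) -> None:
    """Launch the subscribed workflow when a matching document is created."""
    if event.event_type == "CREATE" and event.document_class in SUBSCRIPTIONS:
        client.launch_process(
            SUBSCRIPTIONS[event.document_class],
            {"documentId": event.document_id, **event.properties},
        )

if __name__ == "__main__":
    on_content_event(
        ContentEvent("CREATE", "ScannedClaimForm", "doc-123", {"policy": "P-456"}),
        HypotheticalProcessClient(),
    )
```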

There’s more to their agile message than active content, however: IBM is also bundling in a new set of BPM widgets and the IBM (Lotus) Mashup Center to allow for much easier creation of user interfaces. This has always been a problem: although the Process Designer will auto-generate a user interface for each step that allows for viewing and updating the parameters exposed at that step, it’s not very pretty. The options were to use the FileNet e-forms product – which required some technical fiddling to integrate – or to create custom interfaces using some other development tools. Although the widgets don’t provide a fully-customizable forms interface, they do provide a way to put together configurable user screens that work well for prototyping and for some lighter-weight/tactical production applications.

I liked what I saw of the widgets, despite the limitations, since I think that it’s a move in the right direction. They use the iWidget specification, which is an open standard created by IBM and used natively in the Mashup Center, and there’s also a wrapper to turn an iWidget into a JSR-168-compliant portlet, with the cross-widget wiring exposed, for use in other environments such as the WebSphere portal product. The BPM widgets are built using the new REST services that wrap the process engine Java API; you can also call the REST services directly from other application development environments. Although the widgets are referred to as “ECM widgets” in the IBM documentation, they all (with the exception of a document viewer widget) provide BPM functionality. There’s a lot more that I saw about the widgets; I might do a separate post just on that for those who are evaluating this product.
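If you’re thinking about calling the REST services directly from your own application rather than through the widgets, the interaction looks roughly like the sketch below. The endpoint path, queue name and JSON fields are hypothetical placeholders rather than the actual P8 BPM REST interface; treat this purely as an illustration of the pattern:

```python
# Sketch of consuming a REST layer over a process engine from a custom client,
# in the spirit of the widgets described above. The base URL, endpoint path and
# JSON shape are hypothetical, not IBM's actual P8 BPM REST API.

import json
import urllib.parse
import urllib.request

BASE_URL = "https://bpm.example.com/api"  # hypothetical REST gateway

def get_work_items(queue: str) -> list[dict]:
    """Fetch the work items in a queue as JSON (hypothetical endpoint)."""
    url = f"{BASE_URL}/queues/{urllib.parse.quote(queue)}/workitems"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # A widget (or any other client) could render these however it likes.
    for item in get_work_items("ClaimsInbox"):
        print(item.get("stepName"), item.get("subject"))
```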

Some partners are also creating widgets for the mashup framework; I can see this as a key way for partners to add value through providing interoperable components rather than monolithic applications, and I would hope to see some of these emerging for free as companies try out this new technology.

There’s no requirement for all-or-nothing with the mashups, either: each step in the process can invoke a different UI from a different source, so that one step might have a custom application, another an e-form and another a mashup. As far as the process is concerned, that’s just what is invoked at the step to manage the user interaction, not an integral part of the process.

One issue is that WebSphere Business Space will replace Mashup Center as the mashup environment included with P8 BPM, although it’s not clear what degree of functional overlap there is, or when to use one versus the other. The Mashup Center appears to be positioned as being for prototyping and tactical situational applications, whereas Business Space is more of an enterprise portal, but it’s not clear that you couldn’t build an enterprise-strength application using the Mashup Center (unless you’re afraid that IT will laugh at you for using the words “mashup” and “enterprise” in the same sentence). Business Space supports the ECM widgets, but would require a few “minor functional changes” (IBM’s words) to get things working.

On the process modeling side, the Process Designer now has two modes: diagram mode for business analysts, and design mode for technical analysts, with user access rights determining which modes a specific user can access. In diagram mode, the user draws the process map, adds the description and instructions at each step, and a description for each route between steps. Design mode is the full “classic” view, with all parameters visible, where a developer can take the description entered by the business analyst and map that into parameters, rules, assignments, deadlines and web services calls. However, the Designer is still not BPMN-compliant: if you want BPMN, you can do it in Visio with a BPMN template that they provide, then import the results into the Designer, but it’s a one-way trip. They do plan to leverage some of what’s been done with BPMN in the WebSphere process modeler to bring that into the P8 BPM designer, but there’s nothing concrete to talk about yet.

There’s also some new user roles functionality built into the designer (and runtime, obviously) that is based on the Business Process Framework, an add-on product to BPM used for creating case management processes. I suspect that we’ll see more of the useful bits of BPF integrated into the core BPM product in coming releases, to the point where it won’t exist as a separate product, although no one at IBM said that.

Simulation is now web-based and integrated within the process designer, rather than being a separate application: one of the tabs in the design view of a process is Simulation, which allows durations for steps and weights (%) for routes to be entered. Configuration and administration are also now done within the process designer rather than in a separate configuration console.
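For anyone curious about what a simulation like this is actually computing, here’s a rough sketch of the idea: give each step a duration and each route a weight, then run many simulated instances to estimate the expected cycle time. The process map below is invented for illustration and has nothing to do with the product’s own simulation engine:

```python
# Toy Monte Carlo process simulation: step durations plus weighted routes give
# an estimate of expected cycle time. The map below is invented for illustration.

import random

# step -> (duration in minutes, [(next_step, probability), ...])
PROCESS = {
    "Review":  (10, [("Approve", 0.7), ("Rework", 0.3)]),
    "Rework":  (25, [("Review", 1.0)]),
    "Approve": (5,  [("Done", 1.0)]),
    "Done":    (0,  []),
}

def simulate_instance(start: str = "Review") -> float:
    """Walk one instance through the map, returning its total duration."""
    step, total = start, 0.0
    while step != "Done":
        duration, routes = PROCESS[step]
        total += duration
        r, cumulative = random.random(), 0.0
        for next_step, p in routes:
            cumulative += p
            if r <= cumulative:
                step = next_step
                break
        else:
            step = routes[-1][0]  # guard against floating-point rounding
    return total

if __name__ == "__main__":
    runs = [simulate_instance() for _ in range(10_000)]
    print(f"Expected cycle time: {sum(runs) / len(runs):.1f} minutes")
```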

For business rules, ILOG (a recent IBM acquisition) is being integrated into the WebSphere suite; since it provides a web services interface, it can easily be called at a step in a BPM process for adding business rules more complex than can be handled by the built-in expression engine in BPM.
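The calling pattern at the step is simple enough: post the case data to the decision service, then route on the result. The endpoint and payload below are hypothetical stand-ins rather than the actual ILOG interface, but they show the shape of the exchange:

```python
# Hypothetical decision-service call from a process step; the URL, payload and
# response field are invented for illustration and are not the ILOG interface.

import json
import urllib.request

RULES_URL = "https://rules.example.com/decisions/claim-approval"  # placeholder

def evaluate_claim(claim: dict) -> str:
    """POST the case data to the decision service and return its decision code."""
    req = urllib.request.Request(
        RULES_URL,
        data=json.dumps(claim).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["decision"]

# In the process map, the step's outgoing routes would then key off the result,
# e.g. "AutoApprove" vs. "ManualReview" depending on evaluate_claim(case_data).
```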

The BAM product integrated into the P8 BPM product line is also now IBM: originally it was Celequest, which was acquired by Cognos, which was in turn acquired by IBM; the branding on the last set of product slides that I saw is “Cognos Now”.

IBM is starting to push Lotus Forms with BPM, although it is not yet integrated to the same degree as FileNet eForms, which can replace the user interface at a step in a process. I can’t believe that IBM will maintain two e-forms products in the long run, but they can’t really cut off FileNet eForms until they complete that integration.

Overall, FileNet’s legacy of content and process together has grown into a fully-featured document-centric BPM capability. Unfortunately, they positioned themselves as a pure-play BPMS just long enough to get some customers on that bandwagon, leaving those customers with some uncomfortable migration decisions in their future.

FASTforward09: David White, Kusiri

David White of Kusiri finished up the afternoon of breakout sessions with a presentation on using customer data to drive business results. He started with some statistics about just how much data flows through businesses: 85% of all data is managed by enterprises, and that’s 85% of a very large number. Businesses, however, need actionable information, not just data, so we start to apply technologies such as search and business intelligence to explore and make sense of the data.

Business intelligence, however, hasn’t delivered the goods, particularly in the area of unstructured content such as documents: BI typically relies on structured (database) data within the firewall, and that doesn’t provide a complete view of things. Traditional search – represented by the “search box” (I think that the presenters are not allowed to say the G-word) – provides too many irrelevant results, hence is also not that useful. But just like Goldilocks, we have a solution that’s just right: search applications, that is, search-powered applications that search across internal and external data, both structured and unstructured, and guide you to actionable information through navigation.

He showed us some screen snapshots from a search application that they have built, but they were difficult to see and not interactive, so not very compelling as demos go. Overall, it’s a portal-like dashboard application where the widgets in the portal are actually the results of searches. From here, you can click through on a line item to drill down into the information (which, again, uses search behind the scenes) to see more detailed information from a variety of sources, as well as additional tools and functionality for taking action on the data or further analyzing it. There’s a lot of functionality here, presented in the sort of dashboard/drill-down visual analysis environment that you might expect to see from a BI system, but accessing information from sources that you’d never have access to in your BI system.

White’s prediction is that in five years, typing criteria into a search box will appear hopelessly outdated: search will just be implicit in the applications that we use to access information. Both the sources of data and the questions that you’re going to ask will change, and traditional BI and search methods can’t handle that; instead, you’ll be using a new generation of search applications that allow you to traverse the data universe.

FASTforward09: Auli Ellä, Orion

Time out for a video interview (which will appear on the FASTforward blog sometime today or tomorrow), then back to the business productivity track to hear Auli Ellä of Orion, a Finnish pharmaceutical company, discuss their enterprise search implementation. As a research-based company, Orion needs its internal users to be able to find information both internally – in a Documentum document management system, eRooms and the intranet – and externally.

As she put it, it’s not about searching, it’s about finding. Prior to implementing FAST, users didn’t know which system to look in or what metadata to use, and failed searches (where they didn’t find what they needed) took an average of an hour each, which tells you how they cost-justified their enterprise search project.

They started with a pilot enterprise search project that used a small selection of internal and external information sources accessed by a select group of R&D and business users, and used that to prove their business case and gain management acceptance.

She outlined two major benefits that they’re seeing from enterprise search: it supports decision-making, and it supports innovation. It saves them a lot of time spent searching internal and external resources, but also adds value to search results through increased accuracy and reliability, and the ability to filter, categorize and drill down into the results. Information is re-used in place, not copied between systems, using the existing metadata from the source systems both for indexing and for filtering results. This provides easier and faster search that is at least as reliable as a single-source search, and although search results are federated, the access rights of the original information source are respected, such that a user can’t see search results for content that they would not have access to in the source system. Users can also create and save searches, making it even faster to locate frequently-accessed information.
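That security-trimmed federation is worth spelling out, since it’s the part that makes federated search trustworthy. Here’s a minimal sketch of the behaviour; the connectors, groups and ACL checks are invented placeholders, not FAST or Documentum APIs:

```python
# Minimal sketch of federated, security-trimmed search: hits from several
# repositories are merged, but a user only sees results they could open in the
# source system. Connectors and ACLs are invented placeholders.

from dataclasses import dataclass, field

@dataclass
class Hit:
    source: str
    title: str
    score: float
    allowed_groups: set[str] = field(default_factory=set)  # ACL from the source system

def search_documentum(query: str) -> list[Hit]:
    # placeholder connector; a real one would call the repository's own API
    return [Hit("Documentum", f"SOP mentioning {query}", 0.92, {"r-and-d"})]

def search_intranet(query: str) -> list[Hit]:
    return [Hit("Intranet", f"News item about {query}", 0.75, {"all-staff"})]

CONNECTORS = [search_documentum, search_intranet]

def federated_search(query: str, user_groups: set[str]) -> list[Hit]:
    """Merge hits from all sources, drop ones the user can't access, rank by score."""
    hits = [h for connector in CONNECTORS for h in connector(query)]
    visible = [h for h in hits if h.allowed_groups & user_groups]
    return sorted(visible, key=lambda h: h.score, reverse=True)

if __name__ == "__main__":
    for hit in federated_search("compound X", {"all-staff", "r-and-d"}):
        print(f"[{hit.source}] {hit.title} ({hit.score:.2f})")
```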

The user feedback was impressive: in fact, some users thought that it couldn’t be real because it was too fast.

She had some lessons learned for implementing enterprise search:

  • It’s essential to start with a pilot, doing a concrete proof of concept, then implementing iteratively
  • It’s important to have a diverse group of pilot users, not just highly-skilled knowledge workers
  • Keep it simple with the filters and navigation: too much sophistication can just confuse things
  • Be prepared for resistance, and combat that by offering tips and tricks for using the system more effectively

Going back to the goal of supporting decision-making, it’s helping them by allowing users to see all relevant information available on one screen when, for example, gathering materials for a meeting. On the innovation side, they’re starting to see some unexpected results – the kind that drive innovation – become visible more easily by using filters such as content author.

They’re now adding more internal and external information sources, and improving functionality based on feedback from the users for things such as selecting a single information source when the user does know where the content is – an interesting request, when you consider that it means the users would rather search in FAST than in the underlying system because of its speed, ease of use and functionality. Reliability and speed are critical for search, of course: without both of those, the users will reject it.

In the future, they’ll be developing new filters and search profiles, adding structured content, and incorporating search behind the scenes in portals to display relevant information (like the company cafeteria menu, which is currently the top search in their system). As this happens, however, they are aware that this continuous development makes things easier for the user but increases the amount of background work for the information management people.

Her conclusions: if done well, search can become a personalized desktop where discovering, re-using and refining information is business as usual. It’s all about turning information into knowledge and action.

FASTforward09: The Social Enterprise

The business productivity track of the breakouts started with a panel on the social enterprise, and how search is changing the way that people work, featuring Kevin Dana of Accenture and Amit Bansal of Cisco.

The moderator, Nancy Lai of Microsoft, asked them both the same set of questions rather than fostering a real conversation, and the link to search was a bit tenuous at times: sometimes she just seemed to replace the words “social media” with “social search” or “search-based Web 2.0”. The panelists just blithely ignored that, and discussed the adoption of social media within their enterprises.

Interestingly, there was a bit from Amit Bansal about users not knowing when to use which tool – blogs, wikis, etc. – which means that there are multiple information silos being created within their enterprise social media spaces: the perfect application for search, although that didn’t come up.

Something that I noticed at last year’s FASTforward is that this is more than just a user conference: it’s also a social media conference. I don’t know how a search company’s user conference ends up like that, but it makes it interesting. What has changed since last year is that it seems that they’re required to inject the word “search” into everything in order to reinforce the overall message of the conference.

This was a great discussion of social media adoption in the enterprise, including user expectations, return on investment and a number of other very relevant topics for anyone considering bringing these types of tools into their company. Nothing to do with search, but that ultimately didn’t really matter. Maybe Microsoft just needs to admit that not everything has to be exactly on the product message, and that discussions like this add to the overall value of the conference.

FASTforward09: Charlene Li

I’m a fan of Charlene Li (formerly a Forrester analyst on social media, now independent), so was excited to see her kicking off the second half of the morning by talking about the connection between search and social technologies. Her main topic is how companies are being transformed by social technologies, starting with the statement that social media isn’t about the specific sites that are hot right now, but about engaging and forming relationships with your customers. This requires learning from the community of your customers and prospective customers by listening to what they want to do:

  • Instead of one-way marketing, start a conversation with your customers on Facebook, like H&R Block does.
  • Instead of reactive customer service, watch for references to problems with your brand on Twitter, and engage the Twitterers directly, like Comcast does.
  • Instead of a well-hidden email link on your website for customer feedback, create a part of your own site that allows customers to make public suggestions for how to do things better, and allow people to vote on the suggestions, like Starbucks does on MyStarbucks.com.
  • Instead of waiting for prospective employees to come to you, find the new graduates on Facebook, like Ernst and Young does.
  • Instead of faceless press releases, have executives write a public blog, like Beth Israel’s CEO does.

Companies will be transformed from the outside in, by listening to their customers and communities.

She then brought this back to how social technologies are impacting search: search now has to be able to reach all of this information on different platforms and sites, as well as using your social graph to assist with search and prioritizing results.

From an enterprise standpoint, companies are integrating external search into their support site search results in order to enrich the results. Out on the web in general, however, there’s been an explosion of web tools to search social media, and categorize the results: not just plain old Google searches, but Techmeme and Technorati to categorize and rank the results. Delicious, a social bookmarking site, tells you how many other people have tagged a specific link, and allows you to traverse any of those people’s links to see if they have other relevant links for the information that you’re trying to find. Twitter allows you to search for tags that people include in the content of their tweets (or for any other term), and FriendFeed provides you with much richer social search features such as filtering certain people in your social graph into or out of the results.

It’s not just search, it’s a combination of search, your social graph, and active trends on the particular social media site at that moment: searching enhanced with social profiles, even if the profile that you’re using in the context of your search isn’t in your social graph but has sufficient social capital to influence your decisions. However, most of the social searching is still siloed: if you’re looking for your friends’ opinions on a book, you might have to search on Amazon, Facebook, Twitter and a variety of blogs and personal websites. This is no different from the silos of information that we have within enterprises, and federated search could provide the same degree of benefit as we’re seeing with the implementation of enterprise search to aggregate multiple sources and platforms.

We’re just starting to look at search on the social web: I agree with Charlene that social media is having a big impact on search, and search will have a big impact on how we find things in social media.