Salesforce Releases Force.com Visual Process Manager

A couple of months back, there was a private discussion amongst the Enterprise Irregulars about who Salesforce.com was going to buy next, and there was a thought in the back of my mind that it might be a BPM vendor. Since that time, two BPM vendors have been acquired, but not by Salesforce: instead, they launched their own Force.com Visual Process Manager for designing and running processes in the cloud.

However, they seem determined to keep it a secret: first, the Visual Process Manager Demo video on YouTube has been made private (that’s just a screen snapshot of the cached video below), and second, I was unable to get a call back in response to the technical questions that I had during the demo.

For those of you unfamiliar with options for Salesforce application development (as I mostly was before this briefing), Force.com is the platform originally built for customizing the Salesforce CRM offering, which became a necessity for larger customers requiring customization of data, UI and business logic. Customers started using it as a general business application development and delivery platform, and there are now 135,000 custom applications on Force.com, ranging from end-user-created databases and analytics, to sophisticated order management and e-commerce systems that link directly to customers and trading partners, and can update data from other Salesforce applications. In the past four years, they’ve gone from offering transactional applications to entire custom websites, and are now adding collaboration with Chatter.

As you might guess, there are processes embedded in many applications; classic software development might view these as screen flows, that is, the process for a person to move from one screen to another within an application. Visual Process Manager came about for exactly that purpose: customers were building departmental enterprise applications with process (screen flow) logic, but were having to use a lot of code in order to make it happen.

Link between form and process map

Salesforce acquired Informavores for their process design and execution engine, and that became Visual Process Manager. This is primarily human-centric BPM; it’s not intended as a system-centric orchestration platform, since most customers already have extensive middleware for integration, usually on premise and already integrated with their Force.com apps, so they don’t need that capability. That means that although a process step can call a web service or pretty much anything else within the existing Force.com platform, asynchronous web service calls are not supported; those would be expected to be handled by the middleware layer.

The process designer allows you to create a process map, then create a form that is tied to each human-facing step in the process map. Actions are bound to the buttons on the forms, where a form may be a screen for internal use, or a web page for a public user to access. You can also add in automated steps and decisions, as well as calling subprocesses and sending emails. It uses a fairly simple flowchart presentation for the process map, without swimlanes. There isn’t a lot of event handling that I could see, such as handling an external event that cancels an insurance quote process. There’s a process simulator, although that wasn’t demonstrated.
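To make the screen-flow idea concrete, here’s a minimal sketch of how a form-per-step flow might be modeled; every class and method name here is my own invention for illustration, not anything from the actual Force.com API:

```java
import java.util.Map;

// Hypothetical model of a screen flow: each human-facing step shows a form,
// and the button pressed on that form determines the next step. None of
// these names come from Force.com; this just illustrates the form-per-step,
// action-per-button structure described above.
public class ScreenFlow {

    record Step(String formName, Map<String, String> buttonToNextStep) {}

    private final Map<String, Step> steps;

    public ScreenFlow(Map<String, Step> steps) {
        this.steps = steps;
    }

    // Given the current step and the button the user clicked, return the
    // name of the next step, or null if the flow ends here.
    public String next(String currentStep, String button) {
        Step step = steps.get(currentStep);
        return step == null ? null : step.buttonToNextStep().get(button);
    }

    public static void main(String[] args) {
        ScreenFlow quoteFlow = new ScreenFlow(Map.of(
            "enterCustomer", new Step("CustomerForm", Map.of("next", "enterCoverage")),
            "enterCoverage", new Step("CoverageForm",
                Map.of("back", "enterCustomer", "submit", "showQuote")),
            "showQuote", new Step("QuoteForm", Map.of())));
        System.out.println(quoteFlow.next("enterCustomer", "next")); // enterCoverage
    }
}
```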

Visual Process Manager is priced at $50/user/month for Force.com Enterprise and Unlimited Edition customers, although it’s not clear if that’s just for the application developers, or if there’s a runtime licensing component as well.

Similar to what I said about SAP NetWeaver BPM, this isn’t the best BPMS around – in fact, in the case of Force.com, it’s little more than application screen flow – but it doesn’t have to be the best in class: it only has to be the best BPMS for Force.com customers.

Designing compelling customer-facing user experiences #BTF09

For the last breakout of the day before the final keynote, I attended Mike Gualtieri’s session on designing customer-facing user interfaces. He started with the idea that application developers have to be involved in user experience design, and not just leave it to the designers (which is, of course, exactly what we did in the bad old days of development when there was no such thing as a user experience designer). Forrester defines user experience as “users’ perceptions of the usefulness, usability, and desirability of a Web application based upon the sum of all their direct and indirect interactions with it”, and proposes that a great UX is useful, usable and desirable.

User experience impacts how your customers feel about you, and it’s also not just about the interfaces that the customer works with directly: a second-hand interface can also impact the customer experience, as you know if you’ve ever waited ages while a hotel desk clerk clicks their way through a complex interface in order to check you in. A good UX can increase purchases, retain customers and attract more customers; leaving it to chance hurts your conversion rates, alienates customers and increases your development costs due to redesign and redevelopment.

Gualtieri argues that UX design is Lean (although you could argue that only good UX design is Lean), and sets out best practices for good UX design:

  • Become your users, by listening to their needs, observing them in their natural habitat, creating personas, and empathizing with them. Users typically don’t articulate their needs fully or accurately, so it’s not sufficient to just listen to them, but they will demonstrate their needs if you watch how they do their work. This type of user research is not the same as gathering requirements from business stakeholders; remember the Henry Ford quote: “If I had asked people what they wanted, they would have said faster horses”. Forrester uses personas in their own materials – for example, representing an application development manager, complete with picture and name – and I’m seeing some companies such as Global 360 use these for BPMS user interface design.
  • Design first, and understand constraints and potential areas of change as well as the different personas that you discovered in your user research. Keep in mind that you have to serve business goals by serving user goals. Create rough prototypes first, and don’t rush into development or lock into a design too soon. There is some amount of art in UX design, so don’t assume that tools can do it for you. Keep the basic principles in mind: useful, usable and desirable.
  • Trust no one: test your designs. It doesn’t matter how many experts review the designs, there is no better review of some features than testing the UX with a range of intended users. Remember that this is not just about usability, it’s also about usefulness and desirability.
  • Inject UX design into your software development life cycle. Everyone on the team should understand why UX design is important, and be incented to help create great UX. UX design should be part of your development process, and requires someone on the team to own the UX design efforts. You still need to use the same techniques as discussed in the other best practices, not just do the design in isolation from the users, but having it integrated into the development team will improve the overall software design.

He finished with the idea that your development efforts are essentially wasted if the user experience isn’t done right, but that getting the UX right doesn’t have to add a lot of time or money to your project. Good UX design is the mark of a great application development team.

Can packaged applications ever be Lean? #BTF09

Chip Gliedman, George Lawrie and John Rymer participated in a panel on packaged applications and Lean.

Rymer argued that packaged apps can never be Lean, since most are locked down, closed engines where the vendor controls the architecture, they’re expensive and difficult to upgrade, they provide more functions than customers use, they offer a single general UI for all user personas, and each upgrade includes more crap that you don’t need. I tend to be on his side in this argument about some types of apps (as you might guess about someone who used to write code for a living), although I’m also a fan of buy over build because of that elusive promise of a lower TCO.

Gliedman argued the opposite side, pointing out that you just can’t build the level of functionality that a packaged application provides, and there can be data and integration issues once you abandon the wisdom of a single monolithic system that holds all your data and rules. I tend to agree with respect to functionality, such as process modeling: you really don’t want to build your own graphical process modeler, and the alternative is hacking your own process together using naked BPEL or some table-driven kludge. Custom coding also does not guarantee any sort of flexibility, since many changes may require significant development projects (if you write bad code, that is), whereas a packaged app may be more configurable.

It’s never a 100% choice between packaged apps and custom development, however: you will always have some of each, and the key is finding the optimal mix. Lean packaged apps tend to be very fit-to-purpose, but that means that they become more like components or services than apps: I think that the key may be to look at composing apps from these Lean components rather than building Lean from scratch. Of course, that’s just service-oriented architecture, albeit with REST interfaces to SaaS services rather than SOAP interfaces to internal systems.

There are cases where Lean apps are completely sufficient for purpose, and we’re seeing a lot of that in the consumer Web 2.0 space. Consider Gmail as an alternative to an Exchange server (regardless of whether you use Outlook as a desktop client, which you can do with either): less functionality, but for most of us, it’s completely sufficient, with no footprint within the organization. SaaS, however, doesn’t necessarily mean Lean. Also, there are a lot of Lean principles that can be applied to packaged application deployment, even if the app itself isn’t all that Lean: favoring modular applications; using open source; and using standards-based apps that fit into your architecture. Don’t build everything, just the things that provide your competitive differentiation where you can’t really do what you need in a packaged app; for those things where you are doing the same as every other company, suck it up and consider a packaged app, even if it’s bulky.

Clearly, Gliedman is either insane or a secret plant from [insert large enterprise vendor name here], and Rymer is an incurable coder who probably has a ponytail tucked into his shirt collar. 🙂 Nonetheless, an entertaining discussion.

How Can Lean Software Enable You To Better Serve The Business? #BTF09

John Rymer and Dave West presented a breakout session in the application development track on how Lean software development practices can be applied in your business. This obviously had a big focus on Agile, and how it can be used within large organizations. Contrary to what some people think, Agile isn’t cowboy coding: it is quite disciplined, but it is optimized for delivering the right thing (from a business standpoint) in the minimal time. It’s all based on four principles: deliver the right product, provide hard value, simplify the platform, and allow efficient evolution. An optimal strategy depends on all four of those elements, but Agile projects may deliver on only two or three of them, proving the value of Agile before a full Agile strategy is in place.

In order to apply these principles across your entire application development portfolio, you need a strategy that addresses these elements, and provides some way to measure the impact of such a strategy. Delivering the right product requires a focus on people and talent, and the industrial concepts of mass customization rather than mass production; providing hard value requires linking your development process to value streams with their focus on investment return; simplifying the platform requires a focus on tools and technology; and allowing efficient evolution requires optimizing work processes both within development teams and across the organization. I especially liked their chart comparing today’s practices in tools and technologies against Lean practices:

Today’s practices → Lean practices:

  • Install for today and tomorrow → Install for today, architect for tomorrow
  • Configure a general UI for many users → Design for people in their work roles
  • Adopt integrated suites → Adopt narrow-purpose modules and services
  • No component substitution is allowed → Component substitution is allowed
  • Architectural evolution is slow by design → Architectural evolution is constant by design

There are ways to bring Agile into an organization, even when budgets are flat and there is the perception that legacy systems just can’t be replaced without yet another huge project expense. Likely, your developers are already practicing some Agile methods, and you could easily gain permission to prove these out in non-critical systems development.

Good session, with a high-speed tag team between Rymer and West. Unfortunately, the logistics weren’t quite as good as for the general sessions: too-small meeting rooms requiring elevator access from the main conference area, no tables, and no wifi coverage (at least in the room that I was in at this time).

Software AG partner frameworks

I had lunch today at Innovation World with a Software AG partner that will be releasing one of the industry vertical frameworks (I didn’t ask if I could use their name, so am withholding it for now). They see the frameworks as a necessity to even demonstrate BPM to a vertical business, as well as providing a base on which to create a custom solution. A few bits about the partner frameworks:

  • The partner that I spoke with does not plan to productize the framework; rather, it will be used as the starting point for a custom solution for a client. This is how I see most companies implementing frameworks on top of any BPMS, with a mixed degree of success: although it’s difficult to turn some or all of a framework into a product rather than a service, especially for the services companies who normally create them, a productized framework can have advantages when it comes to maintenance and support. Customers need to be aware that a non-productized framework is really just another piece of custom code, and the long-term maintenance costs will reflect that.
  • This partner plans to retain the intellectual property of the framework and any custom code that they build on it, allowing them to roll the code developed for any customer back into the framework for resale. This is great for the industry in general, and future customers in particular, but customers would need to ensure that they specify any processes to which they do not want to give up IP rights.
  • Software AG does not provide guidelines or rules for what should or should not be in a framework, or how to create one. In their online partner forum, however, they describe the existing frameworks so that partners can get an idea of what should be in one.
  • Software AG is not certifying the partner frameworks, so customers need to do their own due diligence on the strength of the solution. Some sort of certification program would likely improve customer confidence in the third-party frameworks.

Vertical industry frameworks are definitely the new black in BPM these days: in addition to Software AG’s program of mixed internal and third-party frameworks, Savvion announced a fairly ambitious framework program with one tier of components built by Savvion and one by their partners, and TIBCO provides some vertical frameworks as marketing tools for those industries.

I’m all for providing a leg up for customers to start working with a BPMS in their industry, but we need to be clear about whether something is a true framework or a set of unsupported templates, understand the value that a framework can bring, and know the pitfalls of a framework approach. I’ve seen some pretty big BPMS implementations that went totally off track because of the use of a non-productized framework: the framework became brittle legacy custom code before it was even in production, was seriously impacted by a minor upgrade to the underlying BPMS platform, and did not allow access to recent modeling and optimization functions provided in the BPMS since it was designed and built for a previous version.

In general, I think that most “frameworks” that overlay a BPMS are actually templates, providing marketing and sales support for the underlying product in that vertical, but not providing a lot of value in terms of a code base. Those that do have a significant code base are usually not productized, hence need to be evaluated as a big chunk of custom code: although the initial purchase price is likely lower than having all that code written for you, you have to consider the ongoing maintenance costs.

Business Rules Forum: Pedram Abrari on MDA, SOA and rules

Pedram Abrari, founder and CTO of Corticon, did a breakout session on model-driven architecture, SOA, and the role that rules play in all of this. I’m also in the only room in the conference center that’s close enough to the lobby to pick up the hotel wifi, and I found an electrical outlet, so I’m in blogger heaven.

It’s a day for analogies, and Abrari uses the analogy of a car for a business application: the driver represents the business, and the mechanic represents IT. A driver needs to have control over where he’s going and how he gets there, but doesn’t need to understand the details of how the car works. The mechanic, on the other hand, doesn’t need to understand where the driver is going, but keeps the car and its controls in good working order. Think of the shift from procedural to declarative development concepts, where we’ve moved from stating how to do something, to what needs to be done. A simple example: the difference between writing code to sum a series of numbers, and just selecting a range of cells in Excel and applying the SUM function.
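To put that contrast in code rather than Excel (my example, not Abrari’s), here is the same sum computed both ways in Java:

```java
import java.util.List;

public class SumExample {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(3, 1, 4, 1, 5, 9);

        // Procedural: spell out *how* to compute the sum, step by step.
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        System.out.println(total); // 23

        // Declarative: state *what* you want; the library decides how.
        int declarativeTotal = numbers.stream().mapToInt(Integer::intValue).sum();
        System.out.println(declarativeTotal); // 23
    }
}
```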

The utopia of model-driven architecture (MDA) is that business applications are modeled, not programmed; they’re abstract yet comprehensive, directly executable (or at least deployable to an execution environment without programming), with monitoring and analytics tied directly to the model, and optimization done directly on the model. The lack of programming required to create an executable model is critical for keeping development in the model, and not having it get sucked down into the morass of coding that often happens in environments that are round-trippable in theory, but end up with too much IT tweaking in the execution environment to ever return to the modeling environment.

He then moved on to define SOA: the concept of reusable software components that can be loosely coupled, and use a standard interface to allow for platform neutrality and design by contract. Compound/complex services can be built by assembling lower-level services in an orchestration, usually with BPM.

The key message here is that MDA and SOA fit together perfectly, as most of us are aware: those services that you create as part of your SOA initiative can be assembled directly by your modeling environment, since there is a standard interface for doing so, and services provide functionality without having to know how (or even where) that function is executed. When your MDA environment is a BPMS, this is a crystal-clear connection: every BPMS provides easy ways to interrogate and integrate web services directly into a process as a process step.

From all of this, it’s a simple step to see that a BRMS can provide rules/decisioning services directly to a process; essentially the same message that I discussed yesterday in my presentation, where decision services are no different from any other type of web service that you would call from a BPMS. Abrari stated, however, that the focus should not be on the rules themselves, but on the decision service that’s provided, where a decision is made up of a complete and consistent set of rules that addresses a specific business decision, within a reasonable timeframe, and with a full audit log of the rules fired to reach a specific decision in order to show the decision justification. The underlying rule set must be declarative to make it accessible to business people.
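A decision service along those lines might look something like the following toy Java sketch (my own construction, not Corticon’s API): the service evaluates a complete rule set against the input, and returns both the outcome and the audit trail of fired rules for justification.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy decision service: a decision is a complete set of rules evaluated
// against the input, returning the outcome plus an audit trail of the
// rules that fired. Illustrative only; not Corticon's API.
public class CreditDecisionService {

    record Applicant(int creditScore, double debtRatio) {}
    record Rule(String name, Predicate<Applicant> fires) {}
    record Decision(boolean approved, List<String> firedRules) {}

    // Each rule that fires is a reason to decline; an empty audit
    // trail means the applicant is approved.
    private final List<Rule> rules = List.of(
        new Rule("rejectLowScore", a -> a.creditScore() < 600),
        new Rule("rejectHighDebt", a -> a.debtRatio() > 0.45));

    public Decision decide(Applicant applicant) {
        List<String> fired = new ArrayList<>();
        for (Rule rule : rules) {
            if (rule.fires().test(applicant)) {
                fired.add(rule.name()); // audit log: decision justification
            }
        }
        return new Decision(fired.isEmpty(), fired);
    }

    public static void main(String[] args) {
        Decision d = new CreditDecisionService().decide(new Applicant(580, 0.30));
        System.out.println(d); // Decision[approved=false, firedRules=[rejectLowScore]]
    }
}
```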

He ended with a discussion of the necessity to extract rules out of your legacy systems and put them into a central rules repository, and a summary of the model-driven service-oriented world:

  • Applications are modeled rather than coded
  • Legacy applications are also available as web services
  • Business systems are agile and transparent
  • Enterprise knowledge assets (data, decisions, processes) are stored in a central repository
  • Management has full visibility into the past, present and future of the business
  • Enterprises are no longer held hostage by the inability of their systems to keep up with the business

Although the bits on MDA and SOA might have been new to some of the attendees, some of the rules content may have been a bit too basic for this audience, and/or already covered in the general keynotes. However, Abrari is trying to make that strong connection between MDA and rules for model-driven rules development, which is the approach that Corticon takes with their product.

PegaWorld: Paul Kompare on JPMorgan Chase’s agile methodology

Paul Kompare, SVP of commercial banking technology at JPMorgan Chase, discussed their Pega implementation of a straight-through processing infrastructure for commercial loans. He gave us a brief view of the current environment and the proposed solution, then moved on to discuss their agile implementation approach. Although they refer to this as Agile, it’s still a bit waterfall-like: the sprints don’t result in released code, but in checkpoint demos, and these are the points when the business representatives interact with the development team rather than being a co-located team (which likely would not have been possible since they rely heavily on offshore development resources). However, it’s a big improvement over their old waterfall methodology.

They delivered the project in two phases, each with three iterations:

  1. Happy paths and primary flows
  2. Exception paths and secondary flows
  3. UI, integration and reports

In each iteration, they establish and sign off on the criteria, then use Pega to directly capture objectives and model the processes. With this relatively agile process, they improved project sponsorship and gained early buy-in from the business, since they were able to demonstrate something earlier and more frequently. Using PRPC also gives the business managers more visibility into the business processes and their underlying logic, rather than having those processes locked up inside opaque application code. They found that the tools gave them more agility and flexibility during implementation, greater reusability and faster time to market, as well as allowing potential changes to be identified earlier.

They did have some challenges with adapting to an agile approach: this type of approach assumes that changes to the design and functionality of the system will occur during development, and relies on rolling out a phased series of small, self-contained components. From a funding standpoint, it’s almost impossible to issue fixed-price contracts for agile development, since there is no fixed statement of work on which to base the proposed price. I’ve seen cases where a third-party services firm doesn’t really get agile methodology, and there is a huge amount of overhead as they attempt to shoehorn their waterfall deliverables into each iteration of the agile development, or they just abandon the agile approach and go back to waterfall.

There are also major changes to roles and responsibilities: the business participants have much greater responsibility during design and as the system rolls out, and having them trained in the design tools is critical.

He concluded that adopting agile development methodologies has been a challenge for them, but that it’s definitely helping them to achieve shorter development cycles. There’s very little here that’s specific to commercial loans, Pega implementations, or even BPM; these same factors would be seen in any organization shifting to an agile approach. However, Kompare made the point that they were driven to consider an agile approach because the Pega tools tend to work better in that environment than with a traditional development methodology.

PegaWorld: Meryl Stewart and Kelly Karlen on Business-IT Collaboration at BlueCross BlueShield

Last session of the day, and Meryl Stewart and Kelly Karlen of BlueCross BlueShield of Minnesota talked about maximizing BPM value through business and IT collaboration. They established a shared business-IT objective of enabling the business to manage their frequently-changing business rules to provide agility, while still maintaining environmental stability by following the necessary change management procedures.

They’ve wrapped some procedures around their projects to explicitly call this out, as well as explicit governance layers for processes and rules. Some of this — a big part — is about well-defined roles and responsibilities, such as a business rules steward. They categorize these procedures and methods by collection, execution and optimization stages, and walked us through each of the roles in each of the stages.

In the collection stage, they have a pretty structured way to create business rules and store them in an enterprise repository; this is independent of their BPM technology, since not all processes end up being automated.

They wanted to make execution more efficient, so they combined their RUP methodology with Pega’s RUP-like methodology and lightened it up to create a more agile “RUP Lite” (although as they walked through it, it didn’t feel all that light, but it does have fairly frequent code releases). Within that methodology, they have a number of additional roles to work on the business-to-technology transformation of the execution phase, and definite rules about who can do what types of changes and who does the associated testing. There’s a level of semi-technical analyst who can do a lot of the non-coding changes.

The optimization stage is where business agility happens, but this was addressed pretty quickly and seemed to be some sort of standard change management procedure.

This definitely shows some good software development practices, but there’s nothing particularly innovative here that can’t be replicated elsewhere as long as you can get the collaboration part to work; collaboration is primarily a function of finding people on both sides of the business-IT divide who can see over the wall to the other side, and maybe even straddle the divide somewhat with their skills.

They’ve applied the methodology to a couple of projects and have seen positive ROI, and very few coding changes since most of the process tuning can be done by business users or the semi-technical analysts. In one process, they’ve had 11 rule changes in 4 months with resultant savings of $820k in the improved processes; if IT had been involved in these changes, only $126k of the savings could have been realized in the same timeframe due to IT project schedules — a good measure of the value of agility provided by allowing the business to change business rules. Fundamentally, they changed an 8-week IT build cycle to 10 days or less by allowing the business to change the rules, while still following a test and deploy cycle that keeps IT happy.

That’s it for today; there’s a reception, then dinner and a cruise on the Potomac to view the monuments by night. The esteemed Dr. Michael zur Muehlen will not be joining us in spite of being right across the river in Arlington; when I invited him, he gave some lame excuse about just getting back from Seoul. 😉

TUCON: Using BPM to Prioritize Service Creation

Immediately after the Spotfire-BPM session, I was up to talk about using BPM to drive top-down service discovery and definition. I would have posted my slides right away, but one of the audience members pointed out that the arrows in the two diagrams should be bidirectional (I begged forgiveness on the grounds that I’m an engineer, not a graphic artist), so I fixed that up before posting to Slideshare:

My notes that I jotted down before the presentation included the following:

  • SOA should be business focused (even owned by the business): a top-down approach to service definition provides better alignment of services with business needs.
  • The key is to create business-granular services corresponding to business functions: a business abstraction of SOA. This requires business-IT collaboration.
  • Build thin applications/processes and fat services to enable agile business processes. Fat services may have multiple operations for different requirements, e.g., retrieving/updating just the customer name versus the full customer record in an underlying system (see the sketch after this list).
  • Shared business semantics are key to identifying reusable business services: ensure that business analysts creating the process models are using the same terminology.
  • Seek services that have the greatest business value.
  • Use cases can be used to identify candidates for services, as can boundary crossings in activity diagrams.
  • Process decomposition can help identify reusable services, but it’s not possible to decompose and reengineer every process: look for ineffective processes with high strategic value as targets for decomposition.
  • Build the SOA roadmap based on business value.
  • SOA isn’t (just) about creating services, it’s about building business processes and applications from services.
  • Services should be loosely-coupled and location-independent.
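To illustrate the thin applications/fat services point from the list above, here’s a sketch of what a fat customer service might expose; all of these names are hypothetical, not drawn from any particular product:

```java
// A "fat" business service exposes operations at several granularities,
// so that thin processes can call exactly what they need and no more.
// All names here are hypothetical, for illustration only.
public interface CustomerService {

    record Address(String street, String city, String country) {}
    record CustomerRecord(String id, String name, Address address, double creditLimit) {}

    // Narrow operation: a screen that only displays the name shouldn't
    // pay for, or depend on, the full record.
    String getCustomerName(String customerId);

    // Wide operation: the full record, for steps that need everything.
    CustomerRecord getCustomerRecord(String customerId);

    // Matching narrow and wide update operations.
    void updateCustomerName(String customerId, String newName);
    void updateCustomerRecord(CustomerRecord record);
}
```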

There were some interesting questions arising from this, one being when to put service orchestration in the services layer (i.e., have one service call another) and when to put it in the process layer (i.e., have a process call the services). I see two facets to this: is this a business-level service, and do you want transparency into the service orchestration from the process level? If it’s not a business-level service, then you don’t want business analysts having to learn enough about it to use it in a process. You can still do orchestration of technical services into a business service using BPM, but do that as a subprocess, then expose the subprocess to the business analyst; or push that down to the service level. If you’re orchestrating business-level services into coarser business-level services, then the decision whether to do this at the service or process level is about transparency: do you want the service orchestration to be visible at the process level for monitoring and process tracing?
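In code terms, the two placements look roughly like this (hypothetical names throughout): orchestrating in the services layer hides the composition behind one coarse-grained business call, while orchestrating in the process layer keeps each call visible as its own process step.

```java
// Fine-grained technical services, not meaningful to a business analyst.
interface CreditBureauClient { int fetchScore(String customerId); }
interface FraudCheckClient  { boolean isSuspicious(String customerId); }

// Option 1: orchestrate in the services layer. The process sees a single
// business-level call; the composition is opaque at the process level.
class RiskAssessmentService {
    private final CreditBureauClient bureau;
    private final FraudCheckClient fraud;

    RiskAssessmentService(CreditBureauClient bureau, FraudCheckClient fraud) {
        this.bureau = bureau;
        this.fraud = fraud;
    }

    boolean assess(String customerId) {
        return bureau.fetchScore(customerId) >= 650
            && !fraud.isSuspicious(customerId);
    }
}

// Option 2: orchestrate in the process layer. Each call is its own process
// step (sketched here as plain method calls), so the orchestration stays
// visible for monitoring and process tracing.
class LoanProcess {
    void run(String customerId, CreditBureauClient bureau, FraudCheckClient fraud) {
        int score = bureau.fetchScore(customerId);            // process step 1
        boolean suspicious = fraud.isSuspicious(customerId);  // process step 2
        // a decision gateway would branch on score and suspicious here
    }
}
```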

This was the first time that I’ve given this presentation, but it was so easy because it came directly out of my experiences. Regardless, it’s good to have that behind me so that I can focus on the afternoon sessions.

BPEL for Java Developers Webinar

Active Endpoints is hosting a webinar this Thursday on BPEL Basics for Java Developers, featuring Ron Romano, their principal consulting architect. From their information:

A high-level overview of BPEL and its importance in a web-services environment is presented, along with a brief discussion of the basic BPEL activities and how they relate to Java concepts. The following topics will be covered:

  • Parsing the Language of SOA with Java as a guide
  • Breaking out of the VM: evolving from RPC to Web Services
  • BPEL Activities – Receive, Reply, Invoke
  • BPEL Facilities – Fault Handling and Compensation (“Undo”)
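As a rough preview of that mapping (the analogy below is mine, not necessarily how the webinar will present it), the core BPEL activities line up with familiar Java concepts something like this:

```java
// A loose Java analogy for the three core BPEL activities:
//   <receive>: the process waits for a request, like a method being
//              invoked on your service.
//   <invoke>:  the process calls out to another web service, like
//              calling a method on a remote stub.
//   <reply>:   the process sends the response back to the original
//              caller, like returning a value from the entry method.
public class QuoteProcess {

    // Stand-in for a partner link to an external rating web service.
    interface RatingService { double rate(String product); }

    private final RatingService ratingService;

    public QuoteProcess(RatingService ratingService) {
        this.ratingService = ratingService;
    }

    // <receive> is the implicit start: this method being called.
    public double handleQuoteRequest(String product) {
        double basePrice = ratingService.rate(product); // <invoke> a partner
        return basePrice * 1.15;                        // <reply> with result
    }
}
```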

The VP of Marketing assures me that he was allowed only two slides at the end of the presentation, and that otherwise this is focused on the technical goodies.

You need to register in advance at the link above.