Software AG Technology Innovation Fair

For once, I don’t need to travel to see a (mini) vendor conference: Software AG has taken it on the road and is here in Toronto this morning. I wanted to get an update on what’s happening with webMethods since I attended their user conference in Miami last November, and this seemed like a good way to do it. Plus, they served breakfast.

Susan Ganeshan, SVP of Product Management, started the general keynote with a mention of Adabas and Natural, the mainframe (and other platforms) database and programming language that drive so many existing business applications and were likely the primary concern of many of the people in the room. However, webMethods and CentraSite are critical parts of their future strategy and formed the core of the rest of the keynote; both of these have version 8 in first-customer-ship state, with general availability before the end of the year.

First, however, she talked about Software AG’s acquisition of IDS Scheer, and how ARIS fits into their overall plan, following on today’s press release about how Software AG has now acquired 90% of IDS Scheer’s stock, which should lead to a delisting and effective takeover. She discussed their concept of enterprise BPM, which is really just the usual continuous improvement cycle of strategize/discover and analyze/model/implement/execute/monitor and control that we see from other BPMS vendors, but pointed out that whereas Software AG has traditionally focused on the implement and execute parts of the cycle, IDS Scheer handles the other parts in a complementary fashion. The trick, of course, will be to integrate those seamlessly, and hopefully create a shared model environment (my hope, not her words). They are also bringing a process intelligence suite to market, but no details on that at this time.

Interesting message about the changing IT landscape: I’m not sure of the audience mix between Adabas/Natural and webMethods, but based on her “intro to BPM” slides I have to guess that it is heavily weighted towards the former, and that the webMethods types are more focused on web services than BPM. She also invoked the current mantra of every vendor presenter these days about how the new workforce has radically different expectations of what their computing environment should look like (“why can’t I google for internal documents?”); I completely agree with this message, although I’m sure that most companies don’t yet have it as a high priority, since much of the new workforce is just happy to have a job in this economy.

She discussed the value of CentraSite – or at least of SOA governance – as being a way to not just discover services and other assets, but to understand dependencies and impacts, and to manage provisioning and lifecycle of assets.

A few of the BPM improvements:

  • Echoing a common message from BPMS vendors this week, she talked about their composite application environment: a portal-like dynamic workspace that can be created by a user or analyst by dragging portlets around, then saved and shared for reuse. This lessens the need for IT resources for UI development, and also allows users to rearrange their workspace in the way that works best for them.
  • They’ve also added ad hoc collaboration, which allows a process participant to route work to people who are not part of the original process; it’s not clear if they can add steps or subprocesses to the structured process, or whether this is a matter of just routing the task at its current step to a previously unidentified participant.
  • They integrate with Adobe Forms and Microsoft InfoPath, using them for forms-driven processes that use the form data directly.
  • They’ve integrated Cognos for reporting and analytics; it sounds like there are some out of the box capabilities that run without additional licensing, but if you want to make changes, you’ll need a Cognos license.

Since the original focus of webMethods was in B2B and lower-level messaging, she also discussed the ESB product, particularly how they can provide high-speed, highly-available messaging services across widespread geographies. They can provide a single operational console across a diverse trading network of messaging servers. There’s a whole host of other improvements to their trading networks, EDI module and managed file transfer functionality; one interesting enhancement is the addition of a BPEL engine to allow these flows to be modeled (and presumably executed) as BPEL.

They have an increased focus on standards, and new in version 8 are updates to XPDL and BPEL support, although they’re still only showing BPMN 1.1 support. They also have some new tooling in their Eclipse-based development suite.

She laid out their future vision as follows:

  • Today: IT-driven business, with IT designing business processes and business dictating requirements
  • 2009 (um…isn’t that today?): collaborative process discovery and design; unified tooling
  • 2010: business rules management and event processing; schema conformance
  • 2012: personalized, smart-healing processes; centralized command and control for deployment and provisioning
  • 2014: business user self-service and broad collaboration without organizational boundaries; elastic and dynamic infrastructure

She finished up with a brief look at AlignSpace for collaborative process discovery; I’m sure that someday, they will approve my request for a beta account so that I can take a closer look at this. 🙂 Beyond process discovery and modeling, AlignSpace will also provide a marketplace of resources (primarily services) related to processes in particular vertical industries.

They have a complete fail on both wifi and power here, but I no longer care: my HP Mini has almost six hours of battery life, and my iPhone plan allows me to tether the netbook and iPhone to provide internet access (at least in Canada).

AIIM webinar on content and process

I’m the headline act in an upcoming webinar, Content: Meet the Business Process, hosted by AIIM and sponsored by SAP, on November 11th at 2pm ET. Although I spend a lot of my time focused on BPM, I have a pretty strong background in content management as well: almost every client that I work with has to deal with content and process together.

I’ll cover some of the key benefits of bringing together content and process, walk through a couple of case studies, and end up with some suggestions on getting started with content-centric cross-departmental processes.

Process Design Slam 2009 – The Final Judgement #SAPTechEd09 #BPXslam09

To wrap up the proceedings from last night, I was asked to critique the efforts of the groups and pick a winner: as it turned out, I was the only judge. Each of the groups did great work, and I want to call out some of the specific efforts:

  • The Business Use Case group had a great written story, including a lot of cultural and social background for our fictional city in order to provide context for the implementation.
  • The BPM Methodologies group had excellent documentation on the wiki, including graphics and charts to make it clear how the methodologies fit with the other groups.
  • The Business Rules group were stars at collaboration with the other groups, in part because everyone quickly realized the importance of business rules to data, UI and process, and solicited their input.
  • The UI and Dashboards group created mockups of monitoring dashboards that provide a starting point for future design slam work.
  • The Collaborative Modeling group led at international collaboration, using Gravity (process modeling within Google Wave) interactively with team members in Europe during the session, and produced a business process model.
  • The Service Implementation group also kicked off implementation, creating a service orchestration process model as a starting point.

In general, everyone seemed to have a good understanding of the importance of data, rules and process, but there could have been better cross-pollination between the groups. In future design slams, that could be helped by requiring some group members to move partway through the evening in order to ensure that there is a better understanding on both sides, something that is fairly common in real-life businesses where people are seconded from one department to another for part of a project. Although a certain amount of collaboration did occur, that is one area that requires more work. I saw one tweet that referred to the design slam as crowdsourced rather than collaborative, although I’m not sure that I would say that: crowdsourcing usually has more of a flavor of individuals contributing in order to achieve their own goals, whereas this was a collaboration with common goals. However, those goals were a bit fragmented by group.

Another issue that I had was the lack of an architectural view of process design: although all of the groups are contributing to a common process (or set of processes), there is little thought around the transformations required to move from the process list developed by the Business Use Case group to the process model developed by the Collaborative Modeling group to the process design developed by the Service Implementation group. In enterprise architecture terms, this is a case of transforming models from one layer to another within the process column of the architecture (column 2 if you’re a Zachman fan); understanding these transformations is key so that you don’t reinvent the process at each layer. One of the goals of model-driven design is that you don’t do a business-level process model, then redraw it in another tool; instead, the business-level process model can be augmented with service-level information to become an executable process without recreating the model in another tool. In reality, that often doesn’t happen: the business analyst draws a process in one tool (such as Visio, or in the case of the design slam, Gravity), then IT redraws it in a tool that will create an executable process (NetWeaver in this case). I have a couple of suggestions here:

  • Combine the Business Use Case and Collaborative Modeling groups into a single group, since they are both doing high-level business analysis. This would allow the process list to be directly modeled in the same group without hand-off of information.
  • Reconsider the use of tools. Although I have a great deal of appreciation for Gravity (I am, after all, a geek), the fact that it does not share a model with the execution environment is problematic since the two groups creating process models were really off doing their own thing using different tools. Consider using NetWeaver 7.2, which has a business analyst perspective in the process composer, and having the business use case/collaborative modeling group create their initial non-technical models in that environment, then allow the service implementation team to add the technical underpinnings. The cool Wave collaboration won’t be there, or maybe only as an initial sketching tool, but the link will be made between the business process models and the executable models.

When it came down to a decision, my choice of the winner was more a product of the early state of the design slam than of the efforts or skills of the groups: I suspect that my view would change if I were judging in Vienna or Bangalore when the process is further along. I selected the Business Use Case group as the winner at this point based on the four judging criteria: although they failed to include alternative media, their story was clear and well-written, it fit well with the other groups’ efforts, and they used good social and collaborative methods within their group for driving out the initial solutions.

The winning team was made up of Greg Chase, Ulrich Scholl and Claus von Riegen, all of SAP, with input from a few others as subject-matter experts on public utilities and electricity production; they started the discussions on pricing plans that ended up driving much of the Business Rules group’s work. Ulrich also has solar cells on his house that connect to the grid, so he has in-depth knowledge of the issues involved with micro-generation, and was very helpful at determining the roles involved and how people could take on multiple roles. They leveraged a lot of the content that was already on the wiki, especially references to communities with experience in micro-generation and virtual power plants. Beyond this initial leg up, they were forced to work fast to produce the initial use cases and processes, since those provided necessary input for the other groups to get started with their work, which left them with more of the evening to write a great story around the use case (but, apparently, not enough time to add any graphics or multimedia).

There was a huge amount of effort put into the design slam, both in the preceding weeks through conference calls and content added to the wiki, and at the session last night in Phoenix. I believe that a huge amount of groundwork has been laid for the upcoming design slams in Vienna and Bangalore, including process models, service orchestration diagrams, business rules decision tables, and monitoring dashboard mockups.

I had a great time last night, and would happily participate in a future process design slam.

Process Design Slam 2009 #SAPTechEd09 #BPXslam09

8pm

We’re just getting started with the Process Design Slam: one of the face-to-face sessions that make up the collaborative design process that started a couple of months ago on the Design Slam wiki. Marilyn Pratt has identified the six groups that will each work on their part of the design, collaborating between groups (a.k.a. poaching talent) as required, and even bringing in people from the Hacker Night and Business Objects events going on in the same area.

  • Business Use Case, led by Greg Chase
  • Collaborative Modeling, led by David Herrema
  • Business Rules, led by James Taylor
  • Service Implementation, led by John Harrikey
  • BPM Methodologies, led by Ann Rosenberg
  • UI and Dashboards, led by Michelle Crapo

Right now, everyone has formed into initial groups based on their interests, and is having some initial discussions before the food and beer arrives at 8:30. Since there was an initial story and process model developed by the online community, everyone is starting at something close to a common point. Participants within a group (and even the leaders) could change throughout the evening.

By the end of the night, each team will have created a story about their work, and give a 5-minute presentation on it. The story must include additional media such as video and images, and in addition to the presentation, it must be documented on the wiki. Each story must also be related to the output of the other teams – requiring some amount of collaboration throughout the evening – and include pointers on what worked and didn’t work about their process, and what they would do differently in the future.

At that point, the judging panel, which includes me plus Marc Rosson, Uli Scholl, Ann Rosenberg and Dick Hirsch, will render our judgment on the creations of the groups based on the following criteria:

  • Clarity and completeness of the story on the wiki, particularly if it could be understood without the presentation.
  • Creative use of media.
  • How well this story ties into the overall storyline of the night.
  • The social process that was used to create the story.

I’m floating around between groups to listen in on what they’re doing and some of their initial thoughts.

8:30pm

Beer o’clock. The Business Rules team is still deep in conversation, however, and Business Use Case comes over to them to ask for help in bringing the business rules and business use case together. Business Use Case outlines the actors and the high-level business processes that they have identified in addition to the initial business process of bringing new consumer-producers online.

9pm

BPM Methodologies has a much wider view than just this project: developing methodologies that can be used across (SAP) BPM projects, including assessing the business process maturity of an organization in order to determine where they need to start, and identifying the design roles. In the context of the design slam, they will be helping to coordinate movement of people between the teams in order to achieve the overall goals.

9:30pm

Service Implementation – viewed by groups such as Business Use Case as “the implementers” – have revised the original process map from a service standpoint; looking at the services that were required led to a process redesign. They are using the Composite Designer to model the service orchestration, including the interfaces to the services that they need and external services such as FirstLook, a wind assessment service based on location data. In their service orchestration process, they assume that the process is initiated with the data gathered from a user interface form, and they focus primarily on the automated process steps. Ginger Gatling doesn’t let me leave the table until I tell them what they have to do to win; I advise them to update the wiki with their story.

9:50pm

The Collaborative Modeling group is modeling the business process using Gravity, online with a couple of participants in Europe. This is a process model from a business standpoint, not an executable model; there is no concept of the linkage between this and what is being done by the Service Implementation team. I suggest that they should head over there to compare processes, since these should (at some level) just be different perspectives on the same process.

10pm

Business Use Case is identifying the necessary processes based on their earlier collaboration with Business Rules: this has given them a good understanding of business case, goals and incentives. They’re considering both human and automated usages, and have fed their results to the UI, Business Rules and Collaborative Modeling teams.

10:10pm

Business Rules states that they’ve had to gather information from numerous sources, and the challenge is to sequence it properly: data is captured by the UI, but is driven by the Business Use Case. They didn’t work with the Collaborative Modeling group directly, but there are links between what they do and what’s happening in the process. They’re also interested in using historical usage data to determine when to switch consumers between usage plans.
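To make that concrete, here’s a minimal, hypothetical sketch of the kind of decision table the Business Rules group was describing: select a pricing plan from the data captured at sign-up, then re-evaluate against historical usage to decide when to switch plans. The plan names, thresholds and field names are my inventions for illustration; the group’s actual rules were being built in NetWeaver BRM.

```python
# Hypothetical sketch of a pricing-plan decision table and a plan-switch rule.
# Plan names, thresholds and fields are invented; not the group's actual rules.
from dataclasses import dataclass

@dataclass
class Consumer:
    monthly_usage_kwh: float       # captured by the UI at sign-up
    generation_capacity_kw: float  # micro-generation capacity, if any

# Decision table rows: (max usage in kWh, min generation in kW, plan name).
# The first matching row wins, reading top to bottom.
PLAN_TABLE = [
    (500,          1.0, "producer-credit"),    # low usage, meaningful generation
    (500,          0.0, "basic-consumer"),
    (2000,         1.0, "offset-producer"),
    (2000,         0.0, "standard-consumer"),
    (float("inf"), 0.0, "high-volume"),
]

def select_plan(c: Consumer) -> str:
    """Return the plan from the first row that matches the consumer's data."""
    for max_usage, min_generation, plan in PLAN_TABLE:
        if c.monthly_usage_kwh <= max_usage and c.generation_capacity_kw >= min_generation:
            return plan
    return "standard-consumer"

def should_switch(c: Consumer, history_kwh: list[float], current_plan: str) -> bool:
    """Re-run the table against average historical usage, as the group suggested."""
    avg = sum(history_kwh) / len(history_kwh)
    return select_plan(Consumer(avg, c.generation_capacity_kw)) != current_plan
```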

10:20pm

UI and Dashboards managed to recruit a developer who is actually coding some of their interfaces; they were visited by many of the other groups to discuss the UI aspects, since the data gathered by the UI drives the rest of the process and rules, and the data generated by the process drives the dashboard interfaces. They feel that they had the best job since they could just be consumers and visualize the solutions that they would like to have.

10:35pm

Presentations start. Marilyn Pratt is being the MC, and Greg Chase is wrangling the wiki to show what has been documented by each of the groups. Half of the Service Implementation team just bailed out. I have to start paying attention now. Checking out the wiki pages and summarizing the presentations:

  • Business Use Case worked with the UI, Collaborative Modeling and Business Rules teams, since those teams required the business use cases in order to start their work. They developed a good written story including cultural/social background about the fictional city where the power generation plan would go into effect. They defined the roles that would be involved (where one person could take on more than one role, such as a consumer that is also a producer), and the processes that are required in order to handle all of the use cases. They did not use any presentation/documentation media besides plain text.
  • BPM Methodologies had excellent documentation with the use of graphics and tables to illustrate their points, but this was quite a general methodology rather than one specific to this evening’s activities. They worked briefly with the other groups and created a chart of the activities that each of those groups would do relative to the different phases in the methodology. I found the methodology a bit too waterfall-like, and not necessarily a good fit with the more agile collaborative methods needed in today’s BPM.
  • Business Rules focused on the rules related to signing up a new user with the correct pricing plan, documenting the data that must be collected and an initial decision table used to select a plan, although with no graphics or other non-text media. They worked with the Business Use Case team and the UI team to drive out the underlying business use cases and data collection.
  • UI and Dashboards created the initial mockups that can be used as a starting point for the design slam in Vienna in a couple of weeks. They worked with Business Rules and Business Use Case in order to nail down the required user data inputs and what is required for monitoring purposes, and included some great graphics of the monitoring dashboards (although not the data collection form).
  • Collaborative Modeling used Gravity (process modeling in Google Wave) not just for modeling with the group around the table, but also with participants in Germany and the Netherlands. They included photos of the team as well as screen snaps of the Gravity Wave that they created, although the text of the story documented on the wiki isn’t really understandable on its own. I’m not sure that they spent enough time with other groups, especially the Service Implementation group.
  • Service Implementation talked to the Business Rules and UI teams to discuss rules and data, but felt that they were running blind since there wasn’t enough of the up-front work done for them to do any substantial work. They used placeholders for a lot of the things that they didn’t know yet, and modeled the service orchestration. The documentation in the wiki is very rudimentary, although it includes the process map that they developed; it’s not clear, however, how the process model developed in Collaborative Modeling relates to their map.

11:30pm

And now, on to the judging – I’ll write up the critique and results in a later post.

NetWeaver BPM update #SAPTechEd09

Wolfgang Hilpert and Thomas Volmering gave us an update on NetWeaver BPM, since I was last updated at SAPPHIRE when they were releasing the product to full general availability. They’re readying the next wave of BPM – NetWeaver 7.2 – with beta customers now, for ramp-up near the beginning of the year and GA in spring of 2010.

There are a number of enhancements in this version, aimed at increasing productivity and incorporating feedback from customers:

  • Creating user interfaces: instead of relying solely on Web Dynpro for manual creation of a UI in code, they can auto-generate a UI for a human-facing task step.
  • New functions in notifications.
  • Handling intermediate events for asynchronous interfaces with other systems and services.
  • More complete coverage of BPMN in terms of looping, boundary events, exception handling and other constructs.
  • Allowing a process participant to invite other people on their team to participate in a task, even if not defined in the process model (ad hoc collaboration at a step).
  • The addition of a reporting activity to the process model, which merges process instance data and process flow data and makes them available for in-process analytics using a tool such as BusinessObjects; the reporting activity takes a snapshot of the process instance data into the reporting database at that point in the process without having to call APIs (see the sketch after this list).
  • Deeper integration with other SAP business services, making it easier to discover and consume those services directly within the NetWeaver Process Composer even if the customer hasn’t upgraded to a version of SAP ERP that has SOA capabilities.
  • Better integration of the rules management (the former Yasu product) to match the NetWeaver UI paradigms, expose more of the functionality in the Composer and allow better use of rules flow for defining rules as well as rules testing.
  • Business analyst perspective in process modeler so that the BA can sketch out a model, then allow a developer to do more of the technical underpinnings; this uses a shared model so that the BA can return to make modifications to the process model at a later time.
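SAP hasn’t shown the internals of the reporting activity, so this is just a generic sketch of the idea as described: at a chosen point in the flow, copy a snapshot of the instance data plus some flow context into a reporting store that an analytics tool can query directly. Table, function and field names here are assumptions, not NetWeaver’s actual implementation.

```python
# Generic sketch of a "reporting activity": at a given step, snapshot the
# process instance data plus flow context into a reporting table. This is
# not SAP's implementation; names and structures are invented.
import json
import sqlite3
from datetime import datetime, timezone

def reporting_activity(conn: sqlite3.Connection, process_id: str,
                       activity_name: str, instance_data: dict) -> None:
    """Write one snapshot row; the process then continues to its next step."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS process_snapshots (
               process_id TEXT, activity TEXT, taken_at TEXT, data TEXT)"""
    )
    conn.execute(
        "INSERT INTO process_snapshots VALUES (?, ?, ?, ?)",
        (process_id, activity_name,
         datetime.now(timezone.utc).isoformat(),
         json.dumps(instance_data)),
    )
    conn.commit()

# Example: a snapshot taken partway through a (hypothetical) claims process.
conn = sqlite3.connect(":memory:")
reporting_activity(conn, "claim-0042", "after-assessment",
                   {"status": "assessed", "amount": 1250.00})
```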

I’d like to see more about the ad hoc runtime collaboration at a task (being able to invite team members to participate in a task) as well as the BA perspective in the process modeler and the auto-generation of user interfaces; I’m sure that there’s a 7.2 demo in my future sometime soon.

They also talked briefly about plans for post-7.2:

  • Gravity and similar concepts for collaborative process modeling.
  • Common process model to allow for modeling of the touchpoints of ERP processes in BPM, in order to leverage their natural advantage of direct access to SAP business applications.
  • Push further into the business through more comprehensive business-focused modeling tools.
  • Goal-driven processes where the entire structure of the process model is not defined at design time, only the goals.

In the future, there will continue to be a focus on productivity with the BPM tools, greater evolution of the common process model, and better use of BI and analytics as the BusinessObjects assets are leveraged in the context of BPM.

SAP research overview: Gravity #SAPTechEd09

We had a blogger roundtable today with Soeren Balko, VP in the SAP NetWeaver BPM architecture and design group, and Marek Kowalkiewicz from the Brisbane section of SAP Research, who gave an overview of the research and special projects going on at SAP. Innovations tend to emerge from the research centers, in conjunction with customers and the universities with whom they collaborate, and then the product development groups become involved in order to determine how to productize the ideas.

The hot thing in their research right now is Gravity: the collaborative process modeling environment that they created within Google Wave. The process modeling is done purely with tools created in Google Web Toolkit; this is not SAP NetWeaver BPM embedded within Google Wave, it’s a BPMN modeler created with GWT. The process models can be exported to the BPMN 2.0 format for import into a BPMS (or another modeling tool). The Wave playback capability is especially nice for seeing how the process model was built, and different colored shadows on the model objects denote which participant created each object.
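As a rough illustration of what that kind of export makes possible, here’s a minimal sketch that lists the tasks and sequence flows in a BPMN-style XML file. It matches elements by local name only, since I haven’t seen Gravity’s actual serialization; the file name is hypothetical.

```python
# Minimal sketch: summarize tasks and sequence flows from a BPMN-style XML
# export. Elements are matched by local name only, since the exact
# serialization Gravity produces is an assumption here.
import xml.etree.ElementTree as ET

def summarize_bpmn(path: str) -> None:
    tree = ET.parse(path)
    for elem in tree.iter():
        local = elem.tag.split("}")[-1]  # strip any XML namespace prefix
        if local in ("task", "userTask", "serviceTask"):
            print(f"Task: {elem.get('name', elem.get('id'))}")
        elif local == "sequenceFlow":
            print(f"Flow: {elem.get('sourceRef')} -> {elem.get('targetRef')}")

# summarize_bpmn("exported_process.bpmn")  # hypothetical export file
```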

There are bots that can be added to processes to check process integrity, export process models, and detect portions of the process flow that could potentially be collapsed into a subprocess. It makes sense that other bots will be created to perform additional automated checks and actions on the process model.
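I don’t know how the Gravity bots are actually implemented, but a basic integrity check might look something like this sketch: given a model’s nodes and flows, flag anything that is unreachable or a dead end. Purely illustrative, not Gravity’s bot logic.

```python
# Sketch of the kind of integrity check such a bot might perform: flag nodes
# with no incoming or no outgoing connections (other than declared start/end
# events). Not Gravity's actual bot logic; purely illustrative.
def check_integrity(nodes: set[str], flows: list[tuple[str, str]],
                    starts: set[str], ends: set[str]) -> list[str]:
    has_in = {target for _, target in flows}
    has_out = {source for source, _ in flows}
    problems = []
    for node in nodes:
        if node not in has_in and node not in starts:
            problems.append(f"{node}: unreachable (no incoming flow)")
        if node not in has_out and node not in ends:
            problems.append(f"{node}: dead end (no outgoing flow)")
    return problems

print(check_integrity(
    nodes={"start", "assess", "approve", "orphan", "end"},
    flows=[("start", "assess"), ("assess", "approve"), ("approve", "end")],
    starts={"start"}, ends={"end"},
))  # flags "orphan" as both unreachable and a dead end
```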

They’re not supporting the full BPMN 2.0 object set, but have a subset that can at least be used for simple models and as a proof of concept around the idea of a modeler within Wave.

James Taylor was at the table too, and we got into a discussion of modeling rules in a similar manner: since this is a BPMN modeler, there’s no opportunity to model rules here, but there may be an opportunity to take the NetWeaver BRM rules modeling paradigm and create a similar sort of prototype that allows for rules modeling within Wave.

We’ll be seeing more of Gravity tonight at the Process Design Slam, and if I ever get my freaking Wave account (2 invitations already on their way, but not arrived yet), then I can actually try it out for myself.

We also had a brief overview of Yowie, a project that we saw at DemoJam last night, which uses SAP text analytics to act as an intelligent agent, either as a bot in Wave or when receiving emails regarding enterprise applications and assets; and BirdsEye, which receives the GPS signal sent from an iPhone (or any geopositioning RSS feed) to do near-real-time positional tracking for applications such as delivery optimization.

Process Design Slam preparation #SAPTechEd09 #BPXslam09

I was sitting in the blogger room this morning at SAP TechEd in Phoenix, and heard Marilyn Pratt mention my name over at another table: usually something that makes me perk up my ears, since Marilyn is a primo community builder, and I had the feeling that I was about to be recruited for something. 🙂 I’m already signed up as a judge/critic for the Process Design Slam event here tonight, which is the culmination (along with the TechEd events in Vienna and Bangalore) of a three-month virtual community collaboration for applying BPM tools and methodologies to solve a specific business challenge.

The selected process, from the design slam wiki:

Automating business processes related to forming a virtual community-based power plant made up of residents’ personal solar and wind generation.

The idea is to describe a process that allows a homeowner or business to come online as a micro-generator within a township, and the various steps (human and automated) that are required. Sustainability gets better over time as more neighborhoods choose to generate power from green sources to supply the very power that those neighborhoods consume, in pretty much the same timeframe. This also reduces the losses from transporting power over longer distances. Thus, power companies will increasingly become brokers rather than actual suppliers of power.

After a chat with Marilyn, we’ve decided that I’ll interview the winners (briefly, since it will be after midnight, which is 3am in my time zone) and write a short blog post about their winning contribution. This will definitely break my standard rule that everything is off the record once the bar opens.

The community has already done a lot of the work, including creating and agreeing upon a process map using NetWeaver BPM 7.1, and rules in NetWeaver BRM 7.1.

Keep an eye on the #BPXslam09 hashtag on Twitter for up-to-date news as the day progresses.

Forge your Lean process improvement game plan #BTF09

After an intro by Mike Gilpin, Clay Richardson gave the first keynote of the second day, focused on Lean process improvement. We were visited by the ghost of BPM past, which was Michael Hammer and business process reengineering: focused on mass production but forgetting the people, it essentially became a euphemism for downsizing. The ghost of BPM present, although it has moved beyond that frightening past, is stuffed full of consultants, books, tools and certification programs, to the point of confusion. The ghost of BPM future, however, envisions an empowered front line and engaged customers.

There’s a greater demand for BPM than ever – 66% of those that Forrester surveyed want to do more with BPM – but almost no one has increased budget to implement it. ROI might still be used to sell BPM projects (necessary in these budgetary times), but the final metrics will be business value-based, since ROI doesn’t necessarily measure the actual business improvement.

Lean is shaping the new world of process improvement: processes are moving from standardized to flexible, and the focus is moving from ROI to value since the old IT-centric metrics just don’t work any more. From an implementation standpoint, Lean is about moving from waterfall to Agile, and shifting from on-premise to cloud computing environments.

In order to develop a process improvement game plan, it’s necessary to understand your approach (methodology, tools) and your strategic intent; he had an interesting Lean process improvement (LPI) measure in which the correlation between those two factors diagnoses whether an enterprise’s process improvement efforts are bloated, lean or anemic. From there, each of those ranges has a specific plan: if bloated, you need to connect your processes to strategy and eliminate waste from the BPM technology portfolio (which could mean eliminating some of the tools that you use); if anemic, improve process governance and your process improvement talent pool.

Any process methodology needs to be customized to your specific environment and requirements, and you need to assess gaps in your skills (particularly process analysts) and work towards empowering the business. Process improvement has to be connected to your value drivers, including the center of excellence.

Interesting discussion afterwards between Richardson and Gilpin, especially about BPM mashups (Richardson is just as hot about social BPM as I am): he says that the key to a successful mashup environment that will be used by business people is to make it look like Microsoft Office 2007. He also mentioned that closely pairing a process analyst with the developers can reduce bloat on the project since it reduces the amount of miscommunication across that critical boundary (this, of course, assumes that the process analyst comes from the business side and is not part of the development team to begin with).

Waterfall contracts and iterative development don’t mix #BTF09

The post title is the best quote from Tom Higgins, CIO of the Territory Insurance Office in Australia, who came all the way from Darwin to speak at both the Gartner and Forrester conferences this week. I had a chance for a chat at the airport with him while waiting for our flight from Orlando to Chicago (and introduced him to the wonder that is the Chicago transit system), and caught his Appian-sponsored lunch presentation today.

TIO is a government-backed insurance operation that covers risks that most insurance companies won’t take on, including workers’ compensation, cyclone damage and other personal and P&C policies. They were looking to reduce their operational costs by making their claims operation more efficient, but also to reduce their claims costs by shortening the length of disability claims, which can often be done through proper case management during the period of a claim. Originally the business was set on a COTS (commercial off-the-shelf) claims management system, but when they compared that approach with BPM, they realized that BPM met their requirements much better than the available COTS systems due to its ease of use and flexibility. They short-listed three vendors and did a three-day proof of concept with each; that managed to knock one vendor completely out of the running due to the complexity of the implementation, in spite of it being a large and well-respected vendor in the space (no, he didn’t say who; yes, he told me over a beer; and no, I won’t tell you).

For a short presentation, he spent quite a bit of time talking about the contract – including the “waterfall contracts and iterative development don’t mix” line – and I have to agree that this is an incredibly critical part of any BPM project, and very often is handled extremely poorly. The contract needs to focus on risk management, and you can’t let your lawyers force you into a fixed-price contract that has pre-defined waterfall-type milestones in it if you don’t know exactly what you want; in my experience, no BPM project has ever started with the business knowing exactly what they want ahead of time, and I don’t imagine that many do, so don’t mistake a contract for a project plan. If you plan on doing iterative or Agile development, where the requirements are defined gradually as you go along, then a fixed-price contract just won’t work, and will be a higher risk even though many (misinformed) executives believe that fixed price is always lower risk. Going with a time and materials contract requires a much higher level of trust with the vendor, but it will end up as a much lower risk since there won’t be the constant stream of change requests that you typically see with a waterfall contract. Besides, if you can’t trust the vendor, why are you working with them?

TIO had a number of issues pop up during their implementation: the CEO was replaced just before the vendor was engaged, and the business sponsor was replaced in the middle of development; however, due to a strong sponsorship and governance team, they were able to weather these storms. In fact, he sees the strength of the governance team as a critical success factor, along with using your “A team” for implementation, finding a committed vendor and engaging the business early.

He had a really good point about making sure that your project manager is not a business subject matter expert and does use an appropriate project methodology such as Agile. The PM is supposed to be the coordinator and facilitator of the entire project, not an SME who will dive down the rabbit hole of specific business issues and requirements at the first sign of trouble. I’m a strong believer that PMs should manage projects rather than gather requirements, write code or take on most other activities, since that distracts them from the project and increases the risk that it will run off the rails when no one is looking; it’s good to hear that at least one other person shares my opinion on this.

They used Agile project methodology and Spiral development methodology, with six-week code cycles. The team was fairly small: seven TIO team members, an internal business reference group (the SMEs, who eventually became the rollout leads), four Appian people onsite, four offshore Appian team members, and four part-time specialists. The project started with Appian as the technical lead, but that shifted through the first three project phases, and now TIO essentially works on its own with no assistance from Appian. They established a center of excellence to assist with taking this success on to other projects, and that seems to be working: the initial project cost them over $3M, and the next one – which is three times more complex – cost only one-third of that since BPM is now built into their enterprise infrastructure. And, at the end of the day, they’re seeing a 30% productivity increase in their initial implementations.

Their biggest challenges were the introduction of Agile and Spiral methodologies, the geographic dispersion of the team, and recruiting the right talent for their remote location; they used internal education both for the methodologies and to grow their own BPM specialists locally when they couldn’t recruit them.

There were several things that they did that he feels contributed to their success, such as daily headline meetings, engagement with the business, fostering team spirit, and highlighting and addressing the riskier requirements early so that they can be tried out by the business and tuned as required. He also felt that Agile was a huge contributor, since there were no more 300-page requirements documents that were either not read or not understood by the business, but signed off regardless. He finished with a few strategic lessons learned: begin with the end in mind, including planning how this will become part of the infrastructure; and pick a big project in order to ensure commitment and executive engagement.

Getting Business Process Value From Social Networks #GartnerBPM

For the last session of the day, I attended Carol Rozwell’s presentation on social network analysis and the impact of understanding network processes. I’ll be doing a presentation at Business Rules Forum next month on social networking and BPM, so this is especially interesting even though I’ll be covering a lot of other information besides social graphs.

She started with the (by now, I hope obvious) statement that what you don’t know about your social network can, in fact, hurt you: there are a lot of stories around about how companies have and have not made good use of their social network, and the consequences of those activities.

She posited that while business process analysis tells us about the sequence of steps, what can be eliminated and where automation can help, social network analysis tells us about the intricacies of working relationships, the complexity and variability of roles, the critical people and untapped resources, and operational effectiveness. Many of us are working very differently than we were several years ago, but this isn’t just about “digital natives” entering the workforce, it’s about the changing work environment and resources available to all of us. We’re all more connected (although many Blackberry slaves don’t necessarily see this as an advantage), more visual in terms of graphical representations and multimedia, more interactively involved in content creation, and we do more multitasking in an increasingly dynamic environment. The line between work and personal life blurs, and although some people decry this, I like it: I can go to many places in the world, meet up with someone who I met through business, and enjoy some leisure time together. I have business contacts on Facebook in addition to personal friends, and I know that many business contacts read my personal blog (especially the recent foodie posts) as well as my business blog. I don’t really have a lot to hide, so I don’t have a problem with that level of transparency; I’m also not afraid to turn off my phone and stop checking my email if I want to get away from it all.

Your employees are already using social media, whether you allow it within your firewall or not, so you might as well suck it up and educate them on what they can and can’t say about your company on Twitter. If you’re on the employee side, then you need to embrace the fact that you’re connected, and stop publishing those embarrassing photos of yourself on Facebook even if you’re not directly connected to your boss.

She showed a chart of social networks, with the horizontal axis ranging from emergent to engineered, and the vertical axis from interest-driven to purpose-driven. I think that she’s missing a few things here: for example, open source communities are emergent and purpose-driven, that is, at the top left of the graph, although all of her examples range roughly along the diagonal from bottom left to top right.

There are a lot of reasons for analyzing social networks, such as predicting trends and identifying new potential sources of resources, and a few different techniques for doing this:

  • Organizational network analysis (ONA), which examines the connections amongst people in groups
  • Value network analysis (VNA), which examines the relationships used to create economic value
  • Influence analysis, a type of cluster analysis that pinpoints people, associations and trends

Rozwell showed an interesting example of a company’s organizational chart, then the same players represented in an ONA. Although it’s not clear exactly what the social network is based on – presumably some sort of interpersonal interaction – it highlights issues within the company in that some people have no direct relation to their direct reports, and one person who was low in the organizational chart was a key linkage between different departments and people.

She showed an example of VNA, where the linkages between a retailer, distributor, manufacturer and contract manufacturer were shown: orders, movements of goods, and payments. This allows the exchanges of value, whether tangible or intangible, to be highlighted and analyzed.

Her influence analysis example discussed the people who monitor social media – either within a company or their PR agency – to analyze the contributors, determine which are relevant and credible, and use that to drive engagement with the social media contributors. I get a few emails per day from people who start with “I read your blog and think that you should talk to my customer about their new BPM widget”, so I know that there are a lot of these around.

There are some basic features that you look for when doing network analysis: central connectors (those people in the middle of a cluster), peripheral players (connected to only one or two others), and brokers (people who form the connection between two clusters).
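To make those features concrete, here’s a small sketch that scores a toy interaction network using the networkx library (my choice of tool, not anything Rozwell showed): high degree centrality suggests central connectors, nodes with a single connection are peripheral players, and high betweenness centrality suggests brokers bridging clusters. The names and edges are invented.

```python
# Toy example of spotting central connectors, peripheral players and brokers
# in an interaction network, using networkx (my tool choice; data invented).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # cluster 1
    ("ana", "bob"), ("ana", "cam"), ("bob", "cam"), ("ana", "dee"),
    # bridge between the clusters
    ("cam", "eve"), ("eve", "fay"),
    # cluster 2
    ("fay", "gus"), ("fay", "hal"), ("gus", "hal"), ("hal", "ivy"),
])

degree = nx.degree_centrality(G)            # who interacts with the most people
betweenness = nx.betweenness_centrality(G)  # who sits on the most shortest paths

print("central connectors:", sorted(degree, key=degree.get, reverse=True)[:2])
print("peripheral players:", [n for n, d in G.degree() if d == 1])
print("likely brokers:", sorted(betweenness, key=betweenness.get, reverse=True)[:2])
```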

There are some pretty significant differences between ONA, VNA and business process analysis, although there are some clear linkages: VNA could have a direct impact on understanding the business process flows, while ONA could help to inform the roles and responsibilities. She discussed a case study of a company that did a business process analysis and an ONA, and used the ONA on the redesigned process in order to redesign roles to reduce variability, identify roles most impacted by automation, and expose critical vendor relationships.

Determining how to measure a social network can be a challenge: one telecom company used records of voice calls, SMS and other person-to-person communications in order to develop marketing campaigns and pricing strategies. That sounds like a complete invasion of privacy to me, but we’ve come to expect that from our telecom providers.

The example of using social networks to find potential resources is something that a lot of large professional services firms are testing out: she showed an example that looked vaguely familiar where employees indicated their expertise and interests, and other employees could look for others with specific sets of skills. I know that IBM does some of this with their internal Beehive system, and I saw a presentation on this at the last Enterprise 2.0 conference.

There are also a lot of examples of how companies use social networks to engage their customers, and a “community manager” position has been created at many organizations to help manage those relationships. There are a lot of ways to do this poorly – such as blasting advertising to your community – but plenty of ways to make it work for you. Once things get rolling in such a public social network, the same sort of social network analysis techniques can be applied in order to find the key people in your social network, even if they don’t work for you, and even if they primarily take an observer role.

Tons of interesting stuff here, and I have a lot of ideas of how this impacts BPM – but you’ll have to come to Business Rules Forum to hear about that.