The New Software Industry: Ray Lane

I’m at the Microsoft campus in Mountain View attending the New Software Industry conference, put on by Carnegie Mellon West and the Haas School of Business. I interviewed a few of the people from CMU West a few months ago about the new Master of Software Management program, and ended up being invited to attend here today. Since I’m down here for TUCON this week, it was just a matter of coming in a day early and fighting the traffic down from the city (although I left San Francisco at 7:30 this morning, I still arrived late, around 9:15).

Unfortunately, I missed the brief opening address which, according to the program, featured Jim Morris, Dean of CMU West, and Tom Campbell, Dean of Haas, so my day started with Ray Lane of Kleiner Perkins (formerly of Oracle) talking about the personal enterprise, or what I would call Enterprise 2.0.

Lane started with a discussion about how the software industry is changing, including factors such as packaging (including SaaS) and vertical focus. I found it interesting, if not exactly surprising, that he has a very American-centric view of the industry: he’s really talking about the software industry in the U.S., not the global industry, and he spoke about India and China gaining market share in software as some sort of external force rather than as part of the industry.

He had some interesting points: a call to action, which included leveraging community power via mashups and other collaborative methods; and a look at how platforms are moving from monoliths to clouds (i.e., services exist in the cloud and are called as required). He covered some basics about Web 2.0 and web-driven capabilities. Since I’ve been so immersed in this for such a long time, there wasn’t much new here for me, although he had some interesting examples, particularly about collaboration and user-driven content.

He talked about the “personal enterprise”, where consumer web applications inspire new enterprise applications, or what many of us have been talking about as Enterprise 2.0. He made a great point that being at home somehow allows us to just try something new online, whereas the act of going into the office makes us want to spend a year evaluating rather than just trying something, and that we need to change that notion.

He gave seven laws for Enterprise 2.0 applications:

  • serves individual needs
  • viral/organic adoption
  • contextual, personalized information
  • no data entry or training required
  • delivers instantaneous value
  • utilizes community, social relationships
  • minimum IT footprint

I’d love to expand further on each of these, but I’m trying to get this conference blogging back to something like real-time, so that will have to wait for another post.

He finished up with some examples of personal enterprise applications, with some discussion about what each of them contributed to advancing software industry business models:

  • Services: Webex, Skype, RIM, Google
  • Applications: Salesforce.com, NetSuite, RightNow
  • Collaboration: SuiteTwo, Visible Path

Access to the Microsoft guest wifi is tightly guarded and requires an hour or so of turnaround to get login credentials, so this first post is coming out late and the others will trickle along throughout the day. All of the posts for this conference are available here.

BEA Dev2Dev days

BEA is holding a series of half-day developer seminars in a number of cities in Europe and the Americas, focussed on building enterprise mashups with their new/rebranded en.terpri.se platform. I was excited to see that one will be in Toronto, since it seems like vendors always skip my hometown; however, I’m less excited to see that it’s the only one of the seminars to be held at the same time as their own user conference, which means that I have to miss it.

A month of travel

Forgive me readers, for I have slacked off. It’s been 8 days since my last blog post. I blame the Canadian government, who insist on me doing my personal taxes by April 30th.

I’ve had a number of vendor product demos over the past several weeks, and it’s time to start blogging about them before I start into a month of travel: I’m giving a presentation at the TIBCO user conference in San Francisco next week, then attending the BEA user conference in Atlanta the following week, a few days vacation in Nova Scotia after that, then on to Las Vegas for a presentation at the Shared Insights Portals and Collaboration conference. Watch for live blogging from all three conferences, although not from my vacation.

Enterprise 2.0 TV launches today

It slipped past the earlier announced launch date of April 9th, but it looks like Dion Hinchcliffe’s Enterprise 2.0 TV Show will launch today. As of 2am (ET), there’s only a short snippet available on the site, but I’m hopeful. I’m also hopeful for a subscription feed via iTunes so that I can watch this on my iPod, but I’m not holding my breath.

ProVision 6.0 release

I finally made it home from Chicago around 1:30 this morning: United Airlines wimped out and cancelled all their flights, but Air Canada came through in the crunch.

I’m now on a Proforma webinar about their new V6.0 release of ProVision, of which I had a brief preview at their user conference last fall. Some highlights:

  • Browser-based access for collaboration, although based on the comments that I heard at the user conference, I suspect that this does not include full modelling capabilities.
  • Web services access to Knowledge Exchange — this is pretty exciting, and I’d like to hear more about it. For example, if ProVision exposed process models via a web service, could a BPMS consume that model directly?
  • Embedded Crystal Report functionality, which I recall was a big deal for the user conference attendees.
  • Updated UI in their desktop application, which was looking a bit dated.
  • The concept of dimensions in models, which allows alternative versions of a model to be created along specific dimensions, where a dimension may be, for example, geography, or as-is versus to-be. In one model, then, you can compare North American as-is models with European to-be models, or whatever else you want to define based on your dimensions. Pretty powerful stuff; I’ve sketched the idea after this list.
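
To make the dimensions idea a bit more concrete, here’s a rough Python sketch of how I picture it; this is purely my own illustration with made-up names, not Proforma’s actual API or data model:

```python
# Hypothetical illustration of "dimensions" on process models (not ProVision's API).
# Each variant of a logical model is keyed by a tuple of dimension values, so
# alternative versions (e.g., North America as-is vs. Europe to-be) can coexist
# and be compared.

from dataclasses import dataclass, field


@dataclass
class ProcessModel:
    name: str
    activities: list[str] = field(default_factory=list)


class DimensionedModel:
    """Holds variants of one logical model, keyed by dimension values."""

    def __init__(self, name: str, dimensions: tuple[str, ...]):
        self.name = name
        self.dimensions = dimensions          # e.g., ("geography", "lifecycle")
        self.variants: dict[tuple, ProcessModel] = {}

    def add_variant(self, values: tuple, model: ProcessModel) -> None:
        assert len(values) == len(self.dimensions)
        self.variants[values] = model

    def compare(self, a: tuple, b: tuple) -> tuple[set[str], set[str]]:
        """Return activities only in variant a, and activities only in variant b."""
        first = set(self.variants[a].activities)
        second = set(self.variants[b].activities)
        return first - second, second - first


# Usage: compare the North American as-is variant with the European to-be variant.
order_to_cash = DimensionedModel("Order to Cash", ("geography", "lifecycle"))
order_to_cash.add_variant(("North America", "as-is"),
                          ProcessModel("O2C NA", ["enter order", "fax confirmation", "invoice"]))
order_to_cash.add_variant(("Europe", "to-be"),
                          ProcessModel("O2C EU", ["enter order", "auto-confirm", "invoice"]))
print(order_to_cash.compare(("North America", "as-is"), ("Europe", "to-be")))
```

In ProVision the comparison would presumably be visual rather than programmatic, but the underlying idea of keying model variants by dimension values is, as I understand it, the same.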

I’m not familiar enough with ProVision to tell exactly what’s new and what was there before, but it does look like there are some significant improvements in this version.

ProVision 6.0 is being released over the next two weeks. A replay of the webinar will be available on their website.

BrainStorm BPM Day 2: Pat Dowdle

I was wrong: the last one wasn’t my last session after all, since I had time for one more, Pat Dowdle on a Roadmap to Implementing Process-Based Management, based on CAM-I’s emerging Process-Based Management (PBM) assessment and framework. There are a number of pieces to this:

  • Mindset/culture (how things are done; values, rules, practices)
  • End-to-end processes (classification; portfolio; structure)
  • Process-based measures (process performance; incentives/compensation)
  • Initiative integration (ABC/M; ISO/quality standards; Six Sigma)
  • All centred around customer expectations

He went through a management model for process ownership, from a process council to process owners to team leaders and the team, and talked about a roadmap to PBM through the seven key milestones: awareness, commitment, engagement, managing processes, integration, embed and optimization. In this, he talked about moving from process metrics to transformation metrics once processes start to integrate across the organization: critical for moving from locally optimized processes to global optimization, which seems to be my personal theme for the day. He also named the transitions between the seven milestones: discovery, foundation, transition, transformation (moving from managing processes to integration), institutionalization and realization.

There’s a presentation on the CAM-I website that goes into more detail about PBM, including much of this, and there’s quite a bit of their material on BPMInstitute.org (just search for CAM-I). Check out slide 13 of the CAM-I presentation for a great chart that maps different quality programs (such as Six Sigma) against the seven milestones.

My flight home was just cancelled, so I may hang around a bit longer…

BrainStorm BPM Day 2: Dan Madison

Last session of the day for me: I’m headed off to the airport following this, although I realize that the probability of a flight in or out of Chicago being on time when it’s snowing is near zero. With some luck, I’ll make it home tonight. The only things that I’m missing are some sessions where the vendors get to show off their products, and the final wrapup keynote.

This session by Dan Madison is on Creating the “To Be” Process, something that I often do with customers, and I’m always looking to learn new tips and techniques from others who do the same thing. This session is part of the Organizational Performance symposium, the first of those that I’ve attended these two days.

He suggests a number of “lenses of analysis” to look at a process and derive the “to be” process from the problems seen in the current one.

First, create a customer report card which, for each ranked criterion, shows the current process performance (usually around quality and timeliness), what the best possible performance in that process would look like, and how the two main competitors or outsourcers perform.
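
As an illustration (my own invented example, not Dan’s actual template), a report card might boil down to something like this:

```python
# Illustrative customer report card: one row per ranked criterion, showing current
# performance, best possible performance, and the two main competitors.
# All names and numbers here are invented for the example.

report_card = [
    # (criterion, rank, current, best possible, competitor A, competitor B)
    ("Order turnaround time", 1, "5 days",     "same day",       "2 days",     "3 days"),
    ("Error-free invoices",   2, "92%",        "100%",           "97%",        "95%"),
    ("Status visibility",     3, "phone only", "self-serve web", "web portal", "email updates"),
]

for criterion, rank, current, best, comp_a, comp_b in sorted(report_card, key=lambda r: r[1]):
    print(f"{rank}. {criterion}: current={current}, best={best}, "
          f"competitor A={comp_a}, competitor B={comp_b}")
```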

Second, look at the things that frustrate the people who are currently participating in the process, since there’s a high correlation between frustration and quality problems: frustration has the ability to act as a lens focussed on problem areas. Once frustrations are identified, the process participants tend to generate a ton of ideas on how to fix the problems, and there’s a huge amount of buy-in for changing the process from the grassroots level. I’ve definitely seen this with my customers. There was an audience question about how to keep this from becoming a bitch session, and Dan said that he uses some basic rules if things start to go that way: only process problems are discussed, not people problems; and each person can only bring forward their three main frustrations.

Third, look at the time required for each type of work in the process: processing, waiting, rework, moving, inspecting and setup. He finds that processing — the actual work — is typically only 2-20% of the total time, which indicates that there’s a huge amount of inefficiency in the process. Even within that small percentage, not all of the time necessarily adds value to the process. If you’ve automated your process with BPM, then you can gather this information with your system, but if your processes are still manual, then figuring out how your process time breaks down will be manual, too.
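
To put some numbers on that 2-20% figure, here’s a quick sketch with invented times showing how the breakdown and the processing share might be calculated:

```python
# My own sketch of the time lens, with made-up numbers: break total elapsed time
# into the six categories and see how little of it is actual processing.

time_by_category_hours = {
    "processing": 3,    # the actual value-adding work
    "waiting": 40,      # sitting in queues and inboxes
    "rework": 6,
    "moving": 2,
    "inspecting": 4,
    "setup": 1,
}

total = sum(time_by_category_hours.values())
processing_share = time_by_category_hours["processing"] / total

print(f"Total elapsed time: {total} hours")
print(f"Processing share: {processing_share:.0%}")   # ~5% here, squarely in the 2-20% range
for category, hours in sorted(time_by_category_hours.items(), key=lambda kv: -kv[1]):
    print(f"  {category:<11} {hours:>3} h ({hours / total:.0%})")
```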

Fourth, apply a cost lens such as activity-based costing: ABC calculates what it really costs to deliver a specific product or service by looking at the labour, overhead and material costs of each step in a process.
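
A bare-bones version of that calculation, again with invented numbers of my own, might look like this:

```python
# Simplified activity-based costing sketch (invented rates and steps).
# Cost per process step = labour + overhead + material; sum the steps to get the
# real cost of delivering one instance of the product or service.

steps = [
    # (step, labour hours, labour rate $/h, overhead $/instance, material $/instance)
    ("take order",    0.50, 40.0,  5.0,  0.0),
    ("credit check",  0.25, 55.0,  8.0,  0.0),
    ("pick and pack", 1.00, 30.0, 12.0,  4.5),
    ("ship",          0.25, 30.0,  6.0, 11.0),
]

total_cost = 0.0
for name, hours, rate, overhead, material in steps:
    step_cost = hours * rate + overhead + material
    total_cost += step_cost
    print(f"{name:<14} ${step_cost:6.2f}")

print(f"{'total':<14} ${total_cost:6.2f} per instance")
```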

Fifth, apply a quality lens such as Six Sigma to measure defect rates or some other relevant quality measure.
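
The standard Six Sigma arithmetic here is defects per million opportunities (DPMO); a quick sketch, again with invented counts:

```python
# Quality lens sketch: defects per million opportunities (DPMO), with invented counts.

defects = 38
units = 1_200
opportunities_per_unit = 5   # ways each unit could be defective

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
yield_fraction = 1 - defects / (units * opportunities_per_unit)

print(f"DPMO: {dpmo:,.0f}")            # ~6,333 defects per million opportunities
print(f"Yield: {yield_fraction:.2%}")  # ~99.37%
```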

Last, take a look at benchmarks and best practices: look at your direct competitors and what they’re doing, and look at companies that have a process similar to your problem process and are considered to be world class, regardless of their industry.

He then moved on to design principles for the to-be process:

  • Design the process around value-adding activities.
  • Provide a single point of contact for customers and suppliers.
  • If the inputs coming into the process naturally cluster, create a separate process for each cluster.
  • Ensure a continuous flow of the “main sequence.”
  • Bring downstream information needs upstream.
  • Involve as few people as possible in performing a process [our old adage of reducing handoffs lives!].
  • Ensure 100% quality at the beginning of the process.
  • Use co-located or networked teams for complex issues.
  • Redesign the process first, and then automate it.

Putting it all together, creating the to-be is the synthesis of:

  • Customer feedback
  • Worker frustrations
  • Time analysis
  • Cost analysis
  • Quality analysis
  • Benchmarking and best practices
  • Design principles
  • Information technology

Dan’s obviously experienced at this: he does it as a consultant, he teaches process mapping and improvement at the local university, and he has a couple of books that he’s written on it. I haven’t read his books, but I’ll be checking them out soon.

BrainStorm: Meeting my peeps in Chicago

I’m starting to see more and more familiar faces at these BPM conferences, and this one is no exception: I’ve met Gregg Rock and Tom Dwyer of BPMInstitute.org at a couple of conferences now, finally met Brett Champlin at the last Gartner conference, and Bruce Silver and I meet up so often that our spouses are starting to get suspicious. 🙂  There’s also the people who work for the vendors — I seem to see many of the same ones at every show, and these shows must be some small form of purgatory for them.

It’s also fun to meet people who I’ve only met online previously, and I’ve had a couple of those experiences here in Chicago. David Novick, who added a comment to my blog that turned into an email conversation, recognized me in one of the sessions — I guess that Rannie’s new headshot of me is paying off. At lunch, I happened to sit at the same table as Barbara Saxby from Ramco, who was on a webinar that I moderated last month. And Jean Campagna of Resolvit found me yesterday evening at the vendor showcase/drinks party to say that her colleague was sending me a hello: apparently her colleague back at the office has been reading my blog to Jean over the phone.

BrainStorm BPM Day 2: Ken Orr

For the first breakout session of the day, I attended Ken Orr’s talk on Business Process Driven Enterprise Architecture. He started out with some observations: improving business processes is essential for enterprises; business architecture is critical; modelling is critical; and business processes are hard to manage in the real world and especially in big organizations. Nothing earth-shattering here, but excellent points.

He made a great analogy by talking about IT levees — fragile yet critical applications and systems where you know that they’re a weak point but just never find the time or money to fix them — and understanding when they’re going to break. Apparently, a year before Hurricane Katrina, there was an exercise that modelled exactly what would happen if a category 4 or 5 hurricane hit New Orleans, but nothing was done; when Katrina hit, the levees failed exactly as modelled. Orr talked about mission-critical spreadsheets as being one class of IT levees that are all set up to fail at the wrong time.

He talked about how enterprise architecture is like city planning, where your deliverables are things like a city plan, a zoning plan, a building code and an approved building-materials list. Sticking with the disaster analogies, he talked about how building codes are the result of disasters, and the obvious analogy with software and system disasters is pretty clear.

He covered off their enterprise architecture framework briefly, but used it mostly to discuss how the different layers in a framework interact: in short, technology changes enable business changes, and business changes drive the need for technology changes. He also talked about determining what type of business you’re in, that is, what business processes you’re really doing, so that you can figure out whether or not you should be in those businesses as well as how to improve them. Funnily enough, he really answered part of the question that I asked in the panel in the previous session with respect to getting an end-to-end business process view, but that’s sort of expected from an enterprise architecture person, since EA can be a key tool in doing just that. In his terminology, what I’m talking about is a value stream, defined by James Martin in The Great Transition as “…an end-to-end set of activities which collectively create value for a customer.”

Update: I forgot to add “Orr’s rules of modelling”, which he gave after I had shut down my laptop, so they were just scribbled on a piece of paper:

  1. It’s more important to be clear than correct. If you’re clearly wrong, someone will correct you. If you’re obscurely correct, you may never know.
  2. It’s not important that your first model is correct, only that your last model is correct.