TUCON: Mark Elder and Venkat Swaminathan

I played hooky for a couple of sessions to go over my presentation for later today; as I mentioned earlier, TIBCO’s product base is broader than my interests, so there were a couple of natural dead spots in the schedule for me.

Just before my presentation with Tim Stephenson, I sat in on Mark Elder and Venkat Swaminathan, both from TIBCO, talking about BAM. Since this room (which is also where I present next) has crappy wifi reception, publication will be delayed until after I present: it might be considered a bit cavalier to dash out between presentations just to post.

They spent some amount of time at the beginning explaining what BAM is and why you need it; I’ll assume that most of you already know that stuff.

The interesting part (for me) is the specifics of TIBCO’s BAM product, iProcess Insight, which is a plug-in to the BusinessFactor framework to provide BAM capability for monitoring iProcess process flows. Like most of the other BAM products that I’ve seen, it allows the definition of industry-specific KPIs in the business processes, then provides real-time monitoring of those KPIs with drill-downs from the aggregate statistics to the details. You can also use BusinessFactor to integrate external data sources, like a customer database. There’s no shared process model between the BPM execution environment and BAM, since the first step is to download process definitions from the iProcess Engine to create a project; changes to the iProcess model require the model to be re-downloaded to the iProcess Insight project and the project manually updated to suit the updated process model. With all the round-tripping problems that we already have with process modelling in one environment and execution in another, I would have favoured a shared model approach.

Once you have your project defined, the BAM runtime sends information over to the process engine about what to monitor, and the engine sends back the relevant events to be aggregated, analyzed and presented within iProcess Insight.
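
As a rough illustration of that event flow (a sketch with hypothetical event names and my own structure, not TIBCO’s actual API), the BAM side essentially correlates start/complete events from the engine into KPIs that a dashboard can aggregate and drill into:

    # Sketch only: event names and structure are assumptions, not iProcess Insight's API.
    import datetime

    class StepCycleTimeKpi(object):
        def __init__(self):
            self.open_steps = {}   # process instance id -> start timestamp
            self.durations = []    # (instance id, cycle time), kept for drill-down

        def on_event(self, event):
            # event: {"type": "STEP_STARTED" or "STEP_COMPLETED", "instance": id, "ts": datetime}
            if event["type"] == "STEP_STARTED":
                self.open_steps[event["instance"]] = event["ts"]
            elif event["type"] == "STEP_COMPLETED":
                started = self.open_steps.pop(event["instance"], None)
                if started is not None:
                    self.durations.append((event["instance"], event["ts"] - started))

        def average_cycle_time(self):
            """The aggregate figure a dashboard would display."""
            if not self.durations:
                return None
            total = sum((d for _, d in self.durations), datetime.timedelta(0))
            return total / len(self.durations)

A real product does far more than this (deadlines, resource metrics, external data joins), but correlating engine events into aggregates while retaining the detail for drill-down is the core of the pattern.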

You can define different dashboards for different parts of the process, with different KPIs visible in each dashboard. There are some standard dashboard views, but it’s pretty configurable/customizable for views such as balanced scorecards or even geographical overlays.

Looking at the components of iProcess Insight, there’s a wizard interface to initially create a BusinessFactor project that will become your BAM dashboard, a process monitor for the start/end of procedures, a step monitor for the start/end of steps and their deadlines, a resource monitor for user/group metrics, and a supervisor capsule to allow someone with the appropriate credentials to change a specific process instance.

We then looked at a comparison between iProcess Insight and iProcess Analytics: basically, Insight is near-real-time, event-driven operational BAM, whereas Analytics is historical analysis and reporting based on batch statistics export from the process engine. Many BPM vendors (especially the more mature ones) end up with this same split of functionality, since they tend to have first built the analytics years ago when they built the process execution engine, then OEM’d or bought a BAM engine and strapped it on the side within the last year or two.

Based on the audience questions, and some earlier observations, I’m starting to get the idea that TIBCO’s “user” base is pretty technical, with not much representation from the business side of organizations. Given that many of their products are development and low-level integration tools, this isn’t surprising overall, but I expected a few more non-geeks in the BPM sessions. If this is any indication of who’s using TIBCO within customer organizations, TIBCO needs to focus more on the business side of their customers to really play in the BPM space.

TUCON: Diane Schueneman

Next in the morning’s general session was Merrill Lynch’s Head of Global Infrastructure, Diane Schueneman. She focussed on change and complexity, and how to manage that while maintaining a client focus. Like all the financial institutions that I work with, this comes down to four types of change: disintermediation, competition (especially when there is ambiguity over whether another firm is a competitor or customer), innovation and regulation. This all creates a huge amount of complexity; many companies try to address this by adding more complexity, which leads to client expectations that are rarely met. With more than 9 million transactions processed per day, Merrill Lynch needs to have a better way to handle client expectations, which they’ve done by moving from a product silo focus to a client focus, where the service is organized from the client’s point of view.

She described how they did this:

  • 70% of IT spending on innovation, 30% on maintenance (the industry average is more like the opposite)
  • Straight-through processing, with a goal of 99% of transactions processed without human intervention (see the quick arithmetic below)
  • Application availability 99.95%
  • Global sourcing to reduce costs
  • Focus on getting client satisfaction into the top 3 within the industry
  • Use of e-channels
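
A quick back-of-the-envelope check on that STP goal (my arithmetic, not hers): at more than 9 million transactions per day, even hitting the 99% target still leaves on the order of 90,000 transactions a day requiring human intervention, so every additional tenth of a percent of STP removes roughly another 9,000 manual touches daily.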

Schueneman was very complimentary of the Spotfire acquisition announced earlier, since they use predictive analytics heavily within Merrill Lynch and see the value: they can actually predict most fraud before it ever happens based on patterns in the data.

An interesting piece of trivia from Schueneman: during the 3 days following 9/11 when all planes in the US were grounded, not a single cheque was cleared in the country because the paper cheques were all sitting on planes. This resulted in the development of new legislation that allowed for electronic scanning and clearing of cheques at the point of origin. Now if only we could clear a US cheque in Canada in less than 30 days.

Kicking off TUCON: Vivek Ranadivé

Although I kicked off TUCON in an Irish pub down the road two nights ago with some jet-lagged TIBCO folks, it really started last night with the opening reception in the solutions showcase. The reception was packed, and it turns out that there’s over 1100 people here, which is pretty amazing for a user conference.

There’s the usual flurry of press releases from TIBCO and their partners, such as announcing version 2.0 of Business Studio with full BPMN 1.0 support, although I’ll wait until after my joint presentation with Tim Stephenson, the engineering manager for Business Studio, later today to see all the details of the new version. This morning, they also announced that they’re acquiring Spotfire, an analytics firm.

It’s been a while since I’ve been to a user conference of this size — probably the last one was FileNet’s about two years ago, prior to the assimilation — and I’d forgotten about the light and music show in the opening sessions. These companies spend a lot of money to create choreographed video and music shows, always with fog effects, that don’t add a lot of value, although this one did have a cute sequence of two jugglers intended to make a point about collaboration. It was mercifully short.

After a brief intro, we moved into the first general session with TIBCO CEO Vivek Ranadivé, who will be throwing out the first ball at the Giants game that we’re all attending tomorrow night. He started with the paradox of how the cost of implementing IT keeps going up when the cost of CPU, storage, memory and network are all declining: it’s all about too much customization. This is an interesting echo of the discussions yesterday at the New Software Industry conference about how product companies now make more than half of their revenue from services: that’s one excellent explanation for why IT costs keep going up.

Ranadivé showed a vision of what will be happening in the next ten years: SOA, BPM, predictive business and AJAX, although that’s less a prediction than a projection of well-entrenched trends. For the large corporations represented in the audience, however, it probably looks like futuristic magic.

He was then joined briefly by Christopher Ahlberg, the CEO of Spotfire, to talk about what they do and why it’s a great fit (naturally) to be acquired by TIBCO.

Disclosure: TIBCO is a client, and they paid my travel expenses to be at this conference.

The New Software Industry: Craig Mundie

Craig Mundie, Chief Research and Strategy Officer at Microsoft, gave today’s lunch address; this post is out of order because I was not about to whip out my laptop and displace the best conference lunch that I have ever had — grilled Kobe beef over salad greens in a lemon fennel vinaigrette with baby pear tomatoes, spiced candied walnuts, Point Reyes blue cheese and a puff pastry triangle with balsamic reduction, followed by a marquise au chocolat of dark truffle chocolate, garnished with whole hazelnuts, a chocolate leaf and fresh raspberries — so I took notes in an actual paper notebook. Microsoft did host us in their well-appointed conference facilities and provided the afore-mentioned lunch, so they deserve the time to chat us up during lunch.

Mundie, who interpreted all the morning’s discussions about services through a narrowly focussed SaaS lens, discussed how there are opportunities to complement internal enterprise applications with services that are in the cloud.

He spent quite a bit of time discussing processor speed increases, based on the premise that we’ve reached a fundamental limit to processor speeds at around 3GHz (since increases without spontaneous combustion have been achieved in the past by lowering voltages, which just can’t be lowered any further), and how multi-core processors will drive the next wave of processor speed increases. The result of this, however, is that machines that are already operating at far below capacity will have even more idle cycles. He discussed the idea of “fully productive computing” to absorb the idle cycles with speculative execution activities, such as anticipating and preloading the next applications that the user is most likely to run — a discussion that turned into a brief ad for Microsoft Vista.
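
To make that concrete, here’s a toy sketch (my illustration, not anything Mundie showed) of the speculative execution idea: spend otherwise-idle cores computing results the user will probably ask for, so they’re free if the guess turns out to be right:

    # Toy speculation cache; the prediction of what the user will do next
    # is assumed to come from elsewhere.
    import threading

    class SpeculativeCache(object):
        def __init__(self):
            self.results = {}
            self.lock = threading.Lock()

        def prefetch(self, key, expensive_fn):
            """Run expensive_fn on a spare thread/core before it's requested."""
            def worker():
                value = expensive_fn()
                with self.lock:
                    self.results[key] = value
            threading.Thread(target=worker).start()

        def get(self, key, expensive_fn):
            with self.lock:
                if key in self.results:    # the speculation paid off
                    return self.results[key]
            value = expensive_fn()         # it missed; compute on demand as usual
            with self.lock:
                self.results[key] = value
            return value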

In response to a question about the local platform as a “solution” for privacy concerns, he spoke about how providing notice of information gathering, and choice as to how that information is used, alleviates most of the concerns about privacy in a hosted environment.

That’s it for my coverage of the New Software Industry conference. All the presentations will be available online in about a week, and within a few weeks all of the video recorded during the sessions will be on Google Video.

I’ve already attended the opening reception for TUCON, and I’ll be full on with blogging about that tomorrow, except when I’m presenting late in the day.

The New Software Industry: Jim Morris and Bob Glushko

Jim Morris of CMU West and Bob Glushko of UC Berkeley summarized the day in a final session, and although it’s coming up on 6pm and I’m eager to get back on the 101 up to San Francisco for the TUCON reception, I’ve been fascinated by today’s conference and am not about to leave early. As Morris pointed out, this was originally a two-day conference crammed into one day.

Glushko gave us the phrases that stuck with him from the sessions today:

  • No-man’s land as a zone on a graph of business models
  • Sweet and sour spots for business models
  • Impact and complexity of the product-service mix
  • Service systems, and how they’re embedded in social and economic systems
  • The “nifty nine”, being the nine SaaS public companies that have achieved (collectively) $1.4B in revenues
  • Data lock-in as the dirty secret of open source
  • Open source as a lever for putting pressure on your competitor’s business model
  • Emerging architecture, which he considered to be the oxymoron of the day
  • The tension between front stage and back stage design
  • Collective action in the software industry

Morris chimed in with his favourite, which was also mine: in World of Warcraft you can tell if someone has a Master’s in Dragon Slaying, and how good they are at it, whereas the software industry in general, and the open source community in particular, has no equivalent (but should).

Morris pointed out that Google and Amazon are gathering a huge amount of information about us, and we’re giving it to them for free; at some point in the future, they’re going to leverage this information and make a huge amount of money from it — not by violating the privacy of an individual’s data, but through the aggregate analysis of that data.

At the end of it all, it’s clear to me that this conference is pretty focussed on the new software industry in the valley, or at most, the new software industry in the U.S. It’s true, there’s been a disproportionate amount of software innovation done within 50 miles of where I’m sitting right now, but I think that’s changing, and future “new software industry” conferences will need to be more inclusive of the global software industry, rather than see it as an external factor.

The New Software Industry: David Messerschmitt

David Messerschmitt, a prof at UC Berkeley and the Helsinki University of Technology, finished the formal presentations for the day with a talk on how inter-firm cooperation can be improved in the software industry. This is an interesting wrap-up, since we’ve been hearing about technology, applications and business opportunities all day, and this takes a look at how all these new software industry companies can cooperate to the benefit of all parties.

He started out by proposing a mission statement: focus the software industry’s attention and resources on providing greater value to the user and consumer. This has two aspects: do less harm, and do more direct provision of value to the customer rather than the computational equivalent of administrivia.

In general, the software industry has a fairly low customer satisfaction rate of around 75%, whereas specific software sectors such as internet travel and brokerage rank significantly higher. Services provided by people have a lower satisfaction rate (likely due to the variability of service levels), and the satisfaction rates are decreasing each year. Complaints are focussed on gratuitous change (change due to platform changes rather than anything that enhances user value) and security, and to some extent on having to change business processes to match an application’s process rather than having the system adapt to their business process. Certainly, there are lessons here for BPM implementations.

Messerschmitt raised the issue of declining enrolment of women in computer science, which he thinks is in part due to the perception that computer science is more about heads-down programming rather than about dealing with users’ requirements. He sees this as a bit of a canary in a coal mine, indicating some sort of upcoming problem for the computing industry in general if it is driving away those who want to deal with the user-facing side of software development. Related to that, he recommends the book Democratizing Innovation by Eric von Hippel, for its study of how customers are providing innovation that feeds back into product design and development, not just in software but in many areas of products.

He ended by discussing various ways to improve inter-firm cooperation, such as the Global Environment for Networking Innovations (GENI) initiative and ways to accomplish seamless operation of enterprise systems, and referred to a paper that he recently wrote, Rethinking Components: From Hardware and Software to Systems, which will be published in July’s IEEE Proceedings. He then listed elements of collective action that can be pursued by industry players, academia and professional organizations to help achieve this end:

  • Systematically look at knowledge gaps and ensure that the research is addressing those gaps
  • Create/educate the human resources that are needed by the industry
  • Understand and encourage complementarities, like broadband and certain types of software
  • Structures and processes: capture end-user innovations for incorporation into a product, and achieve a more orderly evolution of technology with the goal of leaving behind many fewer legacies in the future

He’s definitely of the “a rising tide lifts all boats” mindset.

The New Software Industry: Investment Opportunities Panel

Jason Maynard of Credit Suisse moderated a panel on investment opportunities in the new software industry, which included Bill Burnham of Inductive Capital, Scott Russell (who was with two different venture capital firms but doesn’t appear to be with one at this time, although his title is listed as “venture capitalist”), and Ann Winblad of Hummer Winblad Venture Partners.

This was more of an open Q&A between the moderator and the panel with no presentation by each of them, so again, difficult to blog about since the conversation wandered around and there were no visual aids.

Winblad made a comment early on about how content management and predictive analytics are all part of the collaboration infrastructure; I think that her point is that there’s growth potential in both of those areas as Web 2.0 and Enterprise 2.0 applications mature.

There was a lengthy discussion about open source, how it generates revenue and whether it’s worth investing in; Burnham and Russell are against investing in open source, although Winblad is quite bullish on it but believes that you can’t just lump all open source opportunities together. Like any other market sector, there’s going to be winners and losers here. They all seem to agree, however, that many startups are benefiting from open source components even though they are not offering an open source solution themselves, and that there are great advantages to be had by bootstrapping startup development using open source. So although they might not invest in open source, they’d certainly invest in a startup that used open source to accelerate their development process and reduce development costs.

Russell feels that there are a number of great opportunities in companies where the value of the company is based on content or knowledge rather than the value of their software.

SaaS startups create a whole new wrinkle for venture funding: working capital management is much trickier due to the delay in revenue recognition, since payments tend to trickle in rather than being paid up front, even though the SaaS company needs to invest in infrastructure from day one. Of course, I’m seeing some SaaS companies that use hosted infrastructure rather than buying their own; Winblad discussed these sorts of rented environments, and other ways to reduce startup costs such as using virtualization to create different testing environments. There are still a lot of the same old problems, however, such as sales models. She advises keeping low to the ground: get something out to a customer in less than a year, and get a partner to help bring the product to market in less than two years. As she put it, frugality counts; the days of spending megabucks on unnecessary expenses went away in 2000 when the first bubble burst, and VCs are understandably nervous about investing in startups that exhibit that same sort of profligate spending.
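
To see why this bites, here’s a simplified cash-flow comparison with entirely made-up numbers: the same $24,000 of contract value, paid up front as a perpetual license versus trickling in at $1,000 per month as SaaS while the vendor also carries the hosting costs:

    # Hypothetical: $24,000 perpetual license (customer hosts it) versus
    # $1,000/month SaaS where the vendor pays $400/month hosting per customer.
    license_total, saas_total = 0, 0
    for month in range(1, 25):
        license_total += 24000 if month == 1 else 0  # all the cash arrives in month 1
        saas_total += 1000 - 400                     # subscription minus infrastructure cost
        if month in (6, 12, 24):
            print("month %2d: license %6d, saas %6d" % (month, license_total, saas_total))
    # month  6: license  24000, saas   3600
    # month 12: license  24000, saas   7200
    # month 24: license  24000, saas  14400 -- someone has to fund that gap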

Maynard challenged them each to name one public company to invest in for the next five years, and why:

  • Russell: China and other emerging markets require banking and other financial data, which companies like Reuters and Bloomberg (more favoured) will be able to serve. He later made comments about how there are plenty of opportunities in niche markets for companies that own and provide data/information rather than software.
  • Burnham: mapping/GPS software like Tele Atlas, that have both valuable data and good software. He would not invest in the existing middleware market, and specifically suggested shorting TIBCO and BEA (unless they are bought by HP) — the two companies whose user conferences that I’m attending this week and next.
  • Winblad: although she focusses on private rather than public investments, she thinks Amazon is a good bet since they are expanding their range of services to serve bigger markets, and have a huge amount of data about their customers. She thinks that Bezos has a good vision of where to take the company. She recommends shorting companies like CA, because they’re in the old data, infrastructure and services business.

Audience questions following that discussion focussed a lot on asking the VCs’ opinions on various public companies, such as Yahoo. Burnham feels that Yahoo is now in the entertainment industry, not the software industry, so is not a real competitor to Google. He feels that Google versus Microsoft is the most interesting battle to come. Russell thinks that Yahoo is a keeper, nonetheless.

Questions about investments in mobile produced a pretty fuzzy answer: at some point, someone will get the interface right, and it will be a huge success; it’s very hard for startups to get involved, since it requires long negotiations with the big providers.

Burnham had some interesting comments about investing in the consumer versus the business space, and how the metrics are completely different because marketing, distribution and other factors differ so much. Winblad added that it’s very difficult to build a consumer destination site now, like MySpace or YouTube. Not only are they getting into a crowded market, but many of the startups in this area have no idea how to answer basic questions about the details of an advertising revenue model, for example.

Burnham had a great comment about what type of Web 2.0 companies not to invest in: triple-A’s, that is, AdSense, AJAX and arrogance.

Winblad feels that there’s still a lot of the virtualization story to unfold, since it is seriously changing the value chain in data centres. Although VMware has become the big success story in this market, there are a number of other niches that have plenty of room for new players. She also thinks that companies providing specialized analytics — her example was basically about improving financial services sales by analyzing what worked in the past — can provide a great deal of revenue enhancement for their customers. As a final point on that theme, Maynard suggested checking out Swivel, which provides some cool data mashups.

The New Software Industry: Bob Glushko and Shelley Evenson

Bob Glushko, a prof at UC Berkeley, and Shelley Evenson, a prof at CMU, discussed different views on bridging the front stage and back stage in service system design. As a side note, I have to say that it’s fun to be back (temporarily) in an academic environment: many of these presentations are much more like grad school lectures than standard conference presentations. And like university lectures, they cover way too much material in a very short time by speaking at light speed and flipping slides so fast that there’s no time to even read what’s on the slide, much less absorb or document it. If I had a nickel for every time that a presenter today said “I don’t have time to go into this but it’s an important concept” while flipping past an interesting-looking slide, I could probably buy myself the drink that I need to calm myself after the information overload. 🙂

Glushko posits that greater predictability produces a better experience, even if the average level of service is lower, using the example of a self-service hotel check-in versus the variability of dealing with a reception clerk. Although he doesn’t mention it, this is exactly the point of Six Sigma: reducing variability, not necessarily improving service quality.
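
Here’s a toy numeric version of that argument (entirely my own construction, not from the talk): if a noticeably bad interaction is remembered as a failure, a perfectly consistent kiosk can beat a clerk who is better on average but variable:

    # Assumption: an experience scoring below 6 is remembered as a failure (0).
    import random

    random.seed(42)

    def remembered(score):
        return score if score >= 6 else 0.0

    kiosk = [7.0] * 10000                                    # self-service: always a solid 7
    clerk = [random.gauss(7.5, 2.5) for _ in range(10000)]   # better on average, but variable

    print("kiosk: %.2f" % (sum(remembered(s) for s in kiosk) / len(kiosk)))   # 7.00
    print("clerk: %.2f" % (sum(remembered(s) for s in clerk) / len(clerk)))   # roughly 6.3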

He goes on to discuss the front stage of services, which is the interaction of the customer or other services with the services, and the back stage, which is the execution of the underlying services themselves. I love his examples: he uses an analogy of a restaurant, with the front stage being the dining room, and the back stage being the kitchen. Front stage designers focus on usability and other user interface factors, whereas the back stage designers focus on efficiency, standardization, data models and the like. This tends to create a tension between the two design perspectives, and raises the question of whether that tension is intrinsic or avoidable.

From a design standpoint, he feels that it’s essential to create information flow and process models that span both the back and front stages. The focus of back stage design is to create modular and configurable services that enable flexibility and customization in the front stage, and to determine which back stage services you will perform yourself and which you will outsource or reuse from other service providers. Front stage design, on the other hand, focusses on the level of service intensity (the intensity of information exchange between the customer and the service, whether the service is human or automated), and on implementing model-based user interfaces, using those models to generate, configure or specify the services’ user interface APIs. Exposing more back stage information in front stage design can improve the immediate experience for a specific customer as well as subsequent experiences, and data mining and business intelligence can also improve service for future customers.
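
As a sketch of what that separation could look like in code (hypothetical names throughout, reusing the hotel check-in example): one modular back stage operation, with different front stages choosing how much of its information to surface, which is essentially the service intensity decision:

    # Back stage: modular and indifferent to how its results are presented.
    class ReservationService(object):
        def check_in(self, guest_id):
            # ... look up reservation, assign room, update billing ...
            return {"guest": guest_id, "room": 412, "loyalty_points": 1850}

    # Low-intensity front stage: minimal information exchange, highly predictable.
    class KioskFrontStage(object):
        def __init__(self, service):
            self.service = service
        def check_in(self, guest_id):
            result = self.service.check_in(guest_id)
            return "Room %d. Your key card is below." % result["room"]

    # High-intensity front stage: surfaces more back stage information to
    # personalize the immediate experience.
    class ConciergeFrontStage(object):
        def __init__(self, service):
            self.service = service
        def check_in(self, guest_id):
            result = self.service.check_in(guest_id)
            return ("Welcome back! You're in room %d, and you have %d points "
                    "toward your next free night." %
                    (result["room"], result["loyalty_points"]))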

Evenson, who specializes in interaction design, has a very different perspective than Glushko, who focusses on the back stage design, but rather than being opposing views, they’re just different perspectives on the same issues of designing service systems.

She started out with a hilarious re-rendering of Glushko’s restaurant example, applying colour to make the division of co-production between front and back stage more visible.

Her slides really went by so fast that I was only able to capture a few snippets:

  • Sensors will improve the degree of interaction and usefulness of web-based services
  • Technology influences our sense of self
  • Services are activities or events that form a product through interaction with a customer
  • Services are performances: choreographed interactions manufactured at the point of delivery
  • Services are the visible front end of a process that co-produces value
  • A service system is a framework that connects service touchpoints so that they can sense, respond and reinforce one another; it must be dynamic enough to efficiently reflect the expectations people bring to the experience at any given moment
  • Service systems enable people to have experiences and achieve goals

She discussed the difficulties of designing a service system, such as the difficulty of prototyping and the difficulty of representing the experience, and pointed out that it requires combining aspects of business, technology and experience. She feels that it’s helpful to create an integrated service design language: systems of elements with meanings (that designers use to communicate and users “read”) plus sets of organizing principles.

The New Software Industry: Martin Griss and Adam Blum

Martin Griss of CMU West and Adam Blum of Mobio Networks had a fairly interactive discussion about integrating traditional software engineering practices into modern service oriented development.

Griss is a big proponent of agile development, and believes that the traditional software development process is too ponderous; Blum admits to benefits from smaller teams and lightweight process for faster delivery, but he believes that some of the artifacts of traditional development methods provide value to the process.

Griss’ problems with traditional development are:

  • Too many large documents
  • It’s too hard to keep the documents in synch with each other and the development
  • People spend too much time in document reviews
  • Use cases are too complex
  • Can’t react well to changes in requirements
  • Schedule and features become omnipotent, rather than actual user requirements

In response, Blum had his list of problems with agile development:

  • Some things really do need upfront analysis/architecture to create requirements and specification, particularly the lower layers in the stack
  • Team management needs to be more complex on larger projects
  • Many agile artifacts are simply “old wine in new bottles”; it’s just a matter of determining the right level of detail
  • If you have a team that’s currently delivering well, the introduction of agile processes can disrupt the team and impact productivity — if it’s not broke, don’t fix it
  • Some of the time-boxing of agile development (e.g., SCRUM monthly sprints, daily 10-minute meetings) creates artificial schedule constraints
  • Agile development theory is mostly pseudo-science without many facts to back it up
  • Modern tools can make older artifacts lighter-weight and more usable

Writing requirements and specifications is something that I’ve spent probably thousands of hours doing over the years, and many of my customers still require this methodology, so I’m sympathetic to Blum’s viewpoint: sometimes it’s not appropriate, or not possible, to go agile.

An interesting point emerged from the back-and-forth discussion: it may not be possible to build the development platforms and frameworks themselves (such as what Mobio builds) in an agile fashion, but the applications built on those high-level platforms lend themselves well to agile development. Features to be added to the platform are effectively prototyped in an agile way in applications built on the platform, then are handed off to the more traditional, structured development cycle of the platform itself.

Griss, who earlier was partly just looking to stir up discussion, pointed out that it’s necessary to take the best parts of both ends of the software development methodology spectrum. In the end, they appeared to agree that there are methodologies and artifacts that are important; it’s just a matter of the degree of ceremony to use on any given part of the software development process.

The New Software Industry: Open Source panel

First up after lunch is a panel on the role of open source in service management, moderated by Martin Griss of CMU West, and including Kim Polese of SpikeSource, and Jim Herbsleb and Tony Wasserman of CMU West.

Polese is included in the panel because her company is focussed on creating new business models for packaging and supporting open source software, whereas the other two are profs involved in open source research and projects.

The focus of the session is on how open source is increasingly being used to quickly and inexpensively create applications, both by established companies and startups: think of the number of web-based applications based on Apache and MySQL, for example. In many of these cases, a dilemma is created by the lack of traditional support models for open source components — that’s certainly an issue with the acceptance of open source for internal use within many organizations — so new models are emerging for development, distribution and support of open source.

Open source is helping to facilitate unbundling and modularization of software components: it’s very common to see open source components from multiple projects integrated with both commercial software components and custom components to create a complete application.
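
As a trivial sketch of that kind of composition (hypothetical schema and credentials): a thin layer of custom application code on top of open source infrastructure, here Python, the MySQLdb driver and a MySQL database:

    # The custom part of the application is just this function; everything
    # underneath it -- language, driver, database -- is open source.
    import MySQLdb  # open source MySQL client library

    def open_orders(customer_id):
        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="orders")
        try:
            cur = conn.cursor()
            cur.execute(
                "SELECT id, status FROM orders WHERE customer_id = %s AND status = 'open'",
                (customer_id,),
            )
            return cur.fetchall()
        finally:
            conn.close()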

A question from the audience asked if there is a sense of misguided optimism about the usefulness of open source; Polese pointed out in response that open source projects that aren’t useful end up dying on the vine, so there’s some amount of self-selection that tends to promote successful open source components and suppress the less successful ones through market acceptance.

As I mentioned during the Brainstorm BPM conference a few weeks back, it’s very difficult to blog about a panel — much less structure than a regular presentation, so the post tends to be even more disjointed than usual. With luck, you’ll still get some of the flavour of the panel.