Can packaged applications ever be Lean? #BTF09

Chip Gliedman, George Lawrie and John Rymer participated in a panel on packaged applications and Lean.

Rymer argued that packaged apps can never be Lean: most are locked-down, closed engines where the vendor controls the architecture; they’re expensive and difficult to upgrade; they include more functions than customers will ever use; they provide a single general UI for all user personas; and each upgrade includes more crap that you don’t need. I tend to be on his side in this argument for some types of apps (as you might guess about someone who used to write code for a living), although I’m also a fan of buy over build because of that elusive promise of a lower TCO.

Gliedman argued the opposite side, pointing out that you just can’t build the level of functionality that a packaged application provides, and there can be data and integration issues once you abandon the wisdom of a single monolithic system that holds all your data and rules. I tend to agree with respect to functionality, such as process modeling: you really don’t want to build your own graphical process modeler, and the alternative is hacking your own process together using naked BPEL or some table-driven kludge. Custom coding also doesn’t guarantee any sort of flexibility, since many changes may require significant development projects (if you write bad code, that is), whereas a packaged app may be more configurable.

It’s never a 100% choice between packaged apps and custom development, however: you will always have some of each, and the key is finding the optimal mix. Lean packaged apps tend to be very fit-to-purpose, but that means that they become more like components or services than apps: I think that the key may be to look at composing apps from these Lean components rather than building Lean from scratch. Of course, that’s just service-oriented architecture, albeit with REST interfaces to SaaS services rather than SOAP interfaces to internal systems.
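To make that composition idea concrete, here’s a minimal sketch in Python of assembling an app from narrow-purpose services. The service names and logic are entirely hypothetical, and the REST calls are stubbed as local functions so the orchestration itself is runnable:

```python
# A minimal sketch of composing an app from narrow-purpose "Lean" services.
# The two services below are hypothetical stand-ins for REST calls to SaaS
# endpoints (which would normally go over HTTP, e.g. via urllib.request);
# they're stubbed locally so the composition logic itself can run.

def credit_check_service(customer_id):
    """Stand-in for a hypothetical SaaS credit-check endpoint."""
    return {"customer_id": customer_id, "approved": customer_id != "C-999"}

def notification_service(customer_id, message):
    """Stand-in for a hypothetical notification endpoint."""
    return {"sent_to": customer_id, "message": message}

def open_account(customer_id):
    """The composite app: orchestrates the two services."""
    check = credit_check_service(customer_id)
    if not check["approved"]:
        return notification_service(customer_id, "Application declined")
    return notification_service(customer_id, "Account opened")

print(open_account("C-123")["message"])  # prints "Account opened"
```

The app itself is just orchestration; all the real functionality lives in the fit-to-purpose services, which is exactly what makes them swappable.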

There are cases where Lean apps are completely sufficient for purpose, and we’re seeing a lot of that in the consumer Web 2.0 space. Consider Gmail as an alternative to an Exchange server (regardless of whether you use Outlook as a desktop client, which you can do with either): less functionality, but for most of us, it’s completely sufficient, and no footprint within an organization. SaaS, however, doesn’t necessarily mean Lean. Also, there are a lot of Lean principles that can be applied to packaged application deployment, even if the app itself isn’t all that Lean: favoring modular applications; using open source; and using standards-based apps that fit into your architecture. Don’t build everything, just the things that provide your competitive differentiation where you can’t really do what you need in a packaged app; for those things where you’re doing the same as every other company, suck it up and consider a packaged app, even if it’s bulky.

Clearly, Gliedman is either insane or a secret plant from [insert large enterprise vendor name here], and Rymer is an incurable coder who probably has a ponytail tucked into his shirt collar. 🙂 Nonetheless, an entertaining discussion.

How Can Lean Software Enable You To Better Serve The Business? #BTF09

John Rymer and Dave West presented a breakout session in the application development track on how Lean software development practices can be applied in your business. This obviously had a big focus on Agile, and how it can be used within large organizations. Contrary to what some people think, Agile isn’t cowboy coding: it is quite disciplined, but it is optimized for delivering the right thing (from a business standpoint) in the minimal time. It’s all based on four principles: deliver the right product, provide hard value, simplify the platform, and allow efficient evolution. An optimal strategy depends on all four of those elements, but Agile projects may deliver on only two or three of them, proving the value of Agile before a full Agile strategy is in use.

In order to apply these principles across your entire application development portfolio, you need a strategy that addresses these elements, and provides some way to measure the impact of such a strategy. Delivering the right product requires a focus on people and talent, and the industrial concepts of mass customization rather than mass production; providing hard value requires linking your development process to value streams with their focus on investment return; simplifying the platform requires a focus on tools and technology; and allowing efficient evolution requires optimizing work processes both within development teams and across the organization. I especially liked their chart comparing today’s practices in tools and technologies against Lean practices:

| Today’s practices | Lean practices |
|---|---|
| Install for today and tomorrow | Install for today, architect for tomorrow |
| Configure a general UI for many users | Design for people in their work roles |
| Adopt integrated suites | Adopt narrow-purpose modules and services |
| No component substitution is allowed | Component substitution is allowed |
| Architectural evolution is slow by design | Architectural evolution is constant by design |

There are ways to bring Agile into an organization, even when budgets are flat and there is the perception that legacy systems just can’t be replaced without yet another huge project expense. Likely, your developers are already practicing some Agile methods, and you could easily gain permission to prove these out in non-critical systems development.

Good session, with a high-speed tag team between Rymer and West. Unfortunately, the logistics aren’t quite as good as the general sessions: too-small meeting rooms requiring elevator access from the main conference area, no tables and no wifi coverage (at least in the room that I was in at this time).

Waterfall contracts and iterative development don’t mix #BTF09

The post title is the best quote from Tom Higgins, CIO of the Territory Insurance Office in Australia, who came all the way from Darwin to speak at both the Gartner and Forrester conferences this week. I had a chance for a chat at the airport with him while waiting for our flight from Orlando to Chicago (and introduced him to the wonder that is the Chicago transit system), and caught his Appian-sponsored lunch presentation today.

TIO is a government-backed insurance operation that covers risks that most insurance companies won’t take on, including workers’ compensation, cyclone damage and other personal and P&C policies. They were looking to reduce their operational costs by making their claims operation more efficient, but also to reduce their claims costs by shortening the length of disability claims, which can often be done through proper case management during the period of a claim. Originally the business was set on a COTS (commercial off-the-shelf) claims management system, but when they compared that with BPM, they realized that BPM met their requirements much better than the available COTS systems due to its ease of use and flexibility. They short-listed three vendors and did a three-day proof of concept with each; that knocked one vendor completely out of the running due to the complexity of its implementation, in spite of it being a large and well-respected vendor in the space (no, he didn’t say who; yes, he told me over a beer; and no, I won’t tell you).

For a short presentation, he spent quite a bit of time talking about the contract – including the “waterfall contracts and iterative development don’t mix” line – and I have to agree that this is an incredibly critical part of any BPM project, and very often is handled extremely poorly. The contract needs to focus on risk management, and you can’t let your lawyers force you into a fixed-price contract that has pre-defined waterfall-type milestones in it if you don’t know exactly what you want; in my experience, no BPM project has ever started with the business knowing exactly what they want ahead of time, and I don’t imagine that many do, so don’t mistake a contract for a project plan. If you plan on doing iterative or Agile development, where the requirements are defined gradually as you go along, then a fixed-price contract just won’t work, and will be a higher risk even though many (misinformed) executives believe that fixed price is always lower risk. Going with a time and materials contract requires a much higher level of trust with the vendor, but it will end up as a much lower risk since there won’t be the constant stream of change requests that you typically see with a waterfall contract. Besides, if you can’t trust the vendor, why are you working with them?

TIO had a number of issues pop up during their implementation: the CEO was replaced just before the vendor was engaged, and the business sponsor was replaced in the middle of development; however, due to a strong sponsorship and governance team, they were able to weather these storms. In fact, he sees the strength of the governance team as a critical success factor, along with using your “A team” for implementation, finding a committed vendor and engaging the business early.

He had a really good point about making sure that your project manager is not a business subject matter expert, and does use an appropriate project methodology such as Agile. The PM is supposed to be the coordinator and facilitator of the entire project, not an SME who will dive down the rabbit hole of specific business issues and requirements at the first sign of trouble. I’m a strong believer that PMs should manage projects, not gather requirements, write code or take on most other activities, since that distracts them from the project and increases the risk that it will run off the rails when no one is looking; it’s good to hear that at least one other person shares my opinion on this.

They used Agile project methodology and Spiral development methodology, with six-week code cycles. The team was fairly small: seven TIO team members, an internal business reference group (the SMEs, who eventually became the rollout leads), four Appian people onsite, four offshore Appian team members, and four part-time specialists. The project started with Appian as the technical lead, but that shifted through the first three project phases, and now TIO essentially works on its own with no assistance from Appian. They established a center of excellence to assist with taking this success on to other projects, and that seems to be working: the initial project cost them over $3M, and the next one – which is three times more complex – cost only one-third of that since BPM is now built into their enterprise infrastructure. And, at the end of the day, they’re seeing a 30% productivity increase in their initial implementations.

Their biggest challenges were the introduction of Agile and Spiral methodologies, the geographic dispersion of the team, and recruiting the right talent for their remote location; they used internal education both for the methodologies and to grow their own BPM specialists locally when they couldn’t recruit them.

There were several things that they did that he feels contributed to their success, such as daily headline meetings, engagement with the business, fostering team spirit, and highlighting and addressing the riskier requirements early so that they can be tried out by the business and tuned as required. He also felt that Agile was a huge contributor, since there were no more 300-page requirements documents that were either not read or not understood by the business, but signed off regardless. He finished with a few strategic lessons learned: begin with the end in mind, including planning how this will become part of the infrastructure; and pick a big project in order to ensure commitment and executive engagement.

Embrace Lean thinking to enable innovation #BTF09

The morning finished with a panel moderated by Dave West of Forrester, and including Kevin Haynes of Dell and Dave Smoley of Flextronics. West started out talking about the inertia within IT: more than 60% of IT is spent just keeping the lights on, legacy systems inhibit change and vendors are slow to change. There is, however, a wave of change coming, with Agile adoption at 36% and rising, 80% of developers using open source and new software development up by 9%.

Smoley talked about their experiences at Flextronics, where Lean has spilled over from manufacturing to IT, allowing them to reduce their IT costs to less than 1% of revenue. They were able to reduce their IT costs by 36%, all while performing some major projects such as a new supply chain management system, a global HR system, a WAN refresh, a data center refresh and implementing a global service desk. He credits Lean with being able to determine what technology is important and what is unimportant to the business, allowing them to cut or rework the areas of waste. Their strategy includes a “just enough” mentality, putting the customer first, trying out hardware and software before they buy it, maximizing asset utilization, consolidating wherever possible, and enabling global collaboration. Projects are prioritized by ROI, and business alignment is done at multiple levels. They avoid single sourcing, aren’t afraid to renegotiate maintenance contracts, and use open source and open standards where possible. They also bite the bullet and actually throw stuff out if it’s no longer the best thing for them, which often requires putting aside the egos of the people who were involved in the original development or acquisition of a system that is being decommissioned.

Haynes related what they’ve done at Dell, where the focus has been on keeping the lights on rather than innovation: maintaining the important operations and customer service levels while spending less. Their strategy is focused on decreasing variability, focusing on projects that reduce costs while driving service excellence, and consolidating through virtualization and techniques such as reducing to three standard desktop images. All of this is not just about exciting new innovation, either: it’s just as necessary to focus on better ways to handle the steady-state maintenance projects, and to automate them where possible to free up resources.

The discussion ranged across both strategic and operational aspects of how Lean is helping both organizations to reduce their IT costs, and covered way too much information for me to capture, especially since West seems to have ADD when it comes to handing over the remote control to advance slides :)  There were some good takeaways about Lean, such as transparent reporting on value and waste, establishing the right team culture around a problem-solving approach, creating structure around value chains, and frequent delivery to allow for constant fine-tuning of the business value.

The power of Lean IT #BTF09

John Swainson, CEO of CA, gave a presentation on how Lean can help companies build long-term competitive advantage during tough economic times in industries as diverse as manufacturing, healthcare, retail and IT, and how Lean IT – or what he referred to as the industrialization of IT – can deliver greater value at lower cost. As he pointed out, it’s about time that we applied some discipline to IT, and we can learn from how Lean helped other types of organizations to create just-in-time IT that deploys the right solutions at the right time.

Lean IT is about a “sense and respond” philosophy: IT pays attention to what’s happening in the business, manages variable volumes, prioritizes and creates new business services, and ensures ongoing quality levels.

CA commissioned a study on waste, and found that 96% of IT executives agree that there is significant waste in their organization, primarily due to inefficient processes, duplication of effort, redundant applications, and underutilized assets; Swainson sees the keys to resolving these issues as analysis, automation, integration and optimization (respectively).

He was then joined on stage by John Parkinson, CTO of TransUnion. Since TransUnion is a credit rating/tracking service that tracks individual consumer credit ratings around the world, IT is absolutely central and critical to their operations, not just a support function. As the recession approached in 2007, however, they had to consider how to grow new sources of revenue and increase operating margins in order to decrease their dependency on the failing consumer credit market. Lean was part of their strategy, helping them to pinpoint wasted effort and other waste, and allowing them to optimize their operations. With their corporate culture, they needed this to be more of a grassroots initiative that made sense to people: adopting Lean because it helped them do their jobs better, not because of a corporate mandate. There is, however, monitoring and measurement in place, and performance and compensation are tied to improvements: the ultimate incentive. Their idea is to instill these Lean ideas into their culture, so that the good habits learned in tough times will serve them well when times improve.

Parkinson pointed out specifically that Lean sets you up for taking advantage of cloud computing, and Swainson took over to talk about the opportunities and challenges of working in the cloud. It’s pretty hard to embrace Lean without at least taking a look at using the cloud, where you can provision the right resources at the right time, rather than having a lot of excess capacity sitting around in your data center. Consider non-production environments such as test labs, for example: being able to create test environments as required – either through internal virtualization (which I don’t really consider to be cloud) or in an environment such as Amazon EC2 – rather than having them pre-installed and ready long before required, and sized for peak rather than actual load. Considering that test environments can be 2/3 or more of the server footprint, this is huge.

Mike Gilpin joined them for a discussion, which briefly continued on the topic of using virtualized or cloud test environments, but also covered the issues of how well contract IT employees can adapt to Lean cultures (if they don’t, then find someone else), using other techniques such as Six Sigma together with Lean (they’re all tools focused on process optimization, pick and choose what works for you), and the security challenges of using cloud infrastructure.

George Colony on the CEO’s brain #BTF09

We had a brief address from George Colony, CEO of Forrester, on changing from IT to BT, with one key message: if your CEO doesn’t understand what you’re talking about, then you’re probably not using “BT speak”.

The CEO is focused on two things: higher profits and revenue growth. You need to translate your projects and technology strategy into those terms, or risk being marginalized within the organization.

A brief 10-minute address, but a good message.

Lean and the CIO #BTF09

Tom Hughes, currently with CSC but formerly the CIO of the US Social Security Administration, spoke to us about Lean and the CIO. The imperative here is driven by surveys that show that (to paraphrase) business thinks that IT is important, but that they’re doing a crappy job. He believes that CIOs need to break out of the technology pack and focus on business outcomes (e.g., market share) rather than outputs (e.g., number of workstations): exactly the same message as Connie Moore gave us in the opening keynote. CIOs need to be valid members of the executive team, reporting to the board rather than the COO, HR, general counsel or any of a number of other non-effective reporting structures.

He believes that the CIO of the future must:

  • Be a strategic thinker, not an IT techie
  • Be at the table of chief executives
  • Partner in agency or business transformation
  • Have broad experience

The CIO’s focus needs to be on four things: strategy, budget, architecture and security. Delivery and maintenance, on the other hand, are operational issues, and should be handled below the CIO level, even directly in the business units by promoting cross-functional ownership. The CIO needs to be forward-thinking and set strategy for new technologies such as cloud computing and unified communications, but doesn’t need to be responsible for delivering all of it: for things that the business can handle on their own, such as business process analysis, let the business take the lead, even if it means acquiring and deploying some form of technology on their own.

He concluded with the statements that the CIO needs to work with the CEO and develop a collaborative operational model, be at the table with other senior executives, and get other executives to take accountability for how technology impacts their business area. The CIO needs to be seen by the CEO as a partner in business transformation, not the guy fixing his Blackberry.

Questions from the audience included how to transition the current technology-focused IT teams to have more of a business focus: Hughes’s response is that some of them will never change, and won’t make the cut; others can benefit by being seconded to the business for a while.

On a side note, I like the format of the keynotes: Mike Gilpin pops up on stage at the end of each one, he and the speaker move to a couple of comfy chairs at center stage, and he asks some questions to continue the conversation. Questions from the audience are collected on cards and vetted by Forrester analysts, who then distill them into a few key questions to ask.

There’s still a bit of confusion over the Twitter hashtag: the website says #BTF09, then Gilpin announced in the opening address that it is #FBTF09, but then @forrester DM’ed me that it is actually #BTF09 and that Gilpin will correct this, although that hasn’t happened yet.

Why Lean is the new business technology imperative #BTF09

I’ve moved from the Gartner BPM summit in Orlando to Forrester’s Business Technology Forum in Chicago, where the focus is on Lean as the new business imperative: how to use Lean concepts and methods to address the overly complex things in our business environment.

Mike Gilpin opened the conference with a short address on how our businesses and systems got to be so bloated that lean has become such an imperative, then Connie Moore took over for the keynote. From the keynote’s description on the event agenda site:

Lean is not a new business concept — but it is enduring. By embracing Lean years ago, Toyota reached No. 1, while rivals GM and Chrysler collapsed into wards of the state. In its broadest sense, Lean seeks to better satisfy customer needs, improve process and information flows, support continuous improvement, and reduce waste. Today’s recession is a clarion call for businesses and government to reexamine and reapply Lean thinking across people, processes, and technology. When maintenance eats 80% to 90% of IT budgets, it’s beyond time to examine Lean approaches — like process frameworks, cloud computing, SaaS, Agile methodologies, open source, or other fresh ideas. And when the sheer complexity of technology overwhelms information workers, it’s time to simplify and understand what workers really need to get their jobs done. And by focusing on Lean now, your organization will be positioned to power out of the recession and move quickly into the next new era of IT: business technology — where business is technology and technology is business.

She began by discussing how Lean started in manufacturing, and the obvious parallels in information technology: in Lean manufacturing, the focus is on eliminating waste, everyone owns quality, and problems are fixed at the source. Lean software isn’t a completely new idea either, but Forrester is pushing it further to change “information technology” to “business technology”.

Lean is not just operational, however, it’s also strategic, with a focus on understanding value. However, it’s usually easier to get started on it at the operational level, where it’s focused on eliminating waste through improving quality, eliminating non-productive time, and other factors. Lean can be counterintuitive, especially if you’ve been indoctrinated with an assembly line mentality: it can be much more efficient, for example, for individuals or small teams to complete an entire complex task from start to finish, rather than have each person or team perform only a single step in that task.

Moving on to the concepts of Lean software, she started with the results of a recent Forrester survey that showed that 92% think that enterprise software has an excessive cost of ownership (although personally, I’m not sure why they bothered to take a survey on something so incredibly obvious 🙂 ), and discussed some of the alternatives: SaaS such as Google Apps, open source or free software, and other lighter-weight tools that can be deployed at much less cost, both in licensing and internal resource usage. Like Goldilocks, we all need to start buying what’s just right: not too much or too little, in spite of all those licenses that the vendor wants to unload at a discount before quarter-end.

Looking at the third part of their trifecta, there’s a need to change IT to BT (business technology). That’s mostly about governance – who has responsibility for the technology that is deployed – and turning technology back into a tool that services the business rather than some separate entity off doing technology for its own sake. What this looks like in practice is that the CIO is most likely now focused on business process improvement, with success being measured in business terms (like customer retention) rather than IT terms (like completing that ERP upgrade on time, not that that ever happens). Stop leading with technology solutions, and focus on value, flexibility and eliminating waste. You can’t do this just by having a mandate for business-IT alignment: you need to actually fuse business and IT, and radically change behaviors and reporting structures. We’re stuck in a lot of old models, both in terms of business processes and organizational models, and these are unsustainable practices in the new world order.

There were some good questions from the audience on how this works in practice: whether IT can be Lean even if this isn’t practiced elsewhere in the organization (yes, but with less of an effect), what this means for IT staff (they need to become much more business focused, or even move to business areas), and how to apply Lean in a highly regulated environment (don’t consider required compliance as waste, but consider how to have less assembly-line business processes and look for waste within automated parts of processes).

Getting Business Process Value From Social Networks #GartnerBPM

For the last session of the day, I attended Carol Rozwell’s presentation on social network analysis and the impact of understanding network processes. I’ll be doing a presentation at Business Rules Forum next month on social networking and BPM, so this is especially interesting even though I’ll be covering a lot of other information besides social graphs.

She started with the (by now, I hope obvious) statement that what you don’t know about your social network can, in fact, hurt you: there are a lot of stories around about how companies have and have not made good use of their social network, and the consequences of those activities.

She posited that while business process analysis tells us about the sequence of steps, what can be eliminated and where automation can help, social network analysis tells us about the intricacies of working relationships, the complexity and variability of roles, the critical people and untapped resources, and operational effectiveness. Many of us are working very differently than we were several years ago, but this isn’t just about “digital natives” entering the workforce, it’s about the changing work environment and resources available to all of us. We’re all more connected (although many Blackberry slaves don’t necessarily see this as an advantage), more visual in terms of graphical representations and multimedia, more interactively involved in content creation, and we do more multitasking in an increasingly dynamic environment. The line between work and personal life blurs, and although some people decry this, I like it: I can go to many places in the world, meet up with someone who I met through business, and enjoy some leisure time together. I have business contacts on Facebook in addition to personal friends, and I know that many business contacts read my personal blog (especially the recent foodie posts) as well as my business blog. I don’t really have a lot to hide, so I don’t have a problem with that level of transparency; I’m also not afraid to turn off my phone and stop checking my email if I want to get away from it all.

Your employees are already using social media, whether you allow it within your firewall or not, so you might as well suck it up and educate them on what they can and can’t say about your company on Twitter. If you’re on the employee side, then you need to embrace the fact that you’re connected, and stop publishing those embarrassing photos of yourself on Facebook even if you’re not directly connected to your boss.

She showed a chart of social networks, with the horizontal axis ranging from emergent to engineered, and the vertical axis from interest-driven to purpose-driven. I think that she’s missing a few things here: for example, open source communities are emergent and purpose-driven, that is, at the top left of the graph, although all of her examples range roughly along the diagonal from bottom left to top right.

There are a lot of reasons for analyzing social networks, such as predicting trends and identifying new potential sources of resources, and a few different techniques for doing this:

  • Organizational network analysis (ONA), which examines the connections amongst people in groups
  • Value network analysis (VNA), which examines the relationships used to create economic value
  • Influence analysis, a type of cluster analysis that pinpoints people, associations and trends

Rozwell showed an interesting example of a company’s organizational chart, then the same players represented in an ONA. Although it’s not clear exactly what the social network is based on – presumably some sort of interpersonal interaction – it highlights issues within the company in that some people have no direct relationship with their direct reports, and one person who was low in the organizational chart was a key linkage between different departments and people.

She showed an example of VNA, where the linkages between a retailer, distributor, manufacturer and contract manufacturer were shown: orders, movements of goods, and payments. This allows the exchanges of value, whether tangible or intangible, to be highlighted and analyzed.

Her influence analysis example discussed the people who monitor social media – either within a company or their PR agency – to analyze the contributors, determine which are relevant and credible, and use that to drive engagement with the social media contributors. I get a few emails per day from people who start with “I read your blog and think that you should talk to my customer about their new BPM widget”, so I know that there are a lot of these around.

There are some basic features that you look for when doing network analysis: central connectors (those people in the middle of a cluster), peripheral players (connected to only one or two others), and brokers (people who form the connection between two clusters).
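As an illustration of how you might spot those three features programmatically, here’s a toy sketch in Python. The graph, the names and the broker heuristic are all my own invention for illustration, not from the presentation:

```python
# A toy sketch of spotting network features: central connectors (many ties),
# peripheral players (one or two ties), and a broker linking two clusters.
# The graph and the broker heuristic are illustrative inventions.

from collections import defaultdict

edges = [
    ("ann", "bob"), ("ann", "cai"), ("ann", "dee"),  # cluster 1 around ann
    ("eve", "fay"), ("eve", "gus"), ("eve", "hal"),  # cluster 2 around eve
    ("dee", "eve"),                                  # dee bridges the clusters
]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

degree = {person: len(friends) for person, friends in adjacency.items()}

# Central connectors: lots of direct ties; peripheral players: one or two.
central = [p for p, d in degree.items() if d >= 3]
peripheral = [p for p, d in degree.items() if d <= 2]

# Simple broker heuristic: not central themselves, but all of their (2+)
# neighbours are central connectors in different parts of the network.
brokers = [
    p for p, friends in adjacency.items()
    if degree[p] < 3 and len(friends) >= 2
    and all(degree[f] >= 3 for f in friends)
]
print(central, brokers)  # ['ann', 'eve'] ['dee']
```

A real ONA tool would use proper centrality measures (betweenness, for instance) rather than raw degree counts, but the intuition is the same.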

There are some pretty significant differences between ONA, VNA and business process analysis, although there are some clear linkages: VNA could have a direct impact on understanding the business process flows, while ONA could help to inform the roles and responsibilities. She discussed a case study of a company that did a business process analysis and an ONA, and used the ONA on the redesigned process in order to redesign roles to reduce variability, identify roles most impacted by automation, and expose critical vendor relationships.

Determining how to measure a social network can be a challenge: one telecom company used records of voice calls, SMS and other person-to-person communications in order to develop marketing campaigns and pricing strategies. That sounds like a complete invasion of privacy to me, but we’ve come to expect that from our telecom providers.

The example of using social networks to find potential resources is something that a lot of large professional services firms are testing out: she showed an example that looked vaguely familiar where employees indicated their expertise and interests, and other employees could look for others with specific sets of skills. I know that IBM does some of this with their internal Beehive system, and I saw a presentation on this at the last Enterprise 2.0 conference.

There are also a lot of examples of how companies use social networks to engage their customers, and a “community manager” position has been created at many organizations to help manage those relationships. There are a lot of ways to do this poorly – such as blasting advertising to your community – but plenty of ways to make it work for you. Once things get rolling in such a public social network, the same sort of social network analysis techniques can be applied in order to find the key people in your social network, even if they don’t work for you, and even if they primarily take an observer role.

Tons of interesting stuff here, and I have a lot of ideas of how this impacts BPM – but you’ll have to come to Business Rules Forum to hear about that.

Fujitsu process discovery case study #GartnerBPM

I first saw Fujitsu’s process discovery offering last year, and it looked pretty useful at the time, but it didn’t have much of a track record yet. Today’s session brought forward Greg Mueller of Electro Scientific Industries (ESI), a manufacturer of photonic and laser systems for microengineering applications, to talk about their successes with it.

Basically, the Automated Process Discovery (APD) uses log files and similar artifacts from any variety of systems in order to derive a process model, analyzing frequencies of process variations, and slicing and dicing the data based on any of the contributing parameters. I’ve written a lot about why you would want to do process discovery, including some of the new research that I saw at BPM 2009 in Germany last month.
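The core of this kind of log-based discovery is reconstructing a per-instance trace of activities, then counting the distinct orderings as process variants; a minimal sketch, with a made-up event log (the real APD input format and correlation logic are Fujitsu’s, not shown here):

```python
from collections import Counter

# Hypothetical event log: (case_id, timestamp, activity) rows, as they
# might be extracted from SAP or other system logs.
event_log = [
    ("c1", 1, "opportunity"), ("c1", 2, "quote"),
    ("c1", 3, "order"), ("c1", 4, "shipment"),
    ("c2", 1, "opportunity"), ("c2", 2, "quote"), ("c2", 3, "order"),
    ("c2", 4, "order change"), ("c2", 5, "shipment"),
    ("c3", 1, "opportunity"), ("c3", 2, "quote"), ("c3", 3, "order"),
    ("c3", 4, "quote"),  # loopback from order to quote
    ("c3", 5, "order"), ("c3", 6, "shipment"),
    ("c4", 1, "opportunity"), ("c4", 2, "quote"),
    ("c4", 3, "order"), ("c4", 4, "shipment"),
]

# Group events into per-case traces, ordered by timestamp.
traces = {}
for case, ts, activity in sorted(event_log):
    traces.setdefault(case, []).append(activity)

# Each distinct activity sequence is one process variant; count them.
variants = Counter(tuple(t) for t in traces.values())
for variant, count in variants.most_common():
    print(count, " -> ".join(variant))
```

Run over ESI’s 15 months of data, it’s exactly this variant count that ballooned to over 1,300.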

ESI wanted to reduce inventory and improve manufacturing cycle time, and needed to understand their opportunity-to-order process better in order to do that. They used APD to determine the actual process flows based on about 15 months of data from SAP and other systems, then validated those flows with the teams that actually did the work. They wanted to look at variations based on business unit and other factors to figure out what was causing some of their cycle time and inventory problems.

They assumed a relatively simple four-step process of opportunity-quote-order-shipment, possibly with 3-4 additional steps to allow revisions at each of these steps; what they actually found when they looked at about 11,500 process instances was that they had over 1,300 unique process flows. Yikes. Some of this was cycling through steps such as order change: you would expect an order to be changed, but not 120 times as they found in some of their instances. There were also loopbacks from order to quote, each of these representing wasted employee time and increased cycle time. They found that one task took an average of 58 days to complete, with a standard deviation of 68 days – again, a sign of a process out of control. They realize that they’re never going to get down to 25 unique process flows, but they are aiming for something far lower than 1,300.
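The repeated steps and loopbacks that inflated ESI’s variant count are easy to spot mechanically once you have the per-instance traces; a small sketch, assuming a fixed happy-path step order (the step names and trace are illustrative, not ESI’s actual data):

```python
# Hypothetical happy-path step order for an opportunity-to-order process.
STEP_ORDER = ["opportunity", "quote", "order", "shipment"]
RANK = {step: i for i, step in enumerate(STEP_ORDER)}

def rework_stats(trace):
    """Count immediate step repetitions and loopbacks to an earlier step."""
    repeats = loopbacks = 0
    for prev, curr in zip(trace, trace[1:]):
        if curr == prev:
            repeats += 1
        elif RANK[curr] < RANK[prev]:
            loopbacks += 1
    return repeats, loopbacks

# One messy trace: the order is revised twice, then loops back to quoting.
trace = ["opportunity", "quote", "order", "order", "order",
         "quote", "order", "shipment"]
print(rework_stats(trace))  # prints (2, 1)
```

Summing these counts across all instances, and crossing them with cycle-time statistics like the 58-day mean / 68-day standard deviation above, is what flags a process as out of control.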

They did a lot of data slicing and analysis: by product, by region, by sales manager and many other factors. APD allows for that sort of analysis pretty easily (from what I saw last year), much like any sort of dimensional modeling that you would do in a data warehouse.
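That slicing amounts to grouping process instances by a dimension and aggregating a measure over each group, just as in a data warehouse; a minimal sketch with made-up instance data (dimension names and numbers are hypothetical):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-instance records: dimensions plus a cycle-time measure.
instances = [
    {"region": "NA",   "product": "laser",    "cycle_days": 41},
    {"region": "NA",   "product": "photonic", "cycle_days": 95},
    {"region": "EMEA", "product": "laser",    "cycle_days": 30},
    {"region": "EMEA", "product": "laser",    "cycle_days": 120},
    {"region": "APAC", "product": "photonic", "cycle_days": 58},
]

def slice_by(records, dimension, measure="cycle_days"):
    """Average the measure over each value of one dimension."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[dimension]].append(rec[measure])
    return {value: mean(times) for value, times in groups.items()}

print(slice_by(instances, "region"))
print(slice_by(instances, "product"))
```

Swap "region" for "product" or "sales manager" and you have the same slice along a different dimension, which is essentially what the APD analysis automates.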

They observed that less than 20% of their opportunities followed the happy path; the rest took too long, duplicated effort, included too many rework loopbacks, and sometimes never shipped at all after a great deal of up-front work.

In their process improvement phase, they established 22 projects, including automating processes to reduce repeated steps, improving entry flow to reduce time intervals, and requiring the entry of initial data early in the process to reduce loopbacks and rework. Since their business runs on SAP, a lot of this was implemented there (which raises the question of who did such a crappy SAP implementation for them in the first place such that they had problems like this – seriously, insufficient required data entry at the start of a process?), and they’re able to keep extracting and analyzing the logs from there in order to see what level of improvement they are achieving.

After a much-too-short presentation by ESI, Ivar Alexander from Fujitsu gave us a demo of APD with ESI’s basic process; I’ve seen a demo before, but it’s still fascinating to see how the system correlates data and extracts the process flows, then performs detailed dimensional analysis on the data. All of this is done without having to do a lot of interviews of knowledge workers, so it’s non-invasive from both a people and a systems standpoint.

It’s important to recognize that since APD is using the system logs to generate the process flows, only process steps that have some sort of system touch-point will be recorded: purely manual process steps will not. Ultimately, although they can make big improvements to their SAP-based processes based on the analysis through APD, they will probably need to combine this with some manual analysis of off-system process steps in order to fully optimize their operations.