Chris Rocke and Jane Long from Whirlpool presented on their experiences with integrating LSS tools into BPM practices to move beyond traditional process mapping. Whirlpool is a mature Six Sigma company: starting in their manufacturing areas, it has spread to all other functions, and they’ve insourced their own training certification program. Six Sigma is not tracked as separate cost/benefit within a project, but is an inherent part of the way every project is done.
They introduced BPM during a large-scale overhaul of their systems, processes and practices; their use of BPM includes process modeling and monitoring, but not explicit process automation with a BPMS outside of their existing financial and ERP systems. However, they are creating a process-centric culture that does manage business processes in the governance and management sense, if not always in the automation sense. They brought LSS tools to their BPM efforts, such as process failure mode and effects analysis (PFMEA), data sampling and structuring methods, thought maps and control charts; these provide more rigorous analysis than is often done within BPM projects.
Looking at their dashboards, they had the same problem as Johnson & Johnson: lots of data but no consistent and actionable information. They developed some standard KPIs, visualized in a suite of seven dashboards, with alerts when certain control limits are exceeded. Their Six Sigma analytics are embedded within the dashboards rather than exposed explicitly, so that the business owners view and click through the dashboards in their own terms. The items included in a dashboard are fairly dynamic: for example, in the shipping dashboard, products that vary widely from expected and historic values are brought forward, while those within normal operating parameters may not even appear. Obviously, building the underlying models was a big part of the work in creating the dashboards. For example, shipping dashboard alerts are based on year-over-year differences (because sales of most products are seasonal), with control limits set at the mean of the YOY differences plus or minus two standard deviations for a yellow alert, or three standard deviations for a red alert. Other factors come into play as well, such as checking whether the previous year's value was an anomaly, weighting by the number of units shipped, and a few other things thrown in.
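To make the alert logic concrete, here's a minimal sketch of the core control-limit calculation as described: classify a value by how far its year-over-year difference falls from the historical mean of YOY differences. The function name, inputs, and thresholds are my own illustration; Whirlpool's actual model also weights by units shipped, screens for prior-year anomalies, and incorporates other factors not shown here.

```python
from statistics import mean, stdev

def classify_yoy(diff_history, current_value, prior_year_value):
    """Flag a data point by its year-over-year change (illustrative only).

    diff_history: historical YOY differences for this product/period.
    Returns 'red' (beyond 3 sigma), 'yellow' (beyond 2 sigma), or 'ok'.
    """
    diff = current_value - prior_year_value
    mu = mean(diff_history)
    sigma = stdev(diff_history)
    if abs(diff - mu) > 3 * sigma:
        return "red"
    if abs(diff - mu) > 2 * sigma:
        return "yellow"
    return "ok"

# Hypothetical history of YOY differences for one product:
hist = [-8, 5, 12, -3, 0, 7, -10, 4]
print(classify_yoy(hist, 1500, 1450))  # a jump of 50 units → "red"
print(classify_yoy(hist, 1452, 1450))  # a change of 2 units → "ok"
```

In a dashboard like the one described, only the color would surface to the business owner; the mean, standard deviations, and seasonal reasoning stay hidden in the model.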
The analytical calculations behind a dashboard might incorporate internal forecasts or market/industry values, and may or may not adjust for seasonal fluctuations, depending on the particular measurement. The dashboard visuals, however, conceal all the complexity of the underlying model. An alert isn't necessarily bad; it indicates a data point outside the expected range that warrants investigation or explanation. They've seen some success in reducing variability and thereby making their forecasts more accurate: preventing rather than detecting defects.
They're also using SAP's Xcelsius for the dashboards themselves; that's the third company I've heard here that's using it, which is likely due in part to the large number of SAP users, but also speaks to the flexibility and ease of use of that tool. They're using SAP's Business Warehouse to house the data, extracted from their core ERP system nightly: considerably more up-to-date than some of the others that we've seen here, which rely on monthly extracts manipulated in Excel. Although IT was involved in creating and maintaining BW, the LSS team owns their own use of Xcelsius, which allows them to modify the dashboards quickly.