SAP’s Bigger Picture: The AppDev Play

Although I attended some sessions related to BPM and operational process intelligence, last week’s trip to SAP TechEd && d-code 2014 gave me a bit more breathing room to look at the bigger picture — and aspirations — of SAP and their business technology offerings.

I started coming to SAPPHIRE and TechEd when SAP released a BPM product, which means that my area of interest was a tiny part of their primary focus on ERP, financials and related software solutions; most of the attendees (including the analysts and bloggers) at that time were more concerned with licensing models for their Business Suite software than new technology platforms. Fast forwarding, SAP is retooling their core software applications using HANA as an in-memory platform (cloud or on-premise) and SAP UI5/Fiori for user experience, but there’s something much bigger than that afoot: SAP is making a significant development platform play using those same technologies that are working so well for their own application refactoring. In other words, you can consider SAP’s software applications groups to be software developers who use SAP platforms and tools, but those tools are also available to external developers who are building applications completely unrelated to SAP applications.

They have some strong components: in-memory database, analytics, cloud, UI frameworks; they are also starting to push down more functionality into HANA such as some rudimentary rules and process functionality that can be leveraged by a development team that doesn’t want to add a full-fledged BRM or BPM system.

This is definitely a shift for SAP over the past few years, and one that most of their customers are likely unaware of; the question becomes whether their application development tools are sufficiently compelling for independent software development shops to take a look.

Disclaimer: SAP paid my travel expenses to be at TechEd last week. I was not compensated for my time in any way, including writing, and the opinions here are my own.

What’s New With SAP Operational Process Intelligence

Just finishing up some notes from my trip to SAP TechEd && d-code last week with the latest on their Operational Process Intelligence product, which can pull events and data from multiple systems – including SAP’s ERP and other core enterprise systems as well as SAP BPM – and provides real-time analytics via their HANA in-memory database. I attended a session on this, then had an individual briefing later to round things out.

Big processes are becoming a thing, and if you have big processes (that is, processes that span multiple systems, and consume and emit high volumes of data from a variety of sources), you need to have operational intelligence integrated into those processes. SAP is addressing this with their SAP Operational Process Intelligence, or what they see as a GPS for your business: a holistic view of where you are relative to your goals, the obstacles in your path, and the best way to reach your goals. It’s not just about what has happened already (traditional business intelligence), but what is happening right now (real-time analytics), what is going to happen (predictive analytics) and the ability to adjust the business process to accommodate the changing environment (sense and respond). Furthermore, it includes data and events from multiple systems, hence needs to provide scope beyond any one system’s analytics; narrow scope has been a major shortcoming of BPMS-based analytics in the past.

In a breakout session, Thomas Volmering and Harsh Jegadeesan gave an update and demo on the latest in their OPInt product. There are some new visualization features since I last saw it, plus the ability to do more with guided tasks, including kicking off other processes and triggering alerts based on KPIs. Their demo is based on a real logistics hub operation, which combines a wide variety of people, processes and systems, with the added complexity of physical goods movement.

Although rules have always been a part of their product suite, BRM is being highlighted as a more active participant in detecting conditions, then making predictions and recommendations, leveraging the ability to run rules directly in HANA: putting real-time guardrails around a business process or scenario. They also use rules to instantiate processes in BPM, such as for exception handling. This closer integration of rules is new since I last saw OPInt back at SAPPHIRE, and clearly elevates this from an analytics application to an operational intelligence platform that can sense and respond to events. Since SAP BPM has been able to use HANA as a database platform for at least a year, I assume that we will eventually see some BPM functionality (besides simple queuing) pushed down into HANA, as they have done with BRM, allowing for more predictive behavior and analytics-dependent functions such as work management to be built into BPM processes. As it is, hosting BPM on HANA allows the real-time data to be integrated directly into any other analytics, including OPInt.
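To make the “guardrails” idea concrete: the pattern is rules continuously evaluating live process metrics and triggering responses, such as instantiating an exception-handling process. This is not SAP’s actual rule or process API, just a minimal illustrative sketch of the pattern in Python, with hypothetical rule names and thresholds:

```python
# Minimal sketch of the "guardrails" pattern: rules evaluate live process
# metrics and return the responses to trigger. All names and thresholds
# here are hypothetical, not SAP BRM/BPM APIs.

def evaluate_guardrails(metrics, rules):
    """Return the actions whose rule conditions match the current metrics."""
    return [rule["action"] for rule in rules if rule["condition"](metrics)]

# Example rules: detect a condition, then recommend or instantiate a response.
rules = [
    {"condition": lambda m: m["cycle_time_hours"] > 48,
     "action": "start_escalation_process"},
    {"condition": lambda m: m["error_rate"] > 0.05,
     "action": "alert_operations"},
]

current = {"cycle_time_hours": 50, "error_rate": 0.01}
print(evaluate_guardrails(current, rules))  # ['start_escalation_process']
```

The point of pushing this down into HANA is that the `condition` checks run where the real-time data already lives, rather than round-tripping through a separate rules server.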

OPInt provides ad hoc task management using a modern collaborative UI to define actions, tasks and participants; this is providing the primary “case management” capability now, although it’s really a somewhat simpler form of collaborative task management. With HANA behind the scenes, however, there is the opportunity for SAP to take this further down the road towards full case management, although the separation of this from their BPM platform may not prove to be a good thing for all of the hybrid structured/unstructured processes out there.

The creation of the underlying models looks similar to what I’ve been seeing from them for a while: the business scenario is defined as a graphical flow model (or imported from a process in Business Suite), organized into phases and milestones that will frame the visualization, and connected to the data sources; but now the rules can be identified directly on the process elements. The dashboard is automatically created, although it can be customized. In a new view (currently still in the lab), you will also be able to see the underlying process model with overlaid analytics, e.g., cost data; this seems like a perfect opportunity for a process mining/discovery visualization, although that’s more of a tool for an analyst than whoever might be monitoring a process in real-time.

SAP TechEd Keynote with @_bgoerke

I spent yesterday getting to Las Vegas for SAP TechEd && d-code and missed last night’s keynote with Steve Lucas, but I was up this morning to watch Björn Goerke — head of SAP Product & Innovation Technology — give the morning keynote on putting new technology into action. With the increasing rate of digital disruption, it’s imperative to embrace new ways of doing business, or risk becoming obsolete; this requires taking advantage of big data and real-time analytics as well as modern platforms. SAP’s current catch phrase is “Run Simple”, based in part on the idea of “one truth”, that is, one place for all your data so that you have a real-time view of your business rather than relying on separate sources for operations and analytics. You can’t run — and respond — at the speed that business requires if your analytics are based on yesterday’s transactions.

SAP HANA — their in-memory data store — allows for real-time analytics directly on operational transaction data, events, IoT machine data, social media data and more, all in a single data store. With the release of SAP HANA SPS09, they are adding support for dynamic tiering, streaming, enterprise information management, graphing, Hadoop user-defined functions, and multi-tenancy; these improve the management capabilities as well as the functionality. SAP deploys all of their business software solutions on HANA (although some more traditional databases are still supported in some products) with the goal of providing the basis for the “one truth” within business data.

Goerke was joined on stage by a representative from Alliander, an energy distribution company based in the Netherlands, and he demonstrated a HANA-based analytical dashboard built on geographic data that reduces the time required for geospatial queries — such as filtering by pipelines that are within a certain distance from buildings — from hours using more traditional database technology, to seconds with HANA. Geospatial data is one of the areas where in-memory data and analytics can really make a difference in terms of performance; I did a lot of my early-career software development on geospatial data, and there are some tough problems here that are not easily addressed by more traditional tools.

Another part of the simplicity message is “one experience” via the SAPUI5-based Fiori, providing for a more unified experience between desktop and mobile, including management and distribution of mobile apps. They’ve added offline capabilities for their mobile apps – a capability widely ignored or dismissed as “unimportant” by developers who live and work only in areas blanketed in 4G and WiFi coverage, but critical in many real-world applications. Goerke demonstrated using some of the application development services — with some “help” from Ian Kimbell — to define an API, use it to create a mobile app, deploy it to a company app store, then install and run it: not something that most executives do live on stage at a keynote.

SAP now has a number of partnerships with hardware and infrastructure vendors to optimize their gear for SAP and especially for HANA: last week we saw an announcement about SAP running on the IBM cloud, and today we heard about how SGI is taking their well-known computational hardware capabilities and applying them to running transactional platforms such as SAP. SAP has also partnered with small software development shops to deliver the innovations in HANA-based applications needed to drive this forward. Applications developed on HANA can run on premise or in SAP’s managed cloud (and now IBM’s managed cloud), where they manage HANA and the SAP applications including Business Suite and Business Warehouse. Through a number of strategic acquisitions, SAP has much more than just your ERP and financials, however: they offer solutions for HR management, procurement, e-commerce, customer engagement and more. They also offer a rich set of development tools and application services for software development unrelated to SAP applications, allowing for applications built and deployed on HANA with modern mobile user interfaces and collaboration. In keeping with Goerke’s Star Trek theme in the keynote, very Borg-like. 🙂

Lots more here than I could possibly capture; you can watch the keynotes and other presentations online at SAP TechEd online.

AIIM Information Chaos Rescue Mission – Toronto Edition

AIIM is holding a series of ECM-related seminars across North America, and since today’s is practically in my back yard, I decided to check it out. It’s a free seminar, so heavily sponsored; most of the talks are from the sponsor vendors or conversations with them, but John Mancini kicked things off and moderated mini-panels with the sponsor speakers to tease out some of the common threads.

The morning started with John Mancini talking about disruptive consumer technologies — cloud, mobile, IoT — and how these are “breaking” our internal business processes by fragmenting the channels and information sources. The result is information chaos, where information about a client lives in multiple places and often can’t be properly aggregated and contextualized, while still remaining secure. Our legacy systems, designed to be secure, were put in place before the devices that are causing security leaks were even invented; those older systems can’t even envision all the ways that information can leak out of an organization. Furthermore, the more consumer technologies advance, the further behind our IT people seem, making it more likely that business users will just go outside/around IT for what they need. New technologies need to be put in the context of our information management practices, and those practices adjusted to include the disruptors, rather than just ignore them: consider how to minimize risk in this information chaos state; how to use information to engage and collaborate, rather than just shutting it away in a vault; how to automate processes that involve information that may not be stored in an ECM; and how to extract insights from this information.

A speaker from Fujitsu was up next, citing some interesting statistics on just how big the information chaos problem is:

  • 50% of business documents are still on paper; most businesses have many of their processes still reliant on paper.
  • Departmental CM systems have proliferated: 75% of organizations with a CM system have more than one, and 25% have more than four. SharePoint is like a virus among them, with an estimated 50% of organizations worldwide using SharePoint ostensibly for collaboration, but usually for ad hoc content management.
  • Legacy CM systems are themselves a hidden source of costs, inefficiency and risk.

In other words, we have a lot of problems to tackle still: large organizations tend to have a lot of non-integrated content management systems; smaller organizations tend to have none at all.

We finished the first morning segment with an introduction from the event sponsors at small booths around the room.

An obvious omission (to me, anyway) was IBM/FileNet — not sure why they are not here as a sponsor considering that they have a sizable local contingent.

The rest of the morning was taken up with two sets of short vendor presentations, each followed by a Q&A session moderated by John Mancini: first Epson, K2 and EMC; then KnowledgeLake, HP Autonomy, Kodak Alaris and OpenText. There were audience questions about information security and risk, collaboration/case management, ECM benefits and change management, auto-classification, SharePoint proliferation, cloud storage, managing content retention and disposal, and many other topics; lots of good discussions from the panelists. I was amazed (or maybe just sadly accepting) at the number of questions dealing with paper capture and disposal; I’ve been working in scanning/workflow/ECM/BPM since the late ’80s, and apparently there are still a lot of people and processes resistant to getting rid of paper. As a small business owner, I run a paperless office, and have spent a big chunk of my career helping much larger enterprises go paperless as part of streamlining their processes, so I know that this is not only possible, but has a lot of benefits. As one of the vendors pointed out, just do something, rather than sitting frozen, surrounded by ever-increasing piles of paper.

I skipped out at lunchtime and missed the closing keynote since it was the only bit remaining after the lunch break, although it looked like a lot of the customer attendees stayed around for the closing and the prize draws afterwards, plus to spend time chatting with the vendors.

Thanks to AIIM and the sponsors for today’s seminar; the presentations were a bit too sales-y for me, but there were some good nuggets of information. There are still seminars remaining in Chicago and Minneapolis next week if you want to sign up.

PegaWORLD Breakout: The Process Of Everything

Setrag Khoshafian and Bruce Williams of Pega led a breakout session discussing the crossover between the internet of things (IoT) — also known as the internet of everything (IoE) or the industrial internet — and BPM as we know it. The “things” in IoT can be any physical device, from the Fitbit on my wrist to my RFID-enabled conference badge to the plane that flew me here, none of which you would think of primarily as a computing device. If you check out my coverage of the Bosch Connected World conference from earlier this year, there’s a lot being done in this area, and these devices are becoming full participants in our business processes. Connected devices are now pervasive in several sectors, from consumer to manufacturing to logistics, with many of the interactions being between machines, not between people and machines, enabled by automation of processes and decisions over standard communication networks. There’s an explosion of products and players, and also an explosion of interest, putting us in the middle of the tipping point for IoT. There are still a number of challenges here, such as standardization of platforms and protocols: I expect to see massive adoption of dead-end technologies, but hopefully they’re so inexpensive that changing out to standardized platforms won’t be too painful in a couple of years.

Getting everything instrumented is the first step, but devices on their own don’t have a lot of value; as Khoshafian pointed out, we need to turn the internet of things into the process of everything. A sea of events needs to feed into a sense/respond engine that drives towards outcomes, whether a simple status outcome, a repair request, or automation and control. BPM, or at least the broad definition of intelligent BPM that includes decisions and analytics, is the perfect match for that sense and respond capability. There are widespread IoT applications for energy saving through smart homes and offices regulating and adjusting their energy consumption based on demand and environmental conditions; in my house, we have a Nest smoke/CO detector and some WeMo smart metered electrical outlets, both of which can be monitored and controlled remotely (which is what happens when a systems engineer and a controls engineer get together). I’ve seen a number of interesting applications in healthcare recently as well; Williams described nanobots being used in surgery and Google Glass used by healthcare workers, as well as many personal health sensors available for everyday home use. Cool stuff, although many people will be freaked out by the level of monitoring and surveillance that is now possible from many devices in your home, office and public environments.
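The sense/respond loop described above — a stream of device events reduced to an outcome such as a status, a repair request, or a control action — can be sketched generically. Nothing here is Pega-specific or tied to any real device API; the device type, thresholds, and outcome names are all illustrative:

```python
# Generic sense/respond sketch: reduce a stream of device readings to an
# outcome. The CO-detector scenario and thresholds are illustrative only,
# not taken from any actual IoT platform or product.

from statistics import mean

def respond(readings, co_alarm_ppm=70):
    """Sense: inspect recent CO readings (ppm); respond: choose an outcome."""
    if max(readings) >= co_alarm_ppm:
        return "sound_alarm_and_notify"       # immediate control action
    if mean(readings) >= co_alarm_ppm * 0.5:
        return "open_repair_request"          # e.g., a failing furnace
    return "status_ok"                        # simple status outcome

print(respond([5, 8, 6]))      # status_ok
print(respond([40, 42, 38]))   # open_repair_request
print(respond([30, 75, 50]))   # sound_alarm_and_notify
```

The interesting part, as the session emphasized, is the middle case: the device alone would stay silent, but a process engine watching the event stream can detect the trend and kick off a repair process before the alarm condition is ever reached.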

This was more of a visionary session than any practicalities of using Pega products for addressing IoT applications, although we did hear a bit about the technological ramifications in terms of authentication, integration, open standards, and managing and detecting patterns in the sheer volume of device data. Definitely some technical challenges ahead.

We’re headed off to lunch and the technology pavilion, but first I’m going to use the WeMo app on my phone to turn on the desk lamp in my home office so that my cat can snooze under it for the afternoon: the small scale practical application of IoT.

AWD Advance14: The New Face Of Work

I’m spending the last session of the last day at DST’s AWD Advance conference with Arti Deshpande and Karla Floyd as they talk about how their more flexible user experience came to be. They looked at the new case management user experience, which is research-driven and requires very little training to use, and compared it to the processor workspace, which looks kind of like someone strapped the Windows API onto a green screen.

To start on the redesign of the processor workspace, they did quite a bit of usability evaluation, based on a number of different channels, and laid out design principles and specific goals that they were attempting to reach. They focused on 12 key screens and the navigation between them, then expanded to the conceptual redesign of 66 screens. They’re currently continuing to research and conceptualize, and doing iterative usability testing; they actively recruited usability testers from their customers in the audience during the presentation. They’ve worked with about 20 different clients on this, through active evaluations and visits but also through user forums of other sorts.

We saw a demo of the new screens, which started with a demo of the existing screens to highlight some of the problems with their usability, then moved on to the redesigned worklist grid view. The grid column order/presence is configurable by the user, and saved in their profile; the grid can be filtered by a few attributes such as how the work item was assigned to the worklist, and whether it is part of a case. Icons on the work items indicate whether there are comments or attachments, and if they are locked. For a selected work item, you can also display all relationships to that item as a tree structure, such as what cases and folders are associated with it. Reassigning work to another user allows adding a comment in the same action. Actions (such as suspending a work item) can be done from the worklist grid or from the banner of the open work item. The suspend work item action also allows adding a comment and specifying a time to reactivate it back to the worklist – combining actions into a single dialog like this is definitely a time-saver and something that they’ve obviously focused on cleaning up. Suspended items still appear in the worklist and searches but are in a lighter font until their suspense expires – this saves adding another icon or column to indicate suspense.

Comments can be previewed and pinned open by hovering over the work item icon in the worklist, and the comments for a work item can be sorted and filtered. Comments can be nested; this could cause issues for customers who are generating custom reports from the comments table in the database, at least one of whom was in the audience. (For those of you who have never worked with rigid legacy systems, know that generating reports from comment fields is actually quite common, with users being trained to enter some comments in a certain format in order to be picked up in the reports. I *know*.)
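For those unfamiliar with the practice, here is a hypothetical sketch of what that kind of comment-field reporting looks like: users are trained to embed structured tokens in free-text comments, and a report job parses them back out with a regex. The field names and format are invented for illustration, not AWD’s actual comment schema — which is exactly why a change like nested comments can break such reports:

```python
# Hypothetical sketch of reporting from a comments field: users enter
# agreed "KEY=value" tokens in free text, and a report job extracts them.
# Field names and format are illustrative, not an actual AWD schema.

import re

PATTERN = re.compile(r"\b(REASON|REF)=(\S+)")

def extract_report_fields(comment):
    """Pull the agreed-format tokens out of a free-text comment."""
    return dict(PATTERN.findall(comment))

comment = "Called client re: delay. REASON=missing_docs REF=TX-1234"
print(extract_report_fields(comment))
# {'REASON': 'missing_docs', 'REF': 'TX-1234'}
```

Fragile, obviously — any change to how comments are stored (such as nesting them) silently changes what the regex sees — which is why the customer in the audience was concerned.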

The workspace gains a movable vertical divider, allowing the space to be allocated completely to the worklist grid, or completely to the open work item; this is a significant enhancement since it allows the user to personalize their environment to optimize for what they’re working on at the time.

The delivery goal for all of this is Q4 2014, and they have future plans for more personalization and improved search. Some nice improvements here, but I predict that the comments thing is going to be a bit of a barrier for some customers.

That’s it for the conference; we’re all off to the Hard Rock Café for a private concert featuring the Barenaked Ladies, a personal favorite of mine. I’ll be quiet for a few days, then off to bpmNEXT in Monterey next week.

AWD Advance14: Case Management And Unpredictability

I finished off the first day at DST’s AWD Advance conference with Judith Morley’s presentation on case management, which addressed knowledge workers and the unpredictable processes that they deal with every day. She presented last year about their case management, which was pretty new and a strong theme throughout last year’s conference. As I wrote back then, AWD case management is a set of capabilities built on top of their structured BPM, not a separate tool, that manifests through a user workspace that can be enabled for specific users. These capabilities include concepts of case ownership (including team ownership), tasks within cases, task and case prioritization, and collaboration. Their roadmap for case management includes some new mobile case views, more sophisticated case templates, more automation and better reporting.

They don’t have any customers live on case management yet, but some are pretty close. The applications that they are seeing being developed (or considered) at their clients include:

  • New retirement plan onboarding
  • Mutual fund corporate actions, e.g., new fund setup, mergers
  • Transfer of assets
  • Complaints, which involve both structured process and unstructured cases
  • Securitized debt
  • Health insurance appeals and grievances at their BPO operation
  • Immigration services

The key thing for them is to get some of these customers up and running on case management to prove their capabilities and the expected benefits; without that, it’s all a bit academic.

There’s probably not really anything groundbreaking compared to any other case management products, but the fact that it’s built on the standard AWD platform means that it’s completely integrated with their structured process applications, allowing for a mix of transactional workers and knowledge workers on the same piece of work, sharing the security layer and other underlying models. For the huge amount of work that lies in the middle of the structured to unstructured process spectrum, this is essential.

That’s it for day 1 of AWD Advance 2014 – I’m off to enjoy a bit of that Florida sunshine, but I’ll be back tomorrow. Blogging may be a bit light since I’m presenting in the afternoon.

Bosch ConnectedWorld: Smart Cities, Smart Homes

We’re on to the afternoon breakout sessions at the second (last) day of Bosch ConnectedWorld, starting with the smart city initiatives in Monaco. Their goal is to improve the quality of life for citizens and visitors in this city-state that has both the highest population density and the highest per capita income; this is being addressed through capturing and combining data from a number of different connected components, and integrating with higher-level rules and processes. This manifests in a number of different areas, from energy to transportation to waste management. There were not a lot of specifics, but they appear to be fairly early in the process so haven’t designed, much less deployed, much yet; they are starting with an initiative to add smart sensors wherever possible to enable future smart city capabilities.

We moved from smart cities to smart homes with Bernhard Dörstel from Busch-Jaeger Elektro (a division of ABB), discussing trends in smart homes and the shift of home automation from oddity to mass market. For non-residential buildings, the KNX standard provides guidelines for automated devices, but that standard hasn’t been fully adopted for home automation, which has different concerns and functions than non-residential. That inhibits broad acceptance of these systems, so that they remain “toys” for the financially well-off rather than a part of every home. One trend that is likely to change this adoption is the demographic shift to an older population: selling the (now) middle-aged consumer on the security and control benefits of home automation while they are in their prime earning years and living in a home that they own. Although not explicitly stated, those consumers are also positioned to take greater advantage of smart home technology as they grow older, since it can be used to help them to live independently in their own homes for a longer period of time.

We have a break now, then a short panel discussion and closing remarks, so this is likely the last post from Bosch ConnectedWorld. It was great to have the chance to attend and see how BPM and rules are being used within the context of the internet of things.

Smart Energy At Bosch ConnectedWorld

I was a bit late to the start of the breakout track focused on smart energy solutions, and missed some of the presentation by Cordelia Thielitz of Bosch Energy Storage Solutions group, but I was able to see some of her case studies for renewable energy usage, such as the use of PV (photovoltaic) combined with scheduling to allow the PV to be used during times of peak prices, reverting to the grid when the cost is lower. Although I think that it is less common in Europe, time-of-use pricing is very common in North America; in Toronto, where I live, the off-peak electricity price for households is only 55% of the peak price, so timing the use of locally-generated energy to avoid the peak can result in a significant cost savings.
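To put a rough number on that: with off-peak at 55% of the peak rate, every kilowatt-hour shifted off-peak (or covered by PV during peak hours) saves the 45% difference. The peak price and consumption figures below are assumed purely for illustration, not an actual tariff:

```python
# Time-of-use arithmetic: off-peak at 55% of the peak price means each
# kWh shifted off-peak saves 45% of the peak rate. The peak price and
# kWh figure below are assumed for illustration, not an actual tariff.

peak_price = 0.13                # $/kWh, illustrative assumption
off_peak_price = 0.55 * peak_price

kwh_shifted = 300                # kWh moved off-peak per month, illustrative
monthly_savings = kwh_shifted * (peak_price - off_peak_price)

print(f"${monthly_savings:.2f} saved per month")  # $17.55 saved per month
```

Small per household, but multiplied across a utility’s customer base — or combined with storage so that PV output can be timed against the peak — it adds up quickly.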

The second speaker in this track was Thomas Schäfer of Stromnetz Berlin, which operates the power grid and electricity delivery networks for the city. They are adding technology to improve the performance of the energy grid, starting with adding online measurement of network stations and allowing remote control of these stations, which enables faster switchover in times of power outages so that customers spend less time in the dark. The new technologies also can reduce latency of new connections and service changes, as well as reduce costs, allowing them to remain competitive in a deregulated energy market.

The final speaker in the energy track was Roberto Greening of Bosch SI, showcasing their Virtual Power Plant (VPP) vision that will allow for the monitoring and control of distributed energy providers. Traditionally, the energy grid was made up of a small number of large power plants (fossil fuel/nuclear) that generate an expected amount of electricity onto a common transport and delivery infrastructure. As new plants come online — including sources such as wind power that can be highly variable — the grid needs to get smarter in order to completely understand generation, traffic and consumption. In fact, in Germany, wind and PV sources don’t feed into the high tension transport grid, they feed directly into the distribution network, so the location of the monitoring and measurement needs to change as well. In the last couple of years, things have changed even more: wind and solar increased significantly, nuclear power stations were taken offline, consumers fed their own generated energy back into the grid, and electricity needed to flow from the distribution network back up to the high tension network for long-range transport. What’s needed is intelligent energy management across this complex, heterogeneous network of plants, networks and consumers.

Bosch SI ConnectedWorld Day 2 Keynotes

Day 2 of Bosch SI’s ConnectedWorld conference in Berlin started with a keynote by Dr. Volkmar Denner, Chairman of Robert Bosch (the parent company of Bosch SI and many other subsidiaries). He had a strong message about Bosch’s commitment to continue expanding their repertoire of IP-connected devices. As a major manufacturer of sensors and other enablers for smart technology in automotive, industrial and home applications, they have had to build a lot of the infrastructure required for creating smart devices and systems, including the software stack of BPM and BRM, interfaced with device management. As with most new technologies, however, it’s more than just the technology: it’s about new business models that take advantage of that technology, and solutions created to serve those business models. Consider car-sharing, a business model enabled by on-vehicle connectivity technology: although the technology is relatively straightforward, the business model is completely disruptive to the rental car market as well as car sales and leasing. Denner spoke about a number of other emerging technologies and how they are enabling further disruption in the transportation/mobility market by considering multi-modal solutions, including electric bikes and cars that require models for shared charging stations.

Bosch is doing some impressive things on their own in the IoT area, and is pushing it forward even further by partnering with ABB, Cisco and LG to develop open standards for smart home solutions. This will eventually need to address issues of data privacy and security; this has been a hot topic of discussion here since the BMW speaker yesterday stated that they own the data generated by the BMW that you bought.

We also heard from Michael Ganser of Cisco in the morning keynote; his talk was a fascinating look at some of the trends in the “internet of everything” in a hyperconnected world, and drivers for embracing that. There’s a lot of paranoia around having everything in your environment connected and monitoring you, but a lot of potential benefits as well: he mentioned that 30% (or more) of traffic in some urban centers is just people looking for parking; smart parking solutions could radically reduce that by matching people with parking spots.

Looking forward to today’s sessions on smart energy grids and smart homes.