SAPPHIRENOW Vishal Sikka Keynote – HANA For Speed, Fiori For Usability

Vishal Sikka, who leads technology and innovation at SAP, followed Hasso Plattner onto the keynote stage; I decided to split the post and publish Plattner’s portion on its own, since my commentary was getting a bit long.

Sikka also started his part of the keynote with HANA, and highlighted some customer case studies from their “10,000 Club”, where operations are more than 10,000 times faster when moved to HANA, plus one customer with an operation that runs 1 million times faster on HANA. He talked about how the imperatives for innovation are equal parts math and design: it has to be fast, but it also has to solve business problems. HANA provides the speed and some amount of the problem-solving, but really good user experience design has to be part of the equation. To that end, SAP is launching Fiori, a collection of 25 easy-to-use applications for the most common SAP ERP and data warehouse functions, supported on phone, tablet and desktop platforms with a single code base. Although this doesn’t replace the thousands of existing screens, it can likely replace the old screens for many user personas. As part of the development of Fiori, they partnered with Google and optimized the applications for Chrome, which is a pretty bold move. They’ve also introduced a lot of new forms of data visualization, replacing mundane list-style reports with more fluid forms that are more common on specialized data visualization platforms such as Spotfire.

SAP Fiori

Fiori doesn’t depend on HANA (although you can imagine the potential for combining HANA analytics with Fiori visualization), and it can be purchased directly from the HANA Marketplace. You can find out more about SAP’s UX development, including Fiori, on their user experience community site.

Returning to HANA, and to highlight that HANA is also a platform for non-SAP applications, Sikka showed some of the third-party analytics applications developed by other companies on the HANA platform, including eBay and Adobe. There are over 300 companies developing applications on HANA, many addressing specific vertical industries.

That’s it for me from SAPPHIRE NOW 2013 — there’s a press Q&A with Plattner and Sikka coming up, but I need to head for the airport so I will catch it online. As a reminder, you can see all of the recorded video (as well as some remaining live streams today) from the conference here.

SAPPHIRENOW Hasso Plattner Keynote – Is HANA The New Mainframe (In A Good Way)?

It’s the last day of SAP’s enormous SAPPHIRE NOW 2013 conference here in Orlando, and the day opens with Hasso Plattner, one of the founders of SAP who still holds a role in defining technology strategy. As expected, he starts with HANA and cloud. He got a good laugh from the audience when saying that HANA is there to radically speed up some of the very slow bits in SAP’s ERP software, such as overnight processes, stating apologetically: “I had no idea that we had software that took longer than 24 hours to run. You should have sent me an email.” He also discussed cloud architectures, specifically multi-tenancy versus dedicated instances, and said that although many large businesses didn’t want to share instances with anyone else for privacy and competitive reasons, multi-tenancy becomes less important when everything is in memory. They have three different cloud architectures to deal with all scenarios: HANA One on Amazon AWS, which is a fully public multi-tenant cloud currently used by about 600 companies; their own managed cloud using virtualization to provide a private instance for medium to large companies; and dedicated servers without virtualization in their managed cloud (really a hosted server configuration) for huge companies where the size warrants it.

Much of his keynote was spent rebutting myths about HANA — obviously, SAP has been a bit stung by the press and competitors calling their baby ugly — including the compression factor between how much data is on disk versus in memory at any given time, the relative efficiency of HANA columnar storage over classic relational record storage, support on non-proprietary hardware, continued support of other database platforms for their Business Suite, HANA stability, and use of HANA for non-SAP applications. I’m not sure that was the right message: it seemed very defensive rather than talking about the future of SAP technology, although maybe the standard SAP user sitting in the audience needed to hear this directly from Plattner. He did end with some words on how customers can move forward: even if they don’t want to change database or platform, moving to the current version of the suite will provide some performance and functionality improvements, while putting them in the position to move to Business Suite on HANA (either on-premise or on the Enterprise Cloud) in the future for a much bigger performance boost.

HANA is more than just a database: it’s database, application server, analytics and portals bundled together for greater performance. It’s like the new mainframe, except that it runs on industry-standard x86-based hardware, and in-memory processing means it lacks the lengthy batch operations that we associate with old-school mainframe applications. It’s OLTP and OLAP all in one, so there’s no separation between operational data stores and data warehouses. As long as all of the platform components remain (relatively) innovative, this is great, for the same reason that mainframes were great in their day. HANA provides a great degree of openness, allowing code written in Java and a number of other common languages to be deployed in a JVM environment and use HANA as just a database and application server, but the real differentiating benefits will come with using the HANA-specific analytics and other functionality. Therein lies the risk: if SAP can keep HANA innovative, then it will be a great platform for application development; if they hearken back to their somewhat conservative roots and the innovations are slow to roll out, HANA developers will become frustrated, and less likely to create applications that fully exploit (and therefore depend upon) the HANA platform.
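
To make the “HANA as just a database” scenario concrete, here’s a minimal sketch (my own illustration, not SAP sample code) of a Java application talking to HANA over plain JDBC. It assumes the HANA JDBC driver (ngdbc.jar) is on the classpath, and the host, port, credentials and the SALES table are all hypothetical placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class HanaJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the jdbc:sap:// URL scheme comes with
        // the HANA client JDBC driver (ngdbc.jar).
        String url = "jdbc:sap://hana-host:30015/";
        try (Connection conn = DriverManager.getConnection(url, "APP_USER", "secret")) {
            // Ordinary SQL against a hypothetical SALES table: at this level HANA
            // behaves like any other JDBC data source, and the columnar in-memory
            // engine is invisible to the application code.
            String sql = "SELECT REGION, SUM(AMOUNT) AS TOTAL FROM SALES " +
                         "WHERE SALE_DATE >= ? GROUP BY REGION";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setDate(1, java.sql.Date.valueOf("2013-01-01"));
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("REGION") + ": " + rs.getBigDecimal("TOTAL"));
                    }
                }
            }
        }
    }
}
```

The differentiation that SAP is selling only shows up when you go beyond this level and use the HANA-specific analytics and application services, which is exactly the lock-in trade-off described above.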

SAP HANA Enterprise Cloud

Ingrid Van Den Hoogen and Kevin Ichhpurani gave a press briefing on what’s coming for HANA Enterprise Cloud following the launch last week. Now that the cloud offering is available, existing customers can move any of their HANA-based applications — Business Suite, CRM, Business Warehouse, and custom applications — to the cloud platform. There’s also a gateway that allows interaction between the cloud-based applications and other applications left on premise. Customers can bring their own HANA licences, and use SAP services to onboard and migrate their existing systems to the cloud.

HANA Enterprise Cloud is the enterprise-strength, managed cloud version of HANA; there’s also HANA One, which uses the Amazon public cloud for a lower-end entry point at $0.99/hour and a maximum of 30GB of data. Combined with HANA on premise (using gear from a certified hardware partner) and hosting partner OEM versions of HANA cloud that they repackage and run on their own environments (e.g., IBM or telcos), this provides a range of HANA deployment environments. HANA functionality is the same whether on AWS, on premise or on SAP’s managed cloud; moving between environments (such as moving an application from development/test on HANA One to production on HANA Enterprise Cloud) is a simple “lift and shift”: export from one environment and import into the target environment. The CIO from Florida Crystals was in the audience to talk about their experience moving to HANA in the cloud; they moved their SAP ERP environment from an outsourced data center to HANA Enterprise Cloud in 180 hours (that’s the migration time, not the assessment and planning time).

SAP is in the process of baking some of the HANA extensions into the base HANA platform; currently, there’s some amount of confusion about what “HANA” will actually provide in the future, although I’m sure that we’ll hear more about this as the updates are released.

SAPPHIRENOW Day 2 Keynote

This morning, our opening keynote was from SAP’s other co-CEO, Jim Snabe. He started with a bit about competitive advantage and adaptation to changing conditions, illustrated with the fact that Sumatran tigers have evolved webbed feet so that they can chase their prey into water: evolution and even extinction in business is not much different from that in the natural world, it just happens at a much faster pace. In business, we have both gradual evolution through continuous improvement, and quantum leaps caused primarily by the introduction of disruptive technology. Snabe positioned HANA as one of those disruptive technologies.

Ron Dennis, chairman of McLaren Group, joined Snabe to talk about how they’re using HANA to gather, analyze and visualize data from their cars during Formula 1 races: 6.5 billion data points per car per race. We saw a prototype dashboard for visualizing that data, and heard how the data is used to make predictions and optimize performance during the race. Your processes probably don’t generate 6.5B events per instance, but in-flight optimization is something that’s beyond the capabilities of many organizations unless they use big data and predictive analytics. Integrating this functionality into process management may well be what allows the large vendors such as SAP and IBM to regain the BPM innovation advantage over some of the smaller and more nimble vendors. Survival of the fittest, indeed.

Snabe talked about other applications for HANA, such as in healthcare, where big data allows for comprehensive individual DNA analysis and disease prevention, before returning to the idea of using it for realtime business optimization that allows organizations to adapt and thrive. SAP is pushing all of their products onto HANA as the database platform, first providing data warehousing capabilities, SuccessFactors and now their Business Suite on HANA for greatly improved performance due to in-memory processing. They’ve opened up the platform so that other companies can develop applications on HANA, which will help to drive it into vertical industries. Interestingly, Snabe made the point that having realtime in-memory processing not only makes things faster, it also makes applications less complex, since some of the complexity in code is due to disk and processing latency. They have 1,500 customers on HANA now, and that number is growing fast.

HANA and in-memory processing was just one of the three “quantum leaps” that SAP has been effecting during the last three years; the second is having everything available in the cloud. Just as in-memory processing increases speed and reduces complexity in applications, cloud increases speed and reduces complexity in IT implementations. In the three years that they’ve been at it, and including their SuccessFactors and Ariba acquisitions, they’ve gained 29 million users in the cloud. He was joined by executives from PepsiCo, Timken and Nespresso to talk about their transition to cloud, which included SuccessFactors for cloud-based performance management and HR across their global operations, and CRM in the cloud.

Combining their HANA and cloud initiatives, SAP launched HANA Enterprise Cloud last week, with HANA running on SAP’s infrastructure, which will allow organizations to run all of their SAP applications in the cloud, with the resulting benefits of elasticity and availability. I have a more detailed briefing on HANA Enterprise Cloud this afternoon.

Their third quantum leap in the past three years is user experience, culminating in today’s launch of Fiori, a new user interface that brings the aesthetic of consumer UI — including mobile interfaces — to enterprise software. We’ll be hearing more about this in tomorrow’s keynote with Vishal Sikka.

By the way, you can watch the keynotes live and replays of many sessions here; I confess to having watched this morning’s keynote online from my hotel room in order to have reliable wifi for researching while I watched and wrote this post.

Process Intelligence With @alanrick

I met up with the NetWeaver BPM product management team and sat in on a session given by Alan Rickayzen of SAP and their customer King Tantivejkul of Colgate-Palmolive on putting intelligence into processes. This wasn’t about process automation — it was assumed that you have some sort of process automation in some system already, which constitutes the instrumentation on the processes — but rather taking all of the process events from a heterogeneous collection of systems and analyzing them in the aggregate in order to drive and support decision-making.

Colgate funnels all of their data from their global operations through a master data hub to their SAP back-end, including financials, materials, customer and reference data. SAP’s Business Suite ERP software is great for crunching data, but not so great at visualizing it — Colgate was using some hard-coded monthly reports that showed a few metrics, but revealed little about the process itself — so Colgate signed up for the operational process intelligence (OPINT) ramp-up (first customer release) to help them identify potential issues and bottlenecks in the process. They don’t have anything to show yet, but seem pretty excited about what they can get out of it.

OPINT, built on HANA, provides a more responsive and flexible view of process metrics. Without writing any Java or ABAP code, you can put together a dashboard that shows metrics from multiple systems, since HANA is acting as a process event warehouse for Business Workflow and NetWeaver BPM process events as well as custom processes made visible via Process Observer. In the future, they’ll be adding other data sources, so you can pull in process models and event data from other systems. The HANA Studio design environment allows these processes to be imported from the back-end systems and represented as BPMN; events in these processes can then be mapped to different phases of a business scenario in order to generate the dashboard.
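
To make the “process event warehouse” idea concrete, here’s a rough sketch of the kind of aggregation that could sit behind a phase-duration dashboard widget. This is my own illustration rather than OPINT’s actual schema or code; the PROCESS_EVENTS table and its columns are invented for the example, and SECONDS_BETWEEN is assumed to be available as a HANA SQL timestamp function:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhaseDurationWidget {
    // Hypothetical event table: one row per process event, keyed by process
    // instance and the phase of the business scenario it was mapped to.
    private static final String PHASE_DURATION_SQL =
        "SELECT PHASE, AVG(DURATION_SEC) AS AVG_SEC, COUNT(*) AS INSTANCES " +
        "FROM ( " +
        "  SELECT INSTANCE_ID, PHASE, " +
        "         SECONDS_BETWEEN(MIN(EVENT_TS), MAX(EVENT_TS)) AS DURATION_SEC " +
        "  FROM PROCESS_EVENTS " +
        "  GROUP BY INSTANCE_ID, PHASE " +
        ") PHASE_TIMES " +
        "GROUP BY PHASE ORDER BY AVG_SEC DESC";

    /** Prints the average time spent in each process phase across all instances. */
    public static void printPhaseDurations(Connection conn) throws Exception {
        try (PreparedStatement stmt = conn.prepareStatement(PHASE_DURATION_SQL);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%-20s avg %.1fs over %d instances%n",
                        rs.getString("PHASE"),
                        rs.getDouble("AVG_SEC"),
                        rs.getLong("INSTANCES"));
            }
        }
    }
}
```

The point of the OPINT tooling is that a solution developer assembles this kind of view graphically rather than hand-writing queries, but the underlying idea is the same: process events from heterogeneous systems land in one in-memory store that can be sliced on demand.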

Predictive analytics are built in, as you might expect given the capabilities of HANA, allowing for forecasting of whether specific KPIs and milestones will be missed. As we saw at IBM Impact a couple of weeks ago, predictive process analytics are becoming big for high-value process instances: it’s not enough to know whether you’re meeting a specific KPI right now, you need to know how the process is going to roll out through its entire lifecycle.
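
To show the shape of that lifecycle question (though not how HANA’s predictive algorithms actually work), here’s a naive hand-rolled sketch that projects an instance’s completion time from historical average phase durations and flags a likely SLA miss; the names and the simple averaging model are my own assumptions:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;

public class SlaForecastSketch {
    /**
     * Projects when a process instance will finish by summing historical average
     * durations of the phases it still has to pass through, then compares that
     * projection to the SLA deadline. Real predictive analytics would use richer
     * models (regression, classification on instance attributes, and so on); this
     * only illustrates forecasting the whole lifecycle rather than checking the
     * current KPI in isolation.
     */
    public static boolean likelyToMissSla(Instant now,
                                          Instant slaDeadline,
                                          List<String> remainingPhases,
                                          Map<String, Duration> avgPhaseDuration) {
        Duration projectedRemaining = Duration.ZERO;
        for (String phase : remainingPhases) {
            projectedRemaining = projectedRemaining.plus(
                    avgPhaseDuration.getOrDefault(phase, Duration.ZERO));
        }
        Instant projectedCompletion = now.plus(projectedRemaining);
        return projectedCompletion.isAfter(slaDeadline);
    }
}
```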

The dashboard widgets that we saw in a short video clip look completely adequate: different data visualizations, colors to denote states, KPIs and drilldowns. No big UI innovations, but the real gold here is in the HANA analytics going on behind the scenes, and the ease with which a solution developer can create a dashboard view of the HANA data. Furthermore, this runs completely on HANA: HANA is the database, the analytics engine and the app server, making it a bit easier to deploy than some other analytics solutions. This is big data applied to process, and it’s fair to say that this combination is going to be significant for the future of BPM.

Back At SAPPHIRENOW – Day 1 Keynote

It’s been a couple of years since I last attended SAP’s huge SAPPHIRE NOW conference, but this week I’m here with my 20,000 closest friends at the Orlando Convention Center (plus another 80,000 watching online) to get caught up. The conference kicked off with a keynote from Bill McDermott, SAP’s co-CEO, and it’s all about HANA and cloud: everything from SAP now runs on HANA, which, combined with their cloud platforms, realizes the dream of realtime, predictive supply chains. HANA is also at the heart of how SAP is addressing social enterprise functionality, allowing a company to analyze a flood of consumer social data to find what’s relevant.

They highlighted some of their sports-related customers’ applications — which definitely allowed for some good lead-in video — with executives from Under Armour, the San Francisco 49ers and the NBA. In part, sports applications are about helping teams play better and manage their talent through play/player data analysis (think Moneyball), but they are also about customer engagement online and in the stadium. The most traditional usage of SAP on the panel is with Under Armour, which manufactures sportswear and sports-related biometric devices, but their incredible growth means that they need enterprise systems that they won’t outgrow. An interesting new industry vertical focus for SAP.

The keynote finished with Bob Calderoni, CEO of Ariba (recently acquired by SAP), talking about how cloud — in the form of private business networks, of course — drives productivity. Good focus, since too often the current technology buzzwords (social, mobile, cloud) are discussed purely as the end rather than the means, and we can lose sight of how these can make us more productive and efficient, as well as fully buzzword-enabled.

As usual, wifi in the keynote area is impossible, and since I’m tablet-only, I couldn’t even plug into the hard-wired internet that they provided for us guests of Global Communications — I’m not the only one in this section with a tablet rather than a laptop, so I imagine that they’ll have to do something in the future to allow the media to consume and publish during the keynote. T-Mobile’s iPhone coverage is resolutely stuck at EDGE in this area, so I can’t even reliably set up a hotspot, although that would just contribute to the wifi problems. The WordPress Android app works fine offline, however, so I was able to take notes and publish later.

OpenText EIMDay Toronto, Financial Services Session

After lunch at the Toronto OpenText EIM Day, Catharine MacKenzie of the Mutual Fund Dealers Association talked about how they’re using OpenText MBPM (from the Metastorm acquisition). She spoke on an OpenText webinar last year, and I was interested in how they’ve progressed since then.

The MFDA is very process-based, since they’re a regulatory body, and although their policies don’t change that often, the processes used to deal with members and policies are constantly being improved. There was no packaged solution for their regulatory processes, and the need to have process flexibility without a full-on custom solution (which was beyond their budget and IT capabilities) led them to BPM. As I described in the post about the webinar (linked above), they started with four processes including compliance and enforcement, and sped through the implementation of several other processes through 2012. Although she stated during the webinar that they would be implementing five new processes in 2012, most of that has been pushed to 2013, in part (it appears) because of a platform upgrade to MBPM 9.

She pointed out that everyone in MFDA is using BPM for internal administrative processes, such as booking time off, as well as for the member-facing processes; for many of these processes, the users don’t even know that they’re using BPM. They’re also an OpenText eDocs customer, so can present content within processes, although apparently they have had to do a lot of that integration work themselves.

As for benefits, they’re seeing a huge decrease in development and deployment time compared to the custom applications that they build in Visual Studio, with process versioning and auditing built in. They’ve had challenges around having the business own the processes, rather than IT, while maintaining good process design and disciplined testing; the MBPM upgrade and migration is also taking longer than expected, hence delaying some of their planned process implementations. This is an interesting result against the backdrop of this morning’s customer keynote about major system upgrades: an upgrade that requires data migration and custom application refactoring is almost always going to cause delays in a previously-defined schedule of roll-outs, but it is very necessary for setting the stage for future functionality.

I’m skipping out for the rest of the afternoon to get back to my desk, but this has been a good opportunity to get caught up on the entire OpenText product suite and talk to some of their local customers.

Disclosure: OpenText is a customer, for whom I recently did a webinar and related white paper, but I am not paid to be here today, nor for writing any of these blog posts.

OpenText EIMDay Toronto, Customer Keynotes

Following the company/product keynotes, we heard from two OpenText customers.

First up was Tara Drover from Hatch Engineering, a Canadian engineering firm with 11,000 employees worldwide. They have OpenText Content Server on 10 corporate instances containing 32 million documents for more than 37,000 projects, almost half at their corporate headquarters in the Toronto area. They use it for project documentation, but also for a variety of other administrative and management documents. It appears that they have configured and customized Content Server, and built add-ons, to be the core of their corporate information store. They’ve been using Content Server since 2002 (v9.1), and have upgraded through v9.5 (including “de-customization”, a term and philosophy that I adore), v9.7.1 and v10. The latest upgrade, to CS10, is the one that she focused on in her presentation. Their drivers for the upgrade were to move to a 64-bit platform for scalability and performance reasons, to get off v9.7.1 before support ended, and to set the stage for some of the features in CS10: facets and columns, an improved search engine, and multilingual support. However, they wanted to keep the UI as similar as possible, providing more of a back-end upgrade as a platform for growth rather than a radical user experience change.

They started in March 2012 with strategy, change assessment and planning, then continued on to environmental assessment, development and testing, people change management and their first deployment in July 2012. Their readiness assessment identified that they first had to update their Windows Server and SQL Server instances (to 2008 — hardly cutting edge), and showed some of the changes to the integration points with other Hatch systems. As part of their development and testing, they developed an 80-page deployment guide, since this would have to roll out to all of the Content Server sites worldwide, including estimates of times required for the upgrade in order to avoid downtime during local business hours, and plans for using local staff for remote upgrades. During development and testing, they simultaneously ran the v9.7.1 production environment on the upgraded Windows Server platform, plus a CS10 development environment and a separate CS10 test/staging environment where the production processes were cloned and tested.

If you’re upgrading a single Content Server instance, you’re unlikely to go to this level of complexity in your upgrade plans and rollout, but for multiple sites across multiple countries (and languages), it’s a must. In spite of all the planning, they did have a few hiccups and some production performance issues, in part because they didn’t have a load testing tool. From their first rollout in Santiago, Chile in July 2012, followed by a few months of tuning and testing, they’re now rolling out about one site per month. They’re seeing improvements in the UI and search functions, and are ready to start thinking about how to use some of the new CS10 features.

They had a number of success factors that are independent of whatever product that you’re upgrading, such as clearly defined scope, issue management, and upgrading the platform without adding too many new user features all at once.

The second customer keynote was from Robin Thompson, CIO for the shared services sector of the Government of Ontario. They had some pretty serious information and records management issues, pretty much leaving the retention and disposition of information in the hands of individuals, with little sharing of information between ministries. To resolve this, they have developed a framework for information management over the next several years, targeted at improving efficiencies and improving services to constituents. Their guiding principles are that information needs to be protected and secure, managed, governed, accessible and relevant, and valued; in other words, the information needs to be managed as business records. Their roadmap identified an enterprise records and document management service as a necessary starting point, which they have deployed (based on OpenText) in the past year to the Social Services Ministries, with six more areas queued up and ready to implement. In addition to deploying in more ministries, they are also expanding functionality, bringing email records management to the Ministry of Finance later this year. This information management framework and vision is long overdue for the Ontario government, and hopefully will lead to better services for those of us who live here.

She shared a number of lessons that they learned along the way: the importance of change management and stakeholder communication; the time required for developing data architecture and taxonomy; the balance between overly-rigid standardization and too many customized instances; the need for external and internal resources to develop and maintain a records/document management practice; and the importance of governance. They’ve focused on an incremental approach, and have allowed the business leaders to pull the functionality rather than have IT push it into the business areas.

OpenText EIM Day Toronto, Company/Product Keynotes

I always try to drop in on vendor events that happen in my own backyard, so today I’m at OpenText’s EIM Day in Toronto. OpenText is a success story in the Canadian software space, focused on enterprise information management, which includes content and process management. They have grown significantly through acquisitions, acquiring (somewhat controversially) two different BPM vendors (Metastorm and Global 360) to add to their home-grown content management capabilities.

Following a welcome from Jim McIntyre, the regional VP of sales, we heard a keynote from Mark Barrenechea, CEO. Barrenechea was with Oracle in the past, and obviously has continued to leverage those strong ties into the ERP market by integrating and partnering with SAP and other ERP vendors. He sees information-based strategies as the direction of business-technology transformation today, providing support for all of the unstructured information that lives alongside the structured information in ERP and other line-of-business systems. He outlined several transformations going on in the information enterprise: paper to digital; hierarchical to social; on premise to hybrid cloud; fragmented to managed, secured and governed; products to platforms; and ERP to EIM. He claimed that they will be able to replace multiple different products with a single platform from OpenText covering everything from capture to archive — capture, content management, process management, customer experience management (CEM) — although it appears that’s not yet released, and it’s not clear whether this will be a product branding exercise rather than a fully integrated platform.

This appeared to be a fairly conservative audience in terms of product adoption — I sat with someone who was just in the process of converting their LiveLink installation to Content Server, which I think is a bit overdue — so I’m not sure how well the message about their Tempo social collaboration platform went down, but OpenText will be pushing it later this year by using it for customer support and service interactions. What did go over well was Barrenechea’s scare tactic about Dropbox and Google Docs licensing — “did you know that they have the right to use your content for whatever purposes that they want?” — as a lead-in to the need for content security.

Barrenechea wrapped up with a product overview across their main product categories:

  • ECM, with Content Server, Tempo Box (an enterprise Dropbox-like product) and Archive (storage management)
  • CEM, with Tempo Social, DAM (digital asset management), WEM (web experience management) and CCM (customer communication management) making up the social suite
  • BPM, with Assure, MBPM and targeted apps making up their Smart Process Apps
  • iX (information exchange), with Secure iX, EDI and MFT (managed file transfer) providing secure transactions
  • DX (discovery), with InfoFusion and Semantic Navigation, indicating OpenText’s reentry into enterprise search; keep in mind that OpenText was a spin-off from a University of Waterloo project for indexing and searching the Oxford English Dictionary, making search part of their DNA

This still seems like a lot of products to me, many of which came through acquisitions hence may have quite different internal architecture. Although Barrenechea made claims that these are integrated, I did hear the qualifier “…on some level”. Hopefully they are integrated in more than his slide deck.

We had a deeper product view with Lynn Elwood, VP of product marketing, walking us through a (fictional) customer use case for a tablet manufacturer:

  • Creating and publishing product web pages using WEM (this functionality originated with the Vignette acquisition), including a review/approval cycle for the content before publication, plus cross-platform publication to update Facebook and Twitter with the newly published information, as well as mobile-optimized sites. This also gathers metrics and KPIs about the published information, including user actions, sentiment, ratings and comments.
  • Customer communications using StreamServe for customizing any customer communications, including adding customer-specific messages to invoices and letters.
  • Dynamic case management for help desk and product complaints/returns, which can include scanned documents with content captured automatically and added as case metadata. Mobile device support and Tempo Box allows a customer to take a photo of damaged goods and upload for the CSR to review.
  • Process analytics with ProVision (previously acquired by Metastorm, which was then acquired by OpenText) to model and simulate processes for improvement.
  • Records management within their Content Server product. This includes direct integration with Microsoft Outlook, so that emails can be manually dragged (or automatically moved) into folders that are managed by Content Server, hence can be part of a case and controlled by records management. There’s a lot of automated classification built in, so that content can be automatically found, classified and managed according to policies and usage.
  • Content storage management using their Archive product, which includes media staging and access control (including geographic constraints) based on policies.

A good overview of the product suite, but I’m still left with the feeling that this is a huge grab-bag of partially integrated components based on a variety of acquisitions over the years. They are definitely making progress in bringing them together, and the sort of use cases that Elwood showed us will help customers to understand the range of capabilities that OpenText can provide. As long as the products are individually capable and moving towards a common vision in terms of architecture, integration and user experience, there is an advantage to dealing with a single vendor for an array of related information management functionality: after all, that’s the same reason that many enterprises buy IBM products, in spite of an equally fragmented product acquisition and development strategy.