SAP NetWeaver Business Warehouse with HANA

Continuing at the SAP World Tour in Toronto today, I went to a breakout innovation session on NetWeaver Business Warehouse (BW) and HANA, with Steve Holder from their BusinessObjects center of excellence. HANA, in case you’ve been hiding from all SAP press releases in the past two years, is an analytic appliance (High-performance ANalytic Appliance, in fact) that includes hardware and in-memory software for real-time analysis of non-aggregated information (i.e., not complex event processing). Previously, you would have had to move your BW data (which had probably already been ETL’d from your ERP to BW) over to HANA in order to take advantage of that processing power; now, you can make HANA the persistence layer for BW instead of a relational database such as Oracle or DB2, so that the database behind BW becomes HANA. All the features of BW (such as cubes and analytic metadata) can be used just as they always could be, and any customizations such as custom extractors already done on BW by customers are supported, but moving to an in-memory database provides a big uplift in speed.

Previously, BW provided data modeling, an analytical/planning engine, and data management, with the data stored in a relational database. Now, BW only provides the data modeling, and everything else is pushed into HANA for in-memory performance. What sort of performance increases? Early customer pilots are seeing 10x faster data loading, 30x faster reporting (3x faster than BW Accelerator, another SAP in-memory analytics option), and a 20% reduction in administration and maintenance (no more RDBMS admins and servers). This is before the analytics have been optimized for in-memory: this is just a straight-up conversion of their existing data into HANA’s in-memory columnar storage. Once you turn on in-memory InfoCubes, you can eliminate physical cubes in favor of virtual cubes; there are a lot of other optimizations that can be done by eventually refactoring to take advantage of HANA’s capabilities, allowing for things such as interfacing to predictive analytics, and providing linear scaling of data, users and analysis.
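
To make the columnar part concrete, here’s a small illustrative sketch (plain Python, nothing to do with HANA’s actual engine or APIs; the data and names are invented) of why a column-oriented, in-memory layout favors the aggregation-heavy queries that BW workloads are full of:

```python
# Minimal sketch (not HANA code): why a columnar, in-memory layout helps
# aggregation-heavy BW-style queries. All names and data are hypothetical.

# Row-oriented storage: each record is a dict; an aggregate must touch every field.
rows = [
    {"region": "EMEA", "product": "A", "revenue": 120.0},
    {"region": "APJ",  "product": "B", "revenue": 80.0},
    {"region": "EMEA", "product": "B", "revenue": 200.0},
]

def sum_revenue_rows(records):
    # Scans whole records even though only one attribute is needed.
    return sum(r["revenue"] for r in records)

# Column-oriented storage: one contiguous array per attribute.
columns = {
    "region":  ["EMEA", "APJ", "EMEA"],
    "product": ["A", "B", "B"],
    "revenue": [120.0, 80.0, 200.0],
}

def sum_revenue_columnar(cols):
    # Touches only the single column the query needs; columns also compress
    # well and stay cache-friendly, which is where the in-memory speedup comes from.
    return sum(cols["revenue"])

assert sum_revenue_rows(rows) == sum_revenue_columnar(columns) == 400.0
```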

This is not going to deprecate BW Accelerator, but it provides options for moving forward, including a migration path from BWA to BW on HANA. BWA, however, provides performance increases for only a subset of BW data, so you can be sure that SAP will be encouraging people to move from BWA to BW on HANA.

A key message is that customers’ BW investments are completely preserved (although not time spent on BWA), since this is really just a back-end database conversion. Eventually, the entire Business Suite ERP system will run on top of HANA, so that there will be no ETL delay in moving operational data over to HANA for analysis; presumably, this will have the same sort of transparency to the front-end applications as does BW on HANA.

Sal Vella on Technologies for a Smarter Planet at CASCON2011

I attended the keynote at IBM’s CASCON conference in Toronto today, where Judy Huber, who directs the IBM Canada software lab, kicked off the session by reminding us that IBM software development has been happening in Canada since 1967 and continues to grow, and of the importance of collaboration between the research and industry communities. She introduced Joanna Ng, who is the head of research at the lab, to congratulate the winners of the most influential paper from CASCON 2001 (that date is not a typo, it’s a 10-year thing): Svetlana Kiritchenko and Stan Matwin for “Classification with Co-Training” (on email classification).

The main speaker was Sal Vella, VP of architecture and technology within the IBM software group, talking about technologies to build solutions for a smarter planet. Fresh from the IOD conference two weeks ago, I was all primed for this; there was a great booth at IOD that highlighted “smarter technology” with some interesting case studies. IBM’s smarter planet initiative is about technologies that allow us to do things that we were never able to do before, much of which is based on the immeasurable volume of data constantly produced by people, devices and systems. Consider electricity meters, like the one that you have in your home: it used to be that these were read once per month (if you were lucky) by a human, and the results entered into a billing system. Now, smart meters are read every 15 minutes to allow for time of use billing that rewards people for shifting their electricity usage away from peak periods. Analytics are being used in ways that they were never used before, and he discussed the popular Moneyball case of building a sports team based on player statistics. He also spoke about an even better use of analytics to create “solutions for a partying planet”: a drinks supplier predicting sports game outcomes to ensure that the pubs frequented by the fans of the teams most likely to win had enough alcohol on hand to cover the ensuing parties. Now that’s technology used for the greater good. ;)

There are a lot of examples of big data and analytics that were previously unmanageable that are now becoming reasonable targets, most of which could be considered event-based: device instrumentation, weather data, social media, credit card transactions, crime statistics, traffic data and more. There are also some interesting problems in determining identity and relationships: figuring out who people really are even when they use different versions of their name, and who they are connected to in a variety of different ways that might indicate potential for fraud or other misrepresentation. Scary and big-brotherish to some, but undeniably providing organizations (including governments) with deeper insights into their customers and constituents. If those who complain about governments using this sort of technology “against” them would learn how to use it themselves, the tables might be turned as we gain insight into how well government is providing services to us.

We heard briefly from Charles Gauthier, acting director at the Institute for Information Technology at the National Research Council (NRC) Canada. NRC helped to create the CASCON conference 21 years ago, and continues to sponsor it; they support research in a number of areas that overlap with CAS and the other researchers and exhibitors presenting here.

The program chairs, Marin Litoiu of York University and Eleni Stroulia of University of Alberta, presented awards for the two outstanding papers from the 22 papers at the conference:

  • “Enhancing Applications Robustness in Cloud Data Centres” by Madalin Mihailescu, Andres Rodriguez and Cristiana Amza of University of Toronto, and Dmitrijs Palcikovs, Gabriel Iszlai, Andrew Trossman and Joanna Ng of IBM Canada
  • “Parallel Data Cubes on Multi-Core Processors with Multiple Disks” for best student paper, by Hamidreza Zaboli and Frank Dehne of Carleton University

We finished with a presentation by Stan Matwin of University of Ottawa, co-author of the most influential paper presentation on email classification from the CASCON of 10 years past (his co-author is due to give birth on Wednesday, so she decided not to attend). It was an interesting look at how the issue of email classification has continued to grow in the past 10 years; systems have become smarter since then, and we have automated spam filtering as well as systems for suggesting actions to take (or even taking actions without human input) on a specific message. The email classification that they discussed in their paper was based on classification systems where multiple training sets were used in concert to provide an overall classification for email messages. For example, two messages might both use the word “meeting” and a specific time in the subject line, but one might include a conference room reference in the body while the other references the local pub. Now, I often have business meetings in the pub, but I understand that many people do not, so I can see the value of such a co-training method. In 2001, they came to the conclusion that co-training can be useful, but is quite sensitive to its parameters and the learning algorithms used. Email classification has progressed since then: Bayesian (and other) classifiers have improved drastically, data representation is richer (through the use of meta formats and domain-specific enrichment) to allow for easier classification, social network and other information can be correlated, and there are specific tailored solutions for some email classification applications such as legal discovery. Interesting to see this sort of perspective on a landmark paper in the field of email classification.
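
For anyone who wants a refresher on what co-training actually looks like, here’s a toy sketch in the spirit of the idea, and definitely not Kiritchenko and Matwin’s code; the messages, labels and the use of scikit-learn naive Bayes classifiers are all my own stand-ins:

```python
# Toy co-training sketch: two "views" of each email (subject vs. body) train
# separate classifiers, and each view proposes labels for unlabeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled = [
    ("meeting 3pm", "agenda attached, conference room booked", "work"),
    ("meeting 6pm", "see you at the pub for a pint", "social"),
]
unlabeled = [
    ("meeting 10am", "slides and conference room details"),
    ("friday plans", "pub quiz night, first round on me"),
]

y = [label for _, _, label in labeled]
vec_subj, vec_body = CountVectorizer(), CountVectorizer()
clf_subj = MultinomialNB().fit(vec_subj.fit_transform([s for s, _, _ in labeled]), y)
clf_body = MultinomialNB().fit(vec_body.fit_transform([b for _, b, _ in labeled]), y)

def most_confident(clf, vec, texts):
    # The unlabeled message this view is most sure about, plus its predicted label.
    probs = clf.predict_proba(vec.transform(texts))
    idx = probs.max(axis=1).argmax()
    return idx, clf.classes_[probs[idx].argmax()]

# One co-training round: each view nominates a confidently-labeled message for
# the shared pool; a full loop would retrain both classifiers and repeat.
i, label_i = most_confident(clf_subj, vec_subj, [s for s, _ in unlabeled])
j, label_j = most_confident(clf_body, vec_body, [b for _, b in unlabeled])
print("subject view labels:", unlabeled[i][0], "->", label_i)
print("body view labels:", unlabeled[j][1], "->", label_j)
```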

I’m not sticking around for any of the paper presentations, since the ones later today are a bit out of my area of interest, and I’m booked the rest of the week on other work. However, I have the proceedings so will have a chance to look over the papers.

NSERC BI Network at CASCON2011 (Part 2)

The second half of the workshop started with Renée Miller from University of Toronto digging into the deeper database levels of BI, and the evolving role of schema from a prescriptive role (time-invariant, used to ensure data consistency) to a descriptive role (describe/understand data, capture business knowledge). In the old world, a schema was meant to reduce redundancy (via Boyce-Codd normal form), whereas the new world schema is used to understand data, and the schema may evolve. There are a lot of reasons why data can be “dirty” – my other half, who does data warehouse/BI for a living, is often telling me about how web developers create their operational database models mostly by accident, then don’t constrain data values at the UI – but the fact remains that no matter how clean you try to make it, there are always going to be operational data stores with data that needs some sort of cleansing before effective BI. In some cases, rules can be used to maintain data consistency, especially where those rules are context-dependent. In cases where the constraints are inconsistent with the existing data (besides asking the question of how that came to be), you can either repair the data, or discover new constraints from the data and repair the constraints. Some human judgment may be involved in determining whether the data or the constraint requires repair, although statistical models can be used to understand when a constraint is likely invalid and requires repair based on data semantics. In large enterprise databases as well as web databases, this sort of schema management and discovery could be used to identify and leverage redundancy in data to discover metadata such as rules and constraints, which in turn could be used to modify the data in classic data repair scenarios, or modify the schema to adjust for a changing reality.
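
A back-of-the-envelope illustration of that data-versus-constraint repair decision (my own toy example, not Miller’s method; the functional dependency, the records and the 10% threshold are all invented):

```python
# Check a candidate functional dependency zip -> city against the data, and use
# the violation rate as a crude signal for whether to repair the data or the constraint.
from collections import defaultdict

records = (
    [{"zip": "M5V", "city": "Toronto"}] * 8
    + [{"zip": "M5V", "city": "Torotno"}]    # probably a data entry error
    + [{"zip": "K1A", "city": "Ottawa"}] * 3
)

def fd_violations(rows, lhs, rhs):
    # Group rhs values by lhs value; any lhs value mapping to more than one rhs
    # value violates the candidate dependency lhs -> rhs.
    groups = defaultdict(set)
    for r in rows:
        groups[r[lhs]].add(r[rhs])
    return {k: v for k, v in groups.items() if len(v) > 1}

violations = fd_violations(records, "zip", "city")
violation_rate = sum(len(v) - 1 for v in violations.values()) / len(records)

if violation_rate < 0.10:
    # Few exceptions: the constraint is probably valid, so repair the data
    # (e.g. map each zip to its most frequent city value).
    print("repair data:", violations)
else:
    # Widespread exceptions: the data may reflect reality, so repair (relax or
    # contextualize) the constraint instead.
    print("repair constraint: zip -> city does not hold")
```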

Sheila McIlraith from University of Toronto presented on a use-centric model of data for customizing and constraining processes. I spoke last week at Building Business Capability on some of the links between data and processes, and McIlraith characterized processes as a purposeful view of data: processes provide a view of the data, and impose policies on data relative to some metrics. Processes are also, as she pointed out, a delivery vehicle for BI – from a BPM standpoint, this is a bit of a trivial publishing process – to ensure that the right data gets to the right stakeholder. The objective of her research is to develop a business process modeling formalism that treats data and processes as first class citizens, and supports specification of abstract (ad hoc) business processes while allowing the specification of stakeholder policies, preferences and priorities. Sounds like data+process+rules to me. The approach is to specify processes as flexible templates, with policies as further constraints; although she represents this as allowing for customizable processes, it really just appears to be a few pre-defined variations on a process model with a strong reliance on rules (in linear temporal logic) for policy enforcement, not full dynamic process definition.

Lastly, we heard from Rock Leung from SAP’s academic research center and Stephan Jou from IBM CAS on industry challenges: SAP and IBM are industry partners to the NSERC Business Intelligence Network. They listed 10 industry challenges for BI, but focused on big data, mobility, consumable analytics, and geospatial and temporal analytics.

  • Big data: Issues focus on volume of data, variety of information and sources, and velocity of decision-making. Watson has raised expectations about what can be done with big data, but there are challenges on how to model, navigate, analyze and visualize it.
  • Consumable analytics: There is a need to increase usability and offer new interactions, making the analytics consumable by everyone – not just statistical wizards – on every type of device.
  • Mobility: Since users need to be connected anywhere, there is a need to design for smaller devices (and intermittent connectivity) so that information can be represented effectively and remain seamless with its representation on other devices. Both presenters said that there is nothing that their respective companies are doing where mobile device support is not at least a topic of conversation, if not already a reality.
  • Geospatial and temporal analytics: Geospatial data isn’t just about Google Maps mashups any more: location and time are being used as key constraints in any business analytics, especially when you want to join internal business information with external events.

They touched briefly on social in response to a question (it was on their list of 10, but not the short list), seeing it as a way to make decisions better.

For a workshop on business intelligence, I was surprised at how many of the presentations included aspects of business rules and business process, as well as the expected data and analytics. Maybe I shouldn’t have been surprised, since data, rules and process are tightly tied in most business environments. A fascinating morning, and I’m looking forward to the keynote and other presentations this afternoon.

NSERC BI Network at CASCON2011 (Part 1)

I only have one day to attend CASCON this year due to a busy schedule this week, so I am up in Markham (near the IBM Toronto software lab) to attend the NSERC Business Intelligence Network workshop this morning. CASCON is the conference run by IBM’s Centers for Advanced Studies throughout the world, including the Toronto lab (where CAS originated), as a place for IBM researchers, university researchers and industry to come together to discuss many different areas of technology. Sometimes, this includes BPM-related research, but this year the schedule is a bit light on that; however, the BI workshop promises to provide some good insights into the state of analytics research.

Eric Yu from University of Toronto started the workshop, discussing how BI can enable organizations to become more adaptive. Interestingly, after all the talk about enterprise architecture and business architecture at last week’s Building Business Capability conference, that is the focus of Yu’s presentation, namely, that BI can help enterprises to better adapt and align business architecture and IT architecture. He presented a concept for an adaptive enterprise architecture that is owned by business people, not IT, and geared at achieving measurable business success. He discussed modeling variability at different architectural layers, and the traceability between them, and how making BI an integral part of an organization – not just the IT infrastructure – can support EA adaptability. He finished by talking about maturity models, and how a closed loop deployment of BI technologies can help meet adaptive enterprise requirements. Core to this is the explicit representation of change processes and their relationship to operational processes, as well as linking strategic drivers to specific goals and metrics.

Frank Tompa from University of Waterloo followed with a discussion of mapping policies (from a business model, typically represented as high-level business rules) to constraints (in a data model) so that these can be enforced within applications. My mind immediately went to why you would be mapping these to a database model rather than a rules management system; his view seems to be that a DBMS is what monitors at a transactional level and ensures compliance with the business model (rules). His question: “how do we make the task of database programming easier?” My question: “why aren’t you doing this with a BRMS instead of a DBMS?” Accepting his premise that this should be done by a database programmer, the approach is to start with object definitions, where an object is a row (tuple) defined by a view over a fixed database schema, and represents all of the data required for policy making. Secondly, consider the states that an object can assume: an object x is in state S if its attributes satisfy S(x). An object can be in multiple states at once; the states seem to be more like functions than states, but whatever. Thirdly, the business model has to be converted to an enforcement model through a sort of process model that also includes database states; really more of a state diagram that maps business “states” to database states, with constraints on states and state transitions denoted explicitly. I can see some value in the state transition constraint models in terms of representing some forms of business rules and their temporal relationships, but his representation of a business process as a constraint diagram is not something that a business analyst is ever going to read, much less create. However, the role of the business person seems to be restricted to “policy designer” listing “states of interest”, and the goal of this research is to “form a bridge between the policy manager and the database”. Their future work includes extracting workflows from database transaction logs, which is, of course, something that is well underway in the BPM data mining community. I asked (explicitly to the presenter, not just snarkily here in my blog post) about the role of rules engines: he said that one of the problems was in vocabulary definition, which is often not done in organizations at the policy and rules level; by the time things get to the database, the vocabulary is sufficiently constrained that you can ensure that you’re getting what you need. He did say that if things could be defined in a rules engine using a standardized vocabulary, then some of the rules/constraints could be applied before things reached the database; there does seem to be room for both methods as long as the business rules vocabulary (which does exist) is not well-entrenched.
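
Here’s roughly how I picture the “states as predicates” part, as a hedged sketch rather than Tompa’s formalism; the object, state predicates and allowed transitions below are all made up:

```python
# An object is a tuple of attributes; each "state" S is just a predicate S(x),
# and the enforcement model whitelists which state transitions a transaction
# may cause. Everything here is a hypothetical illustration.
order = {"total": 12000, "approved_by": None, "shipped": False}

states = {
    "needs_approval": lambda x: x["total"] > 10000 and x["approved_by"] is None,
    "approved":       lambda x: x["approved_by"] is not None,
    "shipped":        lambda x: x["shipped"],
}

# Allowed state transitions; anything else violates policy.
allowed = {("needs_approval", "approved"), ("approved", "shipped")}

def current_states(x):
    # An object can satisfy several state predicates at once.
    return {name for name, pred in states.items() if pred(x)}

def transition_allowed(before, after):
    before_states = current_states(before)
    gained = current_states(after) - before_states
    # Every newly entered state must be reachable from some state the object
    # was already in, according to the whitelist.
    return all(any((b, g) in allowed for b in before_states) for g in gained)

approved = dict(order, approved_by="alice")
print(transition_allowed(order, approved))        # True: approval is allowed
shipped_early = dict(order, shipped=True)
print(transition_allowed(order, shipped_early))   # False: can't ship before approval
```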

Jennifer Horkoff from University of Toronto was up next discussing strategic models for BI. Her research is about moving BI from a technology practice to a decision-making process that starts with strategic concerns, generates BI queries, interprets the results relative to the business goals and decides on necessary actions. She started with the OMG Business Motivation Model (BMM) for building governance models, and extended that to a Business Intelligence Model (BIM), or business schema. The key primitives include goals, situations (can model SWOT), indicators (quantitative measures), influences (relationships) and more. This model can be used at the high-level strategic level, or at a more tactical level that links more directly to activities. There is also the idea of a strategy, which is a collection of processes and quality constraints that fulfill a root-level goal. Reasoning can be done with BIMs, such as determining whether a specific strategy can fulfill a specific goal, and influence diagrams with probabilities on each link can be used to help determine decisions. They are using BIM concepts to model a case study with Rouge Valley Health System to improve patient flow and reduce wait times; results from this will be seen in future research.
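
To give a flavor of that reasoning, here’s a heavily simplified sketch of propagating indicator values up through weighted influence links to a root goal; this is my own toy, not the BIM formalism, and all goals, weights and indicator values are invented:

```python
# Indicators influence goals through weighted links; a root goal's satisfaction
# is a weighted combination of its children's satisfaction. Purely illustrative.
influences = {
    # goal: [(child goal or indicator, weight), ...]
    "reduce_wait_times": [("triage_throughput", 0.6), ("bed_availability", 0.4)],
    "triage_throughput": [("avg_triage_minutes_ok", 1.0)],
    "bed_availability":  [("occupancy_below_target", 1.0)],
}
# Indicator satisfaction levels in [0, 1], e.g. derived from BI queries.
indicators = {"avg_triage_minutes_ok": 0.8, "occupancy_below_target": 0.5}

def satisfaction(node):
    if node in indicators:
        return indicators[node]
    return sum(w * satisfaction(child) for child, w in influences[node])

print(satisfaction("reduce_wait_times"))   # 0.6*0.8 + 0.4*0.5 = 0.68
```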

Each of these presentations could have filled a much bigger time slot, and I could only capture a flavor of their discussions. If you’re interested in more detail, you can contact the authors directly (links to each above) to get the underlying research papers; I’ve always found researchers to be thrilled that anyone outside the academic community is interested in what they’re doing, and are happy to share.

We’re just at the mid-morning break, but this is getting long so I’ll post this and continue in a second post. Lots of interesting content, I’m looking forward to the second half.

Agile Predictive Process Platforms for Business Agility with @jameskobielus

James Kobielus of Forrester brought the concepts of predictive analytics to processes to discuss optimizing processes using the Next Best Action (NBA): using analytics and predictive models to figure out what you should do next in a process in order to optimize customer-facing processes.

As we heard in this morning’s keynote, agility is mandatory not just for competitive differentiation, but for basic business survival. This is especially true for customer-facing processes: since customer relationships are fragile and customer satisfaction is dynamic, the processes need to be highly agile. Customer happiness metrics need to be built into process design, since customer (un)happiness can be broadcast via social media in a heartbeat. According to Kobielus, if you have the right data and can analyze it appropriately, you can figure out what a customer needs to experience in order to maximize their satisfaction and your profits.

Business agility is all about converging process, data, rules and analytics. Instead of static business processes, historical business intelligence and business rules silos, we need to have real-time business intelligence, dynamic processes, and advanced analytics and rules that guide and automate processes. It’s all about business processes, but processes infused with agile intelligence. This has become a huge field of study (and implementation) in customer-facing scenarios, where data mining and behavioral studies are used to create predictive models on what the next best action is for a specific customer, given their past behavior as your customer, and even social media sentiment analysis.
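
As an entirely hypothetical sketch of the idea, next best action boils down to combining a predictive score with eligibility rules to rank candidate actions for a given customer; none of the fields, actions or thresholds below come from Kobielus or any particular vendor:

```python
# Simplified next-best-action: a predictive score (a stand-in churn probability)
# is combined with business rules to rank candidate actions for one customer.
customer = {"churn_risk": 0.72, "months_tenure": 26, "nps": 4, "open_complaint": True}

actions = [
    # (action, expected uplift if taken, eligibility rule)
    ("retention_discount", 0.30, lambda c: c["churn_risk"] > 0.6),
    ("loyalty_upgrade",    0.15, lambda c: c["months_tenure"] > 24),
    ("cross_sell_offer",   0.10, lambda c: not c["open_complaint"]),
    ("apology_callback",   0.20, lambda c: c["open_complaint"] and c["nps"] < 6),
]

def next_best_action(c):
    eligible = [(name, uplift) for name, uplift, rule in actions if rule(c)]
    # Rank by expected value: predicted risk of losing the customer times the
    # uplift the action is expected to deliver.
    return max(eligible, key=lambda a: c["churn_risk"] * a[1], default=None)

print(next_best_action(customer))   # ('retention_discount', 0.3)
```

In a real deployment, the churn score would come from a continuously retrained model and the rules from a decision management system, which is exactly the convergence of analytics, rules and process described above.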

He walked through a number of NBA case studies, including auto-generating offers based on a customer’s portal behavior in retail; tying together multichannel customer communications in telecom; and personalizing cross-channel customer interactions in financial services. These are based on coupling front and back-office processes with predictive analytics and rules, while automating the creation of the predictive models so that they are constantly fine-tuned without human intervention.

IBM IOD Keynote: Turn Insight Into Action

This is a big conference. We’re in the Mandalay Bay Events Center, which is a stadium that would probably hold a hockey rink, and although all the seats are not full, it’s a pretty big turnout. This is IBM’s centennial, which is a theme throughout the conference, and the opening session started with some key points in the history of IBM’s products. IBM might seem like a massive, slow-moving ship at times, but there is no doubt that they’ve been an innovator through the entire age of modern computing. I just hope to be seeing some of that innovation in their ECM and ACM products this week.

The keynote session was hosted by Katty Kay, a BBC news journalist in the Washington bureau, who added a lot of interesting business and social context to the presentations.

Jeff Jonas spoke about analytics, pointing out that with the massive amounts of data available to enterprises, enterprises are actually getting dumber because they’re not analyzing and correlating that data in context. He used a jigsaw puzzle metaphor: you don’t know what any particular piece means until you see it in relation to the others with which it fits. You also don’t need all of the pieces in the puzzle to understand the big picture: context accumulates with each new observation, and at some point, confidence improves while computational effort decreases.

He looked at two sides of analytics – sense and respond, and explore and reflect – and how they fit into the activity of achieving insight. If the keynotes are available online, definitely watch Jonas’ presentation: he’s funny and insightful in equal measure, and has a great example of a test he ran with jigsaw puzzles and human cognition. He went much too fast for me to keep up in these notes, and I’ll be watching it again if I can find it. The only problem was that his presentation ruined me for the rest of the keynotes, which seemed dull in comparison. 🙂

Sarah Diamond was up next to talk about the challenges facing financial institutions, and how analytics can support the transformation of these organizations by helping them to manage risk more effectively. She introduced a speaker from SunTrust, an IBM customer, who spoke about their risk management practices based around shared data warehousing and reporting services. Another SunTrust speaker then talked about how they use analytics in the context of other activities, such as workflow. A good solid case study, but not sure that this was worth such a big chunk of the main keynote.

Mike Rhodin spoke about how innovation across industries is opening new possibilities for business optimization, particularly where analytics create a competitive advantage. Analytics are no longer a nice-to-have, but an imperative for even staying in business: the performance gap between the winners and losers in business is growing, and is fueled in part by the expedient use of analytics to generate insights that allow for business optimization. Interestingly, marketing and finance are the big users of analytics; only 25% of HR leaders are using analytics to help them with hiring an effective workforce.

Robert LeBlanc discussed how the current state of information from everywhere, radical flexibility and extreme scalability impacts organizations’ information strategy, and challenged the audience to consider if their information strategy is bold enough to live in this new environment. Given that 30% of organizations surveyed reported that they don’t even know what to do with analytics, it’s probably safe to say that there are some decidedly meek information strategies out there. Information – both data and unstructured content – can come from anywhere, both inside and outside your organization, meaning that the single-repository dream is really just a fantasy: repositories need to be federated and integrated so that analytics can be applied on all of the sources where they live, allowing you to exploit information from everywhere. He pointed out the importance of leveraging your unstructured information as part of this.

The keynote finished with Arvind Krishna – who will be giving another full keynote later today – encouraging the audience to take the lead on turning insight into action. He summarized this week’s product announcements: DB2 Analytics Accelerator, leveraging Netezza; IMS 12; IBM Content and Predictive Analytics for Healthcare; IBM Case Manager v5.1, bringing together BPM and case management; InfoSphere MDM 10; InfoSphere Information Server 8.7; InfoSphere Optim Test Data Management Self Service Center; Cognos native iPad support; Cognos BI v10.1.1. He also announced that they closed the Algorithmics acquisition last week, and that they will be acquiring Q1 Labs for security intelligence and risk management. He spoke about their new products, InfoSphere BigInsights and InfoSphere Streams, which we’ll be hearing about more in tomorrow’s keynote.

SAP Run Better Tour: Business Analytics Overview

Dan Kearnan, senior director of marketing for business analytics, provided an overview of SAP’s business analytics in the short breakout sessions following the keynote. Their “run smarter” strategy is based on three pillars of knowing your business, deciding with confidence and acting boldly; his discussion of the “act boldly” part seemed to indicate that the round-tripping from data to events back to processes is more prevalent than I would have thought based on my previous observations.

We covered a lot of this material in the bloggers briefing a couple of weeks ago with Steve Lucas; he delved into the strategy for specific customers, that is, whether you’re starting with SAP ERP, SAP NetWeaver BW or non-SAP applications as input into your analytics.

He briefly addressed the events/process side of things – I think that they finally realized that when they bought Sybase, they picked up Aleri CEP with it – and their Event Insight solution is how they’re starting to deliver on this. They could do such a kick-ass demo using all of their own products here: data generated from SAP ERP, analyzed with BusinessObjects, events generated with Event Insight, and exception processes instantiated in NetWeaver BPM. NW BPM, however, seems to be completely absent from any of the discussions today.

He went through a number of the improvements in the new BI releases, including a common (and easier to use) user interface across all of the analytics products, and deep integration with the ERP and BW environments; there is a more detailed session this afternoon to drill into some of these.

I’m going to stick around to chat with a few people, but won’t be staying for the afternoon, so my coverage of the SAP Run Better Tour ends here. Watch the Twitter stream for information from others onsite today and at the RBT events in other cities in the days to come, although expect Twitter to crash spectacularly today at 1pm ET/10am PT when the iPad announcement starts.

Blogger/Analyst Session with Mark Aboud at SAP Run Better Tour

We had the chance for a small group of bloggers and analysts (okay, I was probably the only one with “blogger” on my name tag) to sit down with Mark Aboud, Managing Director of SAP Canada, and Margaret Stuart, VP for the Canadian BusinessObjects division. Since this was a roundtable Q&A, I’ll just list some of the discussion points.

  • 50% of SAP Canadian customers are small and medium businesses, sold through their partner network. ERP sales tend to be made through larger partners, whereas analytics are handled by a larger number of smaller partners as well.
  • Business ByDesign has only been launched in Canada within the past 60 days, making it difficult to tell much about the uptake here. There is one live production customer in Canada now, although they were not able to name names. Pricing and minimum number of users is similar to the US offering.
  • It sounds like HANA is a focus in Canada, but nothing concrete to talk about yet – seems like the analytics sales team is being focused on it and has built a good pipeline. Maple Leaf Foods, who spoke at the keynote, is considering it. The use cases exist, but the customer may not realize that the solutions to big data analytics are within their reach.
  • StreamWork is pretty much a big zero in Canada right now: they’re starting to talk to customers, but it sounds like very early days here. I was promised a follow-up on this question.
  • They’re putting a lot of weight on mobile apps for the future, particularly in industries that have remote users. I’m envisioning an underground miner with an iPad. ;)
  • The use of analytics such as BusinessObjects has become much more agile: it’s not taking 6 months to create an analytical view any more; the end users have the expectation that this can be done in a much shorter time.
  • I posed the question about how (or whether) all these great analytics are being used to generate events that feed back automatically into business processes; although there was recognition that there’s some interesting potential, it was a bit of a blank. This is the same question that I posed at last year’s SAPPHIRE about creating a link between their sustainability initiatives and BPM – I’m seeing this as a critical missing link from analytics through events back to processes.

A good opportunity for Q&A with Aboud and Stuart about what’s happening with SAP in Canada. Since most of my focus with SAP has been through the US conferences, it was nice to see what’s happening closer to home.

SAP Run Better Tour Toronto

SAP is holding a Run Better Tour to highlight some of their new releases and customer success stories, and today it’s in Toronto, which allows me to check it out without having to get on an airplane. I attended the Women’s Leadership Forum breakfast this morning, featuring Amanda Lang of CBC News, and she’s speaking again in the general keynote, along with Mark Aboud, Managing Director of SAP Canada.

To go off on a tangent for a moment, Lang had an interesting anecdote at breakfast from an interview that she did with the ambassador from Norway. Apparently, Norway mandated that there be equal representation of women in senior government and corporate board positions; all of the cries of “but there are no women to take these roles” turned out to be completely untrue once they were actually required to look for them. Very reminiscent of the brouhaha around women speakers at tech conferences that inevitably arises several times per year.

In her general keynote, Lang focused on the economy and market forces (after making a quick joke about economists getting laid), and the factors that could impact a return to prosperity: world instability, a repeat of the financial crisis due to mismanagement, and a decrease in productivity. In the relatively small Canadian market, we have no control over the first two of these – a financial crisis that impacts us is unlikely to come from our conservatively-run banks, but from US or European financial institutions – but we can be more productive. However, our productivity has declined in the past 20-30 years, and we are at risk of leaving our children worse off than we are. This started when our currency was so cheap, and our exports were selling at $0.60 on the dollar: no need to increase productivity when you can keep doing the same old thing and still make money at it. However, the past 8 years or so have seen an exchange rate increase such that our dollar sits near par with the US, which makes our exports much less competitive. Since we haven’t increased productivity, we don’t have better widgets to sell for less in spite of the exchange rate leveling. Productivity and innovation, although not identical, are highly correlated: we need to have more people inside organizations who challenge the status quo and bring forward better ideas for how to do things.

Mark Aboud started his presentation with the idea that you can’t just get better, you have to get better faster than your competition. Some of this is based on taming the explosion of data that is resulting from the digitalization of human culture: all of that needs to be gathered and analyzed, then made available to a variety of constituents via a number of different channels. Another contributor is social media, both in terms of the power that it has as a platform, but also in raising the expectations for user experience: the consumer experience is very powerful, but the typical employee experience is pretty lame. He moved on to talk about SAP, and particularly SAP Canada, where only 40% of their business is based on ERP: much of the rest is business analytics. This stress on analytics became obvious as he talked about one of their customers, Children’s Hospital of Eastern Ontario, and how they’re using a graphical real-time dashboard as their key interface in the emergency department to indicate how well they’re operating, and highlighting problem areas: a great analytics-in-action example, although it’s not clear where the underlying data is coming from. He also talked about CN Railways, and how they’re using BusinessObjects analytics to reduce their fuel costs.

Last up in the keynote was someone from Maple Leaf Foods (missed the name) talking about their ERP implementation, and how they use it to manage a company that has grown by acquisition and has very different types of operations in different regions, with 200 different systems and islands of data. They are trying to standardize their business processes across these units at some level, and started rolling out SAP in all of the business units early in 2011, with a planned completion date of early 2013. They’ve done 35 go-lives already, which necessitates a minimum of customization and, sometimes, changing their business processes to match out-of-the-box SAP rather than spending the time to customize SAP.

Good balance of keynotes; I’m now off to a bloggers’ briefing with Mark Aboud.

SAP Analytics Update

A group of bloggers had an update today from Steve Lucas, GM of the SAP business analytics group, covering what happened in 2010 and some outlook and strategy for 2011.

No surprise, they saw an explosion in growth in 2010: analytics has been identified as a key competitive differentiator for a couple of years now due to the huge growth in the amount of information and events being generated by every business; every organization is at least looking at business analytics, if not actually implementing it. SAP has approached analytics across several categories: analytic applications, performance management, business intelligence, information management, data warehousing, and governance/risk/compliance. In other words, it’s not just about the pretty data visualizations, but about all the data gathering, integration, cleanup, validation and storage that needs to go along with it. They’ve also released an analytics appliance, HANA, for sub-second data analysis and visualization on a massive scale. Add it all up, and you’ve got the right data, instantly available.

SAP Analytics products

New features in the recent product releases include an event processing/management component, to allow for real-time event insight for high-volume transactional systems: seems like a perfect match for monitoring events from, for example, an SAP ERP system. There has also been some deep integration into their ERP suite using the Business Intelligence Consumer Services (BICS) connector, although all of the new functionality in their analytics suite really pertains to BusinessObjects customers who are not SAP ERP customers; interestingly, he refers to customers who have an SAP analytics product but not their ERP suite as “non-SAP customers” – some things never change.
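
To illustrate the sort of real-time event insight being described, here’s a minimal windowed-rule sketch; it has nothing to do with SAP’s or Sybase/Aleri’s actual APIs, and the event fields and the three-failures-in-ten-minutes rule are invented:

```python
# Watch a stream of ERP-style events and raise an alert when a simple
# sliding-window rule fires. Purely illustrative.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
recent_failures = deque()

def on_event(event):
    # event: {"type": ..., "ts": datetime}
    if event["type"] != "delivery_failed":
        return None
    recent_failures.append(event["ts"])
    # Drop failures that have slid out of the 10-minute window.
    while recent_failures and event["ts"] - recent_failures[0] > WINDOW:
        recent_failures.popleft()
    if len(recent_failures) >= 3:
        # In a full solution this is where an exception process would be kicked
        # off in a BPM system rather than just printed.
        return f"ALERT: {len(recent_failures)} delivery failures in 10 minutes"
    return None

now = datetime(2011, 2, 15, 9, 0)
for offset in (0, 2, 4):
    msg = on_event({"type": "delivery_failed", "ts": now + timedelta(minutes=offset)})
    if msg:
        print(msg)
```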

In a move that will be cheered by every SAP analytics user, they’ve finally standardized the user interface so that all of their analytics products share a common (or similar, it wasn’t clear) user experience – this is a bit of catch-up on their part, since they’ve brought together a number of different analytics acquisitions to form their analytics suites.

They’ve been addressing the mobile market as well as the desktop market, and are committing to all mainstream mobile platforms, including RIM’s Playbook. They’re developing their own apps, which will hurt partners such as Roambi who have made use of the open APIs to build apps that access SAP analytics data; there will be more information about the SAP apps in some product announcements coming up on the 23rd. Mobile information consumption is good, and possibly sufficient for some users, but I still think that most people need the ability to take action on the analytics, not just view them. That tied into a question about social BI; Lucas responded that there would be more announcements on the 23rd, but also pointed us towards their StreamWork product, which provides more of the sort of event streaming and collaboration environment that I wrote about earlier in Appian’s Tempo. In other words, maybe the main app on a mobile device will be StreamWork, so that actions and collaboration can be done, rather than the analytics apps directly. It will be interesting to see how well they integrate analytics with StreamWork so that a user doesn’t have to hop around from app to app in order to view and take action on information.