TIBCONOW 2014 Day 2 Keynote: Product Direction

Yesterday’s keynote was less about TIBCO products and customers, and more about discussions with industry thought leaders about disruptive innovation. This morning’s keynote continued that theme with a pre-recorded interview with Vivek Ranadive and Microsoft CEO Satya Nadella talking about cloud, mobile, big data and the transformational effects on individual and business productivity. Nadella took this as an opportunity to plug Microsoft products such as Office 365, Cortana and Azure; eventually he moved on to talk about the role of leadership in providing a meaningful environment for people to work and thrive. Through the use of Microsoft products, of course.

Thankfully, we then moved on to actual TIBCO products.

We had a live demo of TIBCO Engage, their real-time customer engagement marketing product, showing how a store can recognize a customer and create a context-sensitive offer that can be immediately consumed via their mobile app. From the marketer’s side, they can define and monitor engagement flows — almost like mini-campaigns, such as social sharing in exchange for points, or enrolling in their VIP program — that are defined by their target, trigger and response. The target audience can be filtered by past interests or demographics; triggers can be a combination of geolocation (via their app), social media interactions, shopping cart contents and time of day; and responses may be an award such as loyalty points or a discount coupon, a message or both, with a follow link customized to the customer. A date range can then be set for each engagement flow, and its status set to live, scheduled, draft or review. Analytics are gathered as the flows execute, and the effectiveness can be measured in real time.
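
Based on what was shown in the demo, an engagement flow is essentially a declarative triple of target, trigger and response plus scheduling state. Here is a minimal sketch of what such a definition might look like; all type and field names are my own invention for illustration, not TIBCO Engage’s actual API:

```typescript
// Hypothetical model of an Engage-style engagement flow; every name here is
// illustrative, not TIBCO's actual API.
type FlowStatus = 'draft' | 'review' | 'scheduled' | 'live';

interface EngagementFlow {
  name: string;
  status: FlowStatus;
  activeRange: { start: Date; end: Date };
  target: { pastInterests?: string[]; demographics?: Record<string, string> };
  trigger: {
    geofence?: { lat: number; lon: number; radiusMeters: number };
    socialInteraction?: 'share' | 'mention';
    cartContains?: string[];
    timeOfDay?: { from: string; to: string }; // e.g. '09:00' to '17:00'
  };
  response: {
    award?: { loyaltyPoints?: number; couponCode?: string };
    message?: string;
    followLink?: (customerId: string) => string; // link customized per customer
  };
}

// Example: social sharing in exchange for points, one flow from the demo.
const shareForPoints: EngagementFlow = {
  name: 'Share for points',
  status: 'live',
  activeRange: { start: new Date('2014-12-01'), end: new Date('2014-12-31') },
  target: { pastInterests: ['running shoes'] },
  trigger: { socialInteraction: 'share' },
  response: {
    award: { loyaltyPoints: 100 },
    message: 'Thanks for sharing! 100 points added.',
    followLink: (id) => `https://example.com/offers?customer=${id}`,
  },
};
```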

Matt Quinn, TIBCO’s CTO, spoke about the challenges of fast data: volume, speed and complexity. We saw the three blocks of the TIBCO Fast Data platform — analytics, event processing, and integration — in a bit more detail, with Quinn describing how the three layers work together. Their strategy for the past 12 months, and going forward, has three prongs: evolution of the Fast Data platform; improved ease of use; and delivery of the Fast Data platform, including cloud and mobile support. The Fast Data platform appears to be a rebranding of their large portfolio of products as if it were a single integrated product; that’s a bit of marketing-speak, although they do appear to be doing a better job of providing integrations and use cases showing how the different products within the platform can be combined.


In the first part of the strategy, evolution of the platform (that is, product enhancements and new releases), they continue to make improvements to their messaging infrastructure. Fast, secure message transactions are where they started, and they continue to do this really well, in software and on their FTL appliances. Their ActiveSpaces in-memory data grid has improved monitoring and management, as well as multi-site replication, and is now more easily consumed via Node.js and other lighter-weight development stacks. BusinessWorks 6, their integration IDE, now provides more integrated development tooling with greatly improved user interfaces to more easily create and deploy integration applications. They’ve provided plug-ins for SaaS integrations such as Salesforce, and made it easier to create your own plug-ins for integration sources that they don’t yet support directly. On the event processing side, they’ve brought together some related products to more easily combine stream processing, rules and live data marts for real-time aggregation and visualization. And to serve the internet of things (IoT), they are providing connectivity to devices and sensors.
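
I haven’t seen the new Node.js client myself, so the following is purely a guess at what consuming an in-memory data grid from Node.js looks like — connect, then get/put by key — with every name invented for illustration rather than taken from the ActiveSpaces API:

```typescript
// Purely illustrative — NOT the actual ActiveSpaces client API.
interface DataGridSpace {
  put(key: string, value: unknown): Promise<void>;
  get<T>(key: string): Promise<T | undefined>;
}

// The connect function is hypothetical; whatever the real client provides,
// the usage pattern from application code tends to look like this.
async function demo(connect: (url: string) => Promise<DataGridSpace>) {
  const space = await connect('tcp://grid.example.com:8080'); // invented endpoint
  await space.put('order-42', { status: 'shipped', total: 99.5 });
  const order = await space.get<{ status: string }>('order-42');
  console.log(order?.status); // 'shipped'
}
```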


User experience is a big challenge for any enterprise software company, especially one that grows through acquisition: in general, user interfaces end up as a hodge-podge of inconsistent designs. TIBCO is certainly making some headway at refactoring these into a more consistent and easier-to-use suite of interfaces. They’ve improved the tooling in the BusinessWorks IDE, but also the administration and management of integrations during development, deployment and runtime. They’ve provided a graphical UI designer for master data management (MDM). As part of the ease-of-use initiative, Quinn discussed the case management functions added to AMX BPM, including manual and automatic ad hoc tasks, case folders and documents with CMIS/ECMS access, and support for elastic organization structures (branch model). BPM reporting has also been improved through the integration of Jaspersoft (acquired by TIBCO earlier this year) with out-of-the-box and customizable reports, and Jaspersoft itself has been enhanced to more easily embed analytics in any application. They still need to do some work on interoperability between Jaspersoft and Spotfire: having two analytics platforms is not good for customers, who can’t figure out when to use which, and how to move between them.

The third prong of the strategy, delivery of the platform, is being addressed by offering on-premise, cloud, Silver Fabric platform-as-a-service, TIBCO Cloud Bus for hybrid cloud/on premise configurations, consumable apps and more; it’s not clear that you can get everything on every delivery platform, and I suspect that customers will have challenges here as TIBCO continues to build out their capabilities. In the near future, they will launch Simplr for non-technical integration (similar to IFTTT), and Expresso for consuming APIs. They are also releasing TIBCO Clarity for cleansing cloud data, providing cleaner input for these situational consumable apps. For TIBCO Engage, which we saw demonstrated earlier, they will be adding next best engagement optimization and support for third-party mobile wallets, which should improve the hit rate on their customer engagement flows.

He discussed some of the trends that they are seeing impacting business, and which they have on the drawing board for TIBCO products: socialization and gamification of everything; cloud requirements becoming hybrid to combine public cloud, private cloud and on premise; the rise of micro-services from a wide variety of sources that can be combined into apps; and HTML5/web-based developer tooling rather than the heavier Eclipse environments. They are working on Project Athena, a triplestore database that includes context to allow for faster decisioning; this will start to show up in some of the future product development.

Good review of the last year of product development and what to expect in the next year.

The keynote finished with Raj Verma, EVP of sales, presenting “trailblazer” awards to their customers that are using TIBCO technologies as part of their transformative innovation: Softrek for their ClearView CRM that embeds Jaspersoft; General Mills for their internal use of Spotfire for product and brand management; jetBlue for their use of TIBCO integration and eventing for operations and customer-facing services; and Three (UK telecom) for their use of TIBCO integration and eventing for customer engagement.

Thankfully shorter than yesterday’s 3-hour marathon keynote, and lots of good product updates.

Spotfire Content Analytics At TIBCONOW

(This session was from late yesterday afternoon, but I didn’t remember to post until this morning. Oops.)

Update: the speakers were Thomas Blomberg from TIBCO and Rik Tamm-Daniels from Attivio. Thanks, guys!

I went to the last breakout on Monday to look at the new Spotfire Content Analytics, which combines Spotfire in-memory analytics and visualization with Attivio content analysis and extraction. This is something that the ECM vendors (e.g., IBM FileNet) have been offering for a while, and I was interested to see the Spotfire take on it.

Basically, content analytics is about analyzing documents, emails, blogs, press releases, website content and other human-created textual data (also known as unstructured content) in order to find insights; these days, a primary use case is to determine sentiment in social media and other public data, in order for a company to get ahead of any potential PR disasters.

Spotfire Content Analytics — or rather, the Attivio engine that powers the extraction — uses four techniques to find relevant information in unstructured content:

  • Text extraction, including metadata
  • Key phrase analysis, using linguistics to find “interesting” phrases
  • Entity extraction, identifying people, companies, places, products, etc.
  • Sentiment analysis, to determine degree of negative/positive sentiment and confidence in that score
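
As a toy illustration of the last two steps — not how Attivio actually does it, since real engines use trained linguistic models rather than word lists — the pipeline reduces to functions like these:

```typescript
// Toy sketch of key phrase and sentiment extraction; real engines use
// trained linguistic models, not word lists and regexes.
const POSITIVE = new Set(['great', 'love', 'excellent']);
const NEGATIVE = new Set(['war', 'plague', 'terrible']);

function keyPhrases(text: string): string[] {
  // Naive stand-in for linguistic key phrase analysis: capitalized bigrams.
  return text.match(/[A-Z][a-z]+ [A-Z][a-z]+/g) ?? [];
}

function sentiment(text: string): { score: number; confidence: number } {
  const words = text.toLowerCase().split(/\W+/);
  const pos = words.filter((w) => POSITIVE.has(w)).length;
  const neg = words.filter((w) => NEGATIVE.has(w)).length;
  const hits = pos + neg;
  return {
    score: hits ? (pos - neg) / hits : 0, // -1 (negative) .. +1 (positive)
    confidence: Math.min(1, hits / 5),    // more evidence, more confidence
  };
}

const doc = 'President Smith says Ruritania faces war and plague, while New Freedonia is doing great.';
console.log(keyPhrases(doc)); // ['President Smith', 'New Freedonia']
console.log(sentiment(doc));  // { score: -0.33..., confidence: 0.6 }
```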

Once the piece of content has been analyzed to extract this relevant information, more traditional analytics can be applied to detect patterns, tie these back to revenue, and allow for handling of potential high-value or high-risk situations.

Spotfire Content Analytics uses machine learning that allows you to train the system using sample data, since the information that is considered relevant is highly dependent on the specific content type (e.g., a tweet versus a product review). They provide rich text analytics, seamless visualization via Spotfire, agility through combining sources and transformations, and support for diverse content sources. They showed a demo based on a news feed by country from the CIA factbook site (I think), analyzing and showing aggregate sentiment about countries: as you can imagine, countries experiencing war and plague right now aren’t viewed very positively. Visualization using Spotfire allows for some nice geographic map-based searching, as well as text searching. The product will be available later this month (November 2014).

Great visualizations, as you would expect from Spotfire; it will be interesting to see how this measures up to IBM’s and other content analytics offerings once it’s released.

BPM For Today At TIBCONOW

Roger King, who heads up TIBCO’s BPM product strategy, gave us an update on ActiveMatrix BPM, and some of the iProcess to AMX BPM tooling (there is a separate session on this tomorrow that I may attend, so possibly more on that then). It’s been four years since they launched AMX BPM; that forms the model-driven implementation side of their BPM offering, augmented by Nimbus for business stakeholders for procedure documentation and business-IT collaboration. AMX BPM provides a number of process patterns (e.g., maker-checker) built in, intelligent work and resource management, actionable analytic insights and more. This is built on an enterprise-strength platform — as you would expect from TIBCO — to support 24×7 real-time operations.

In May of this year, they released AMX BPM 3.0 with a number of new features:

  • Support for all styles of processes in a single solution: human workflow, case management, rules-based processes, automation, etc.
  • To support case management, global data allows a case data model to be created in a central repository separate from processes, so that cases exist independently of processes although they can be acted upon by them. Work items representing actions on cases can retrieve and update case data on demand, since they reference the case data rather than having it copied to local instance data.
  • In work management enhancements, support for elastic organizations (branches, such as you see in retail banking). This allows defining a model for a branch — you could have different models for different sizes of branches, for example — then linking to those models from branch nodes in the static organization model. Work can then be managed relative to the features of those underlying models, e.g., “send to manager”.
  • Also in work management, they have added dynamic performers to allow for distribution based on business data in a running instance rather than pre-determined role assignments. This is supported by dynamic RQL (resource query language), a query language specifically for manipulating resource assignments; see the conceptual sketch after this list.
  • Some new LDAP functions.
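
To make the elastic organization and dynamic performer ideas concrete, here is a conceptual sketch of resolving “send to manager” against whichever branch model a work item belongs to, with the performer chosen from instance data. This is my own illustration — AMX BPM expresses this through its organization model and RQL, whose actual syntax isn’t shown here:

```typescript
// Conceptual sketch only — not TIBCO's organization model or RQL syntax.
interface BranchModel {
  name: string;
  roles: Record<string, string[]>; // role name -> user ids
}

// "Elastic organization": each branch node links to a branch model template.
const branches: Record<string, BranchModel> = {
  downtown: { name: 'Downtown', roles: { manager: ['alice'], teller: ['bob', 'carol'] } },
  uptown:   { name: 'Uptown',   roles: { manager: ['dave'],  teller: ['erin'] } },
};

// Resolve a role relative to the underlying branch model, e.g. "send to manager".
function resolveRole(branchId: string, role: string): string[] {
  return branches[branchId]?.roles[role] ?? [];
}

// "Dynamic performer": distribution driven by business data in the running
// instance, not a pre-determined role assignment.
function resolvePerformer(instance: { branchId: string; amount: number }): string[] {
  const role = instance.amount > 10000 ? 'manager' : 'teller';
  return resolveRole(instance.branchId, role);
}

console.log(resolvePerformer({ branchId: 'downtown', amount: 25000 })); // ['alice']
```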

There will be another session on Wednesday that covers features added since May, including a lot about case management; I’ll report more from that.

He also gave us some of the details of the iProcess to AMX BPM “conversion” tools, which migrate the process models (although not the applications that use those models): I assume that the conversion rate of their iProcess customers to AMX BPM has been lower than they expected, and they are hoping that this will move things along.

We then heard a Nimbus update from Dan Egan: version 9.5 will be released this month. Nimbus is positioned as a “how to” guide for the enterprise, showing process models in a more consumable format than a full technical BPMN model. They have added collaboration capabilities so that users can review and provide feedback on the business processes, and the ability to model multiple process variants as multiple drill-downs from a single object. The idea is that you use Nimbus both as a place to document manual procedures that people need to perform, and as a process discovery tool for eventual automation, although the former is what Nimbus was originally designed for and seems to still be the main use case. They’ve spiffed up the UI, and will soon be offering their authoring, admin and governance functions on the web, allowing them to offer a fully web-based solution.

Nimbus uses their universal process notation (UPN) rather than BPMN for process models; in response to a question about Nimbus supporting BPMN, King stated that they do not believe BPMN is a user-consumable format. They don’t have tooling — or at least haven’t talked about it — to convert UPN to BPMN; they’re going to need that if they want to position UPN as being for business-led process discovery as well as procedural documentation.

If you want to see the replay of this morning’s keynote, or watch tomorrow’s keynotes live or on demand, you can see them here.

BPM COE at TIBCONOW 2014

Raisa Mahomed of TIBCO presented a breakout session on best practices for building a BPM center of excellence. She started with a description of different types of COEs based on Forrester’s divisions (I’m too lazy to hack the HTML to add a table in WordPress for Android, so imagine a 2×2 quadrant with one axis being centralized versus decentralized, the other tactical, i.e., focused on cost and efficiency, versus strategic, i.e., focused on revenue and growth):

  • Center of Expertise (decentralized, strategic) – empowers business stakeholders with expert assistance, provides best practice, governance, technology that is configurable and consumable by business
  • Center of Excellence (centralized, strategic) – governs all processes in organization, enforces strict guidelines and process methodology governance, owns the BPMS, engagement models foster trust and collaboration including internal evangelists
  • Community of Practice (decentralized, tactical) – small teams, departmental priorities and scope, basic workflow capabilities, little or no governance
  • Process Factory (centralized, tactical) – optimized for process automation projects, processes as application development, frameworks

Center of Expertise and Process Factory work well together and are often seen in combination.


Best practices (these went by pretty quickly with a lot of detail on the slides, so I’ve just tried to capture some of the high points):

  • Find executive sponsorship for the COE: they must be influential across the organization, and be in the right place for the COE within your organization (e.g., COO, CIO, separate architecture group)
  • Create a governance framework – style will be based on the type(s) of COEs in use
  • Establish a methodology, which may have to accommodate different levels of BPM maturity within organization; be sure to address reusability and common components
  • Start with a core process, but relatively low complexity – this is exactly what I recommend, and I’m always frustrated by the “experts” that recommend starting with a non-core process even if the core processes are the target for implementation.
  • Encourage innovation and introduce disruptive technology.
  • Collaboration is key, via co-location and online collaboration spaces.
  • Don’t skip the metrics: remember that measuring project success is essential for future funding, as well as day-to-day operations and feeding the continuous improvement cycle.
  • Don’t let the program go stale, or become an ivory tower; rotate SMEs from the COE back into the business.
  • There’s not a single BPM skillset: you need a variety of skills spread across multiple people and roles.
  • Make a business case to provide justification for BPM projects.
  • Empower and educate through training and change management.
  • Avoid the “build it and they will come” mentality: just because you create some cool technology, that doesn’t mean that business people will stop doing the things that they’re doing to take it up.
  • Institute formal reviews of process models and solutions.

Nothing revolutionary here, but a good introduction and review of the best practices.

TIBCONOW 2014 Opening Keynote: @Gladwell and More. Much More.

San Francisco! Finally, a large vendor figured out that they really can do a 2,500-person conference here rather than Las Vegas, it just means that attendees are spread out in a number of local hotels rather than in one monster location. Feels like home.

It seems impossible that I haven’t blogged about TIBCO in so long: I know that I was at last year’s conference but was a speaker (as I am this year) so may have been distracted by that. Also, they somehow missed giving me a briefing about the upcoming ActiveMatrix BPM release, which was supposed to be relatively minor but ended up a bit bigger — I’ll be at the breakout session on that later today.

We started the first day with a marathon keynote, with TIBCO CEO Vivek Ranadive welcoming San Francisco’s mayor, Ed Lee, for a brief address about how technology is fueling San Francisco’s growth and employment, as well as helping the city government to run more effectively. The city actually has a chief data officer responsible for its open data initiatives.

Ranadive addressed the private equity buy-out of TIBCO head-on: 15 years ago, they took the company public, and by the end of this year, they will be a private company again. I think that this is a good thing, since it removes them from the pressures of quarterly public filings, which artificially impact product announcements and sales. It allows them to make any necessary organizational restructuring or divestiture without being punished on the stock market. It’s also way better than being absorbed by one of the bigger tech companies, where the product lines would have to be realigned with incumbent technologies. He talked about key changes in the past years: the explosion of data; the rise of mobility; the emergence of social platforms; the growth of Asian economies; and how math is trumping science by making the “how” more important than the “why”. Wicked problems, but some wicked solutions, too. He claims that every industry will have an “Uberization”: controversies aside, companies such as Uber and AirBnB are letting service businesses flourish on a small scale using technology and social networks.

We then heard from Malcolm Gladwell — he featured Ranadive in one of his books — on technology-driven transformation, and the kinds of attitudes that make this possible. He told the story of Malcolm McLean, who created the first feasible intermodal containerized shipping in the 1950s because of his frustration with how long it took to unload his truck fleet at seaports, and how that innovation transformed the physical goods economy. In order to do this, McLean had to overcome the popular opinion that containerized shipping would fail (based on earlier failed attempts by others): as Gladwell put it, he had the three necessary characteristics of successful entrepreneurs: he was open/imaginative with creative ideas; he was conscientious and had the discipline to bring ideas to fruition including a transformation of the supply chain and sales model; and he was “disagreeable”, that is, had the resolve to pursue an idea in the face of his peers’ disapproval and ridicule. Every transformative innovation must be driven by someone with these three traits, who has the imagination to reframe the incumbent business to address unmet needs, and kill the sacred cows. Great talk.

Ranadive then invited Marc Andreessen on stage for a conversation (Andreessen thanked him for letting him “follow Malcolm freaking Gladwell on the stage”) about innovation, which Andreessen says is currently driven by mobile devices: businesses now must assume that every customer is connected 24×7 with a mobile device. This provides incredible opportunities — allowing customers to order products/services on the go — but also threats for businesses behind the curve, who will see customers comparing them to their competitors in real-time before making a purchasing decision. They discussed the future of work; Andreessen sees this as leveraging small teams, but says that things need to change to make that successful, including incentives (a particular interest of mine, since I’ve been looking at incentives for collaboration amongst knowledge workers). Diversity is becoming a competitive advantage since it draws talent from a larger pool. He talked about the success rates of typical venture-funded companies, such as those that they fund: of 4,000 companies, about 15 will make it to being big companies, that is, with a revenue of $100M or more that would position them to go public; most of their profits as a VC come from those 15 companies. They fund good ideas that look like terrible ideas, because if everyone thought that these were great ideas, the big companies would already be doing them; the trick is filtering out all of the ideas that look terrible because they actually are. More important is the team: a bad team can ruin a good idea, but a great team with a bad idea can find their way to a good idea.

Next up was TIBCO’s CTO Matt Quinn talking with Box CEO Aaron Levie: Box has been innovating in the enterprise by taking the consumer cloud storage that we were accustomed to, and bringing it into the enterprise space. This not only enables internal innovation because of the drastically lower cost and simpler user experience than enterprise content solutions such as SharePoint, but also has the ability to transform the interface between businesses and their customers. Removing storage constraints is critical to supporting that explosion of data that Ranadive talked about earlier, enabling the internet of everything.

We saw a pre-recorded interview that Ranadive did with PepsiCo CEO Indra Nooyi: she discussed the requirement to perform while transforming, and the increase in transparency (and loss of privacy) as companies seek to engage with customers. She characterized a leader’s role as that of not just envisioning the future, but making that vision visible and attainable.

Mitch Barns, CEO of Nielsen (the company that measures and analyzes what people watch on TV), talked about how their business of measurement has changed as people went from watching broadcast TV at times determined by the broadcasters, to time-shifting with DVRs and consuming TV content on mobile devices on demand. They have had to shift their methods and business to accommodate this change in viewing models, and deal with a flood of data about how, when and where that consumption is occurring.

I have to confess, by this point, 2.5 hours into the keynote without a break, my attention span was not what it could have been. Or maybe these later speakers just didn’t inspire me as much as Gladwell and Andreessen.

Martin Taylor from Vista Equity Partners, the soon-to-be owners of TIBCO, spoke next about what they do and their vision for TIBCO. Taylor was at Microsoft for 14 years before joining Vista, and helps to support their focus on applying their best practices and operating platform to technology companies that they acquire. Since their start in 2000, they have spent over $14B on 140 transactions in enterprise software. He showed some of their companies; since most of these are vertical industry solutions, TIBCO is the only name on that slide that I recognized. They attempt to foster collaboration between their portfolio companies: not just sharing best practices, but doing business together where possible; I assume that this could be very good for TIBCO as a horizontal platform provider that could be leveraged by their sibling companies. The technology best practices that they apply to their companies include improved product management roadmaps that address the needs of their customers, and improved R&D practices to speed product release cycles and improve quality. They’re still working through the paperwork and regulatory issues, but are starting to work with the TIBCO internal teams to ensure a smooth transition. It doesn’t sound as if there will be any big technology leadership changes, but a continued drive into new technologies including cloud, IoT, big data and more.

Murray Rode, TIBCO’s COO, finished up the keynote talking about their Fast Data positioning: organizations are collecting a massive volume of data, but that data has a definite shelf life and degrades in value over time. In order to take advantage of short-lived opportunities where timing is everything, you have to be able to analyze and take actions on that data quickly. As he put it, big data lets you understand what’s already happened, but fast data lets you influence what’s about to happen. To do this, you need to combine analytics to define situations of interest and decisions; event processing to understand and act on real-time information; and integration (including BPM) to unify your transactional and big data sources. Rode outlined the four themes of their positioning: expanded reach, ease of consumption, compelling user journey, and faster time to value; I expect that we will see more along these themes throughout the conference.

All in all, a great keynote, even though it stretched to an ass-numbing three hours.

Disclosure: TIBCO is paying my expenses to be at TIBCO NOW and a speaking fee for me to be on a panel tomorrow. What I write here is my own opinion, and I am not compensated in any way for blogging.

SAP’s Bigger Picture: The AppDev Play

Although I attended some sessions related to BPM and operational process intelligence, last week’s trip to SAP TechEd && d-code 2014 gave me a bit more breathing room to look at the bigger picture — and aspirations — of SAP and their business technology offerings.

I started coming to SAPPHIRE and TechEd when SAP released a BPM product, which means that my area of interest was a tiny part of their primary focus on ERP, financials and related software solutions; most of the attendees (including the analysts and bloggers) at that time were more concerned with licensing models for their Business Suite software than new technology platforms. Fast forwarding, SAP is retooling their core software applications using HANA as an in-memory platform (cloud or on-premise) and SAP UI5/Fiori for user experience, but there’s something much bigger than that afoot: SAP is making a significant development platform play using those same technologies that are working so well for their own application refactoring. In other words, you can consider SAP’s software applications groups to be software developers who use SAP platforms and tools, but those tools are also available to external developers who are building applications completely unrelated to SAP applications.

They have some strong components: in-memory database, analytics, cloud, UI frameworks; they are also starting to push down more functionality into HANA such as some rudimentary rules and process functionality that can be leveraged by a development team that doesn’t want to add a full-fledged BRM or BPM system.

This is definitely a shift for SAP over the past few years, and one that most of their customers are likely unaware of; the question becomes whether their application development tools are sufficiently compelling for independent software development shops to take a look.

Disclaimer: SAP paid my travel expenses to be at TechEd last week. I was not compensated for my time in any way, including writing, and the opinions here are my own.

What’s New With SAP Operational Process Intelligence

Just finishing up some notes from my trip to SAP TechEd && d-code last week with the latest on their Operational Process Intelligence product, which can pull events and data from multiple systems – including SAP’s ERP and other core enterprise systems as well as SAP BPM – and provides real-time analytics via their HANA in-memory database. I attended a session on this, then had an individual briefing later to round things out.

Big processes are becoming a thing, and if you have big processes (that is, processes that span multiple systems, and consume/emit high volumes of data from a variety of sources), you need to have operational intelligence integrated into those processes. SAP is addressing this with their SAP Operational Process Intelligence, or what they see as a GPS for your business: a holistic view of where you are relative to your goals, the obstacles in your path, and the best way to reach your goals. It’s not just about what has happened already (traditional business intelligence), but what is happening right now (real-time analytics), what is going to happen (predictive analytics) and the ability to adjust the business process to accommodate the changing environment (sense and respond). Furthermore, it includes data and events from multiple systems, hence needs to provide scope beyond any one system’s analytics; narrow scope has been a major shortcoming of BPMS-based analytics in the past.

In a breakout session, Thomas Volmering and Harsh Jegadeesan gave an update and demo on the latest in their OPInt product. There are some new visualization features since I last saw it, plus the ability to do more with guided tasks including kicking off other processes, and trigger alerts based on KPIs. Their demo is based on a real logistics hub operation, which combines a wide variety of people, processes and systems, with the added complexity of physical goods movement.

Although rules have always been a part of their product suite, BRM is being highlighted as a more active participant in detecting conditions, then making predictions and recommendations, leveraging the ability to run rules directly in HANA: putting real-time guardrails around a business process or scenario. They also use rules to instantiate processes in BPM, such as for exception handling. This closer integration of rules is new since I last saw OPInt back at SAPPHIRE, and clearly elevates this from an analytics application to an operational intelligence platform that can sense and respond to events. Since SAP BPM has been able to use HANA as a database platform for at least a year, I assume that we will eventually see some BPM functionality (besides simple queuing) pushed down into HANA, as they have done with BRM, allowing for more predictive behavior and analytics-dependent functions such as work management to be built into BPM processes. As it is, hosting BPM on HANA allows the real-time data to be integrated directly into any other analytics, including OPInt.
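
The sense-and-respond pattern being described — rules watching live operational data, then instantiating a BPM process when a condition is detected — reduces conceptually to something like the sketch below. This is my own illustration of the pattern, not SAP’s HANA rules or BPM APIs:

```typescript
// Conceptual sense-and-respond loop; names are illustrative, not SAP APIs.
interface Shipment { id: string; delayMinutes: number }

type Action = { startProcess: string; payload: Shipment };

// "Sense": a rule detects a condition on live operational data.
function lateShipmentRule(s: Shipment): Action | null {
  return s.delayMinutes > 60
    ? { startProcess: 'ExceptionHandling', payload: s } // instantiate a process
    : null;
}

// "Respond": in OPInt this would kick off an exception-handling process in BPM.
function dispatch(action: Action) {
  console.log(`starting ${action.startProcess} for shipment ${action.payload.id}`);
}

const events: Shipment[] = [
  { id: 'S-1', delayMinutes: 15 },
  { id: 'S-2', delayMinutes: 95 },
];
for (const e of events) {
  const action = lateShipmentRule(e);
  if (action) dispatch(action); // only S-2 triggers exception handling
}
```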

OPInt provides ad hoc task management using a modern collaborative UI to define actions, tasks and participants; this is providing the primary “case management” capability now, although it’s really somewhat simpler collaborative task management. With HANA behind the scenes, however, there is the opportunity for SAP to take this further down the road towards full case management, although the separation of this from their BPM platform may not prove to be a good thing for all of the hybrid structured/unstructured processes out there.

The creation of the underlying models looks similar to what I’ve been seeing from them for a while: the business scenario is defined as a graphical flow model (or imported from a process in Business Suite), organized into phases and milestones that will frame the visualization, and connected to the data sources; but now the rules can be identified directly on the process elements. The dashboard is automatically created, although it can be customized. In a new view (currently still in the lab), you will also be able to see the underlying process model with overlaid analytics, e.g., cost data; this seems like a perfect opportunity for a process mining/discovery visualization, although that’s more of a tool for an analyst than whoever might be monitoring a process in real-time.

SAP TechEd Keynote with @_bgoerke

I spent yesterday getting to Las Vegas for SAP TechEd && d-code and missed last night’s keynote with Steve Lucas, but was up this morning to watch Björn Goerke — head of SAP Product & Innovation Technology — give the morning keynote on putting new technology into action. With the increasing rate of digital disruption, it’s imperative to embrace new ways of doing business, or risk becoming obsolete; this requires taking advantage of big data and real-time analytics as well as modern platforms. SAP’s current catch phrase is “Run Simple”, based in part on the idea of “one truth”, that is, one place for all your data so that you have a real-time view of your business rather than relying on separate sources for operations and analytics. You can’t run — and respond — at the speed that business requires if your analytics are based on yesterday’s transactions.

SAP HANA — their in-memory data store — allows for real-time analytics directly on operational transaction data, events, IoT machine data, social media data and more, all in a single data store. With the release of SAP HANA SPS09, they are adding support for dynamic tiering, streaming, enterprise information management, graph processing, Hadoop user-defined functions, and multi-tenancy; these improve the management capabilities as well as the functionality. SAP deploys all of their business software solutions on HANA (although some more traditional databases are still supported in some products) with the goal of providing the basis for the “one truth” within business data.

Goerke was joined on stage by a representative from Alliander, an energy distribution company based in the Netherlands, and he demonstrated a HANA-based analytical dashboard based on geographic data that reduces the time required for geospatial queries — such as filtering by pipelines that are within a certain distance from buildings — from hours using more traditional database technology, to seconds with HANA. Geospatial data is one of the areas where in-memory data and analytics can really make a difference in terms of performance; I did a lot of my early-career software development on geospatial data, and there are some tough problems here that are not easily addressed by more traditional tools.

Another part of the simplicity message is “one experience” via the SAPUI5-based Fiori, providing for a more unified experience between desktop and mobile, including management and distribution of mobile apps. They’ve added offline capabilities for their mobile apps – a capability widely ignored or dismissed as “unimportant” by developers who live and work only in areas blanketed in 4G and WiFi coverage, but critical in many real-world applications. Goerke demonstrated using some of the application development services — with some “help” from Ian Kimbell — to define an API, use it to create a mobile app, deploy it to a company app store, then install and run it: not something that most executives do live on stage at a keynote.

SAP now has a number of partnerships with hardware and infrastructure vendors to optimize their gear for SAP and especially for HANA: last week we saw an announcement about SAP running on the IBM cloud, and today we heard about how sgi is taking their well-known computational hardware capabilities and applying them to running transactional platforms such as SAP. SAP has also partnered with small software development shops to deliver the innovations in HANA-based applications needed to drive this forward. Applications developed on HANA can run on premise or in SAP’s managed cloud (and now IBM’s managed cloud), where they manage HANA and the SAP applications including Business Suite and Business Warehouse. Through a number of strategic acquisitions, SAP has much more than just your ERP and financials, however: they offer solutions for HR management, procurement, e-commerce, customer engagement and more. They also offer a rich set of development tools and application services for software development unrelated to SAP applications, allowing for applications built and deployed on HANA with modern mobile user interfaces and collaboration. In keeping with Goerke’s Star Trek theme in the keynote, very Borg-like. 🙂

Lots more here than I could possibly capture; you can watch the keynotes and other presentations online at SAP TechEd online.

AIIM Information Chaos Rescue Mission – Toronto Edition

AIIM is holding a series of ECM-related seminars across North America, and since today’s is practically in my back yard, I decided to check it out. It’s a free seminar and therefore heavily sponsored; most of the talks are from the sponsor vendors or conversations with them, but John Mancini kicked things off and moderated mini-panels with the sponsor speakers to tease out some of the common threads.

The morning started with John Mancini talking about disruptive consumer technologies — cloud, mobile, IoT — and how these are “breaking” our internal business processes by fragmenting the channels and information sources. The result is information chaos, where information about a client lives in multiple places and often can’t be properly aggregated and contextualized, while still remaining secure. Our legacy systems, designed to be secure, were put in place before the devices that are causing security leaks were even invented; those older systems can’t even envision all the ways that information can leak out of an organization. Furthermore, the more consumer technologies advance, the further behind our IT people seem, making it more likely that business users will just go outside/around IT for what they need. New technologies need to be put in the context of our information management practices, and those practices adjusted to include the disruptors rather than just ignoring them: consider how to minimize risk in this information chaos state; how to use information to engage and collaborate, rather than just shutting it away in a vault; how to automate processes that involve information that may not be stored in an ECM; and how to extract insights from this information.

A speaker from Fujitsu was up next, citing some interesting statistics on just how big the information chaos problem is:

  • 50% of business documents are still on paper; most businesses have many of their processes still reliant on paper.
  • Departmental CM systems have proliferated: 75% of organizations with a CM system have more than one, and 25% have more than four. SharePoint is like a virus among them, with an estimated 50% of organizations worldwide using SharePoint ostensibly for collaboration, but usually for ad hoc content management.
  • Legacy CM systems are themselves a hidden source of costs, inefficiency and risk.

In other words, we have a lot of problems to tackle still: large organizations tend to have a lot of non-integrated content management systems; smaller organizations tend to have none at all.

We finished the first morning segment with introductions from the event sponsors, who had small booths around the room.

An obvious omission (to me, anyway) was IBM/FileNet — not sure why they are not here as a sponsor considering that they have a sizable local contingent.

The rest of the morning was taken up with two sets of short vendor presentations, each followed by a Q&A session moderated by John Mancini: first Epson, K2 and EMC; then KnowledgeLake, HP Autonomy, Kodak Alaris and OpenText. There were audience questions about information security and risk, collaboration/case management, ECM benefits and change management, auto-classification, SharePoint proliferation, cloud storage, managing content retention and disposal, and many other topics; lots of good discussions from the panelists. I was amazed (or maybe just sadly accepting) at the number of questions dealing with paper capture and disposal; I’ve been working in scanning/workflow/ECM/BPM since the late ’80s, and apparently there are still a lot of people and processes resistant to getting rid of paper. As a small business owner, I run a paperless office, and have spent a big chunk of my career helping much larger enterprises go paperless as part of streamlining their processes, so I know that this is not only possible, but has a lot of benefits. As one of the vendors pointed out, just do something, rather than sitting frozen, surrounded by ever-increasing piles of paper.

I skipped out at lunchtime and missed the closing keynote since it was the only bit remaining after the lunch break, although it looked like a lot of the customer attendees stayed around for the closing and the prize draws afterwards, and to spend time chatting with the vendors.

Thanks to AIIM and the sponsors for today’s seminar; the presentations were a bit too sales-y for me but some good nuggets of information. There’s still one remaining in Chicago and one in Minneapolis coming up next week if you want to sign up.

What’s Next In camunda – Wrapping Up Community Day

We finished the camunda community day with an update from camunda on features coming in 7.2 next month, and the future roadmap. camunda releases the community edition in advance of the commercial edition; this is the way that open source should work, but some commercial open source vendors switch that around so that the community version lags by as much as a full version.

The highlights of the 7.2 release are as follows:

  • CMMN-based case management engine, which includes the core activities (stages, human tasks, process tasks, case tasks, milestones and sentries), the base case instance and plan item lifecycle, and a CMMN model API and REST API on a common process engine. They demonstrated a basic case manager UI that can manage cases and the related tasks; I assume that this is really just a demo of what can be done rather than intended as production code. They also don’t have case modeling in their modeler yet, so it’s early days.
  • A variety of functions for speeding development: connectors (currently REST and SOAP), data formats, templating and scripting (calling external scripts, currently Groovy or JavaScript but with others to come)
  • New tasklist, updating the tasklist UI that they released just before announcing camunda as an open source project. It allows filters to be defined, including specifying who can see the results of a filter in addition to the search criteria; that filter then appears as a tab on the task list, in the color defined by the filter author. The sort order can’t currently be defined as part of the filter, but can be set on the general tasklist interface. This adds a third (left) column to the tasklist UI, which also shows the list of tasks and the form for the selected task. Still work to be done, but the new filters capability is a big step up, providing conceptually similar (but graphically much different) functionality to the Brazos portal filters.
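
For a sense of what the filter capability looks like from the outside, here is a sketch against camunda’s REST API (default /engine-rest root). The endpoint shapes are as I recall them from the camunda documentation, so treat the details — especially the filter properties — as assumptions to verify against the 7.2 docs:

```typescript
// Sketch against camunda's REST API; verify endpoint/property details
// against the camunda 7.2 documentation before relying on them.
const BASE = 'http://localhost:8080/engine-rest';

// Fetch the open tasks for one assignee — the kind of query a filter wraps.
async function tasksFor(assignee: string) {
  const res = await fetch(`${BASE}/task?assignee=${encodeURIComponent(assignee)}`);
  return res.json();
}

// Create a named task filter like those in the new tasklist; 'color' is the
// property I believe controls the tab color set by the filter author.
async function createFilter(name: string, assignee: string) {
  const res = await fetch(`${BASE}/filter`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      resourceType: 'Task',
      name,
      query: { assignee },
      properties: { color: '#3e4d2f' },
    }),
  });
  return res.json();
}

createFilter('My tasks', 'demo')
  .then(() => tasksFor('demo'))
  .then((tasks) => console.log(tasks.length, 'open tasks'));
```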

There were a number of other smaller enhancements and fixes, from platform support to performance improvements to new functions.

We also saw some work in progress from the labs. First of all, an update on bpmn.io, which I saw at bpmNEXT earlier this year: a BPMN viewer and web modeler. The viewer allows embedding of a BPMN diagram into a web page, including adding annotations, overlays and markers on the diagram, via a JavaScript API. Check out a live demo here, demonstrating a BPMN diff function based on two similar process diagrams. From the viewer, you can export the model to a file. You can also create BPMN diagrams from scratch or import from a file, either directly on their site or embedded in another page. The modeler is still a bit basic, and doesn’t handle containers (pools, lanes, subprocesses) very well yet, but that’s all coming; keep up with new functionality on the bpmn.io blog.
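
Embedding the viewer in a page takes only a few lines. This sketch uses the bpmn-js API as it exists in current releases (promise-based importXML plus the overlays service); the details may differ from the 2014-era builds:

```typescript
import BpmnViewer from 'bpmn-js';

// Render a BPMN 2.0 XML diagram into a page element, then add an overlay
// badge on one element — the same mechanism a diff demo can use for markers.
const viewer = new BpmnViewer({ container: '#canvas' });

async function render(bpmnXML: string) {
  await viewer.importXML(bpmnXML); // promise-based in current bpmn-js releases
  const overlays: any = viewer.get('overlays');
  overlays.add('Task_1', {         // 'Task_1' must be an element id in the XML
    position: { top: -10, right: 10 },
    html: '<span class="badge">changed</span>',
  });
}
```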

Another lab project is the camunda BPM workbench, a debugging tool that allows inspection of the runtime state of processes alongside the process model, allowing breakpoints to be set in the process model (rather than in code). A console interface allows for interrogation and updating of the process variables as the developer steps through the process. The process model is displayed using the bpmn.io viewer.

At the end of all the roadmap sessions, the audience had a chance to say what was most important for them in terms of what will be implemented when; there were questions about case management, centralized model repositories, bulk runtime operations and other features.

A great half-day; this is the first time that I’ve attended an open source community day, and it’s quite a different environment from a typical vendor conference. We’re about to enter the beer-drinking portion of the day so I will sign off for today; I’m giving the keynote at the main camunda user conference tomorrow morning, and I’m not sure how much blogging I’ll do during or after that.

Disclaimer: camunda paid my travel expenses to be here today and tomorrow, and is providing a speaking fee for tomorrow’s keynote. I was not compensated for blogging, and the opinions here (and in my keynote) are my own.