Category Archives: cloud

OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of the success, their paper consumption has been reduced by 20%.

Majzoub continued with a discussion of their recent enhancements in their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits to an Office365 document by an external collaborator (or, presumably, edited using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so it worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although it has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.


I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!

Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest to Canadians, since most of the cloud service offerings that we see are in the US, and many companies are not comfortable with keeping their private data in a jurisdiction where it can be somewhat easily exposed to foreign government and intelligence agencies.

She used a building analogy to talk about cloud services:

  • Infrastructure as a service (IaaS) is like a piece of serviced land on which you need to build your own building and worry about your connections to services. If your water or electricity is off, you likely need to debug the problem yourself although if you find that the problem is with the underlying services, you can go back to the service provider.
  • Platform as a service (PaaS) is like a mobile home park, where you are responsible for your own dwelling but not for the services, and there are shared services used by all residents.
  • Software as a service (SaaS) is like a condo building, where you own your own part of it, but it’s within a shared environment. SaaS by Gartner’s definition is multi-tenant, and that’s the analogy: you are at the whim, to a certain extent, of the building management in terms of service availability, but at a greatly reduced cost.
  • Dedicated, hosted or managed is like a private house on serviced land, where everything in the house is up to you to maintain. In this set of analogies, I’m not sure that there is a lot of distinction between this and IaaS.
  • On-premises is like a cottage, where you probably need to deal with a lot of the services yourself, such as water and septic systems. You can bring in someone to help, but it’s ultimately all your responsibility.
  • Hybrid is a combo of things — cloud to cloud, cloud to on-premise — such as owning a condo and driving to a cottage, where you have different levels of service at each location but they share information.
  • Managed services is like having a property manager, although it can be cloud or on-premise, to augment your own efforts (or that of your staff).

Regardless of the platform, anything that touches the cloud is going to have a security consideration as well as performance/up-time SLAs if you want to consider it as part of your core business. From my experience, on-premise solutions can be just as insecure and unstable as any cloud offering, so it’s good to know what you’re comparing against when you are looking at cloud versus on-premise.

Most organizations require that their cloud provider have some level of certification: of the facility (data centre), platform (infrastructure) and service (application). Elwood talked about the cloud standards that impact these, including ISO 27001 and SOC 1, 2 and 3.

A big concern is around applications in the cloud, namely SaaS such as Box or Salesforce. Although IT will be focused on whether the security of that application can be breached, business and information managers need to be concerned about what type of data is being stored in those applications and whether it potentially violates any privacy regulations. Take a good look at those SaaS EULAs — Elwood took us through some Apple and Google examples — and have your lawyers look at them as well if you’re deploying these solutions within the enterprise.

You also need to look at data residency requirements (as I mentioned at the start): where the data resides, the sovereignty of the hosting company, the routing between you and the data even if the data resides in your own country, and the backup policies of the hosting company. The US Patriot Act allows the US government to access any data that passes through, is stored in, or is hosted by a company that is domiciled in the US; other countries are also adding similar laws. Although a company may have a data centre in your country, if they’re a US company, they probably have a default to store/process/backup in the US: check out the Microsoft hosting and data processing agreement, for example, which specifies that your data will be hosted and/or processed in the US unless you explicitly request otherwise. There’s an additional issue that even if your data has the appropriate residency, if an employee is travelling to a restricted country and accesses the data remotely, you may be violating privacy regulations; not all applications have the ability to filter otherwise authenticated access based on IP address. If you add this to the ability of foreign governments to demand device passwords in order to enter a country, the information accessible via an employee’s computer — not just the information stored on it — is at risk for exposure.
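As a rough illustration of the kind of residency-aware filtering that many applications lack, here’s a minimal sketch of an IP allowlist check on otherwise authenticated access; the network ranges and the decision logic are invented for illustration, not taken from any product:

```python
import ipaddress

# Hypothetical approved networks: only access originating from these
# ranges is allowed, regardless of whether the user is authenticated.
APPROVED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),  # e.g., corporate VPN egress
    ipaddress.ip_network("203.0.113.0/24"),   # e.g., in-country office range
]

def access_allowed(client_ip: str) -> bool:
    """Return True only if the client address falls in an approved network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in APPROVED_NETWORKS)

print(access_allowed("198.51.100.23"))  # True: inside the VPN egress range
print(access_allowed("192.0.2.77"))     # False: outside all approved ranges
```

A real deployment would put a check like this in front of the application as middleware, and would also have to consider VPNs and proxies that mask the true origin of the request.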

Elwood showed a map of the information governance laws and regulations around the world, and it’s a horrifying mix of acronyms for data protection and privacy rules, regulated records retention, eDiscovery requirements, information integrity and authenticity, and reporting obligations. There’s a new EU regulation — the General Data Protection Regulation (GDPR) — that is going to be a game-changer, harmonizing laws across all 28 member nations and applying to any data collected about an EU citizen even outside the EU. The GDPR includes increased consent standards, stronger individual data rights, stronger breach notification, increased governance obligation, stronger recordkeeping requirements, and data transfer constraints. Interestingly, Canada is recognized as one of the countries that is deemed to have “adequate protection” for data transfer, along with Andorra, Argentina, the Faroe Islands, the Channel Islands (Guernsey and Jersey), Isle of Man, Israel, New Zealand, Switzerland and Uruguay. In my opinion, many companies aren’t even aware of the GDPR, much less complying with it, and this is going to be a big wake-up call. Your compliance teams need to be aware of the global landscape as it impacts your data usage and applications, whether in the cloud or on premise; companies can receive huge fines (up to 4% of annual revenue) for violating GDPR whether they are the “owner” of the data or just a data processor/host.

OpenText has a lot of GDPR information on their website that is not specific to their products if you want to read more. 

There are a lot of benefits to cloud when it comes to information management, and a lot of things to consider: agility to grow and change quickly; a services approach that requires partnering with the service provider; mobility capabilities offered by cloud platforms that may not be available for on premise; and analytics offered by cloud vendors within and across applications.

She finished up with a discussion on the top areas of concerns for the attendees: security, regulations, GDPR, data sovereignty, consumer applications, and others. Great discussion amongst the attendees, many of whom work in the Canadian financial services industry: as expected, the biggest concerns are about data residency and sovereignty. GDPR is seen as having the potential to level the regulatory playing field by making everyone comply; once the data centres and service providers start to comply, it will be much easier for most organizations to outsource that piece of their compliance by moving to cloud services. I think that cloud service providers are already doing a better job at security and availability than most on-premise systems, so once they crack the data residency and sovereignty problem there is little reason to have a private data centre. IT’s concern has mostly been around security and availability, but now is the time for information and compliance managers to get involved to ensure that privacy regulations are supported by these platforms.

There are Canadian companies using cloud services, even the big banks and government, although I am guessing that it’s for peripheral rather than core services. Although some are doing this “accidentally” as the only way to share information with external participants, it’s likely time for many companies to revisit their information management strategies to see if they can be more inclusive of properly vetted cloud solutions.

We did get a very brief review of OpenText and their offerings at the end, including their software solutions and their EIM cloud offerings under the OpenText Cloud banner. They are holding their Enterprise World user conference in Toronto this July, which is the first (but likely not the last) big software company to see the benefits of a non-US North American conference location.

Pegaworld 2016 Day 1 Keynote: Pega direction, Philips and Allianz

It seems like I was just here in Vegas at the MGM Grand…oh, wait, I *was* just here. Well, I’m back for Pegaworld 2016, and 4,000 of us congregated in the Grand Garden Arena for the opening keynote on the first day. If you’re watching from home, or want to catch a replay, there is a live stream of the keynotes that will likely feature an on-demand replay at some point.

Alan Trefler, Pega’s CEO, kicked things off by pointing out the shift from a focus on technology to a focus on the customer. Surveys show that although most companies think that they understand their customers, the customers don’t agree; companies need to undergo a serious amount of digital transformation in order to provide the level of service that today’s customers need, while still improving efficiencies to support that experience. One key to this is a model-driven technology environment that incorporates insights and actions, allowing the next best action to be provided at any given point depending on the current context, while supporting organizational evolution to allow constant change to meet the future demands. Model-driven environments let you create applications that are future-proof, since it is relatively quick to make changes to the models without changing a lot of code. Pega has a lot of new online training at the Pega Academy, a marketplace of third-party Pega applications at the Pega Exchange, and the continuing support of their Pega Express easy-to-use modeler; they continue to work on breaking free from their tech-heavy past to support more agile digital transformation. Pega recently sponsored an Economist report on digital transformation; you can grab that here.

Don Schuerman, Pega’s CTO, took over as MC for the event to introduce the other keynote speakers, but first announced a new partnership with Philips that links Pega’s care management package with Philips’ HealthSuite informatics and cloud platform for home healthcare. Jeroen Tas, CEO of Connected Care & Health Informatics at Philips, presented more on this, specifically in the context of the inefficient and unevenly-distributed US healthcare system. He had a great chart that showed the drivers for healthcare transformation: from episodic to continuous, by orchestrating 24/7 care; from care provider to human-centric, by focusing on patient experience; from fragmented to connected, by connecting patients and caregivers; and from volume to value, by optimizing resources. Connected, personalized care links healthy living to disease prevention, and supports the proper diagnosis and treatment since healthcare providers all have access to a comprehensive set of the patient’s information. Lots of cool personal healthcare devices, such as ultrasound-as-a-service, where they will ship a device that can be plugged into a tablet to allow your GP to do scans that might normally be done by a specialist; continuous glucose meters and insulin regulation; and tools to monitor elderly patients’ medications. Care costs can be reduced by 26% and readmissions reduced by 52% through active monitoring in networked care delivery environments, such as by monitoring heart patients for precursors of a heart attack; this requires a combination of IoT, personal health data, data analytics and patient pathways provided by Philips and Pega. He ended by stating that it’s a great time to be in healthcare, and that there are huge benefits for patients as well as healthcare providers.

Although Tas didn’t discuss this aspect, there’s a huge amount of fear of connected healthcare information in user-pay healthcare systems: people are concerned that they will be refused coverage if their entire health history is known. Better informatics and analysis of healthcare information improves health and reduces overall healthcare costs, but it needs to be provided in an environment that doesn’t punish people for exposing their health data to everyone in the healthcare system.

We continued on the healthcare topic, moving to the insurance side with Birgit König, CEO of Allianz Health Germany. Since basic healthcare in Germany is provided by the state, health insurance is for additional services not covered by the basic plan, and for travelers while they are outside Germany. There is a lot of competition in the market, and customer experience for claims is becoming a competitive differentiator, especially with new younger customers. To accommodate this, Allianz is embracing a bimodal architecture approach, where back-end systems are maintained using traditional development techniques that focus on stability and risk, while front-end systems are more agile and innovative with shorter release cycles. I’ve just written a paper on bimodal IT and how it plays out in enterprises; it’s not published yet, but completely aligned with what König discussed. Allianz is using Pega for more agile analytics and decisioning at the front end of their processes, while keeping their back-end systems stable. Innovation and fast development have been greatly aided by co-locating their development and business teams, not surprisingly.

The keynote finished with Kerim Akgonul, Pega’s SVP of Products, for a high-level product update. He started by looking at the alignment between internal business goals and the customer journey, spanning marketing, sales, customer service and operations. The Pega Customer Decision Hub sits at the middle of these four areas, linking information so that, for example, offers sent to customers are based on their past orders.



  • Marketing: A recent Forrester report stated that Pega Marketing yields an 8x return on marketing investment (ROMI) due to the next-best-action strategies and other smart uses of analytics. Marketers don’t need to be data scientists to create intelligent campaigns based on historical and real-time data, and send those to a targeted list based on filters including geolocation. We saw this in action, with a campaign created in front of us to target Pegaworld attendees who were actually in the arena, then sent out to the recipients via the conference mobile app.
  • Sales: The engagement map in the Pega Sales Automation app uses the Customer Decision Hub information to provide guidance that links products to opportunities for salespeople; we saw how the mobile sales automation app makes this information available and recommends contacts and actions, such as a follow-up contact or training offer. There are also some nice tools such as capturing a business card using the mobile camera and importing the contact information, merging it if a similar record is found.
  • Customer service: The Pega customer service dashboard shows individual customer timelines, but the big customer service news in this keynote is the OpenSpan acquisition that provides robotic process automation (RPA) to improve customer service environments. OpenSpan can monitor desktop work as it is performed, and identify opportunities for RPA based on repetitive actions. The new automation is set up by recording the actions that would be done by a worker, such as copying and pasting information between systems. The example was an address change, where a CSR would take a call from a customer then have to update three different systems with the same information by copying and pasting between applications. We saw the address change being recorded, then played back on a new transaction; this was also included as an RPA step in a Pega Express model, although I’m not sure if that was just to document the process as opposed to any automation driven from the BPM side.
  • Operations: The Pega Field Service application provides information for remote workers doing field support calls, reducing the time required to complete the service while documenting the results and tracking the workers. We saw a short video of Xerox using this in Europe for their photocopier service calls: the field engineer sees the customer’s equipment list, the inventory that he has with him, and other local field engineers who might have different skills or inventory to assist with his call. Xerox has reduced their service call time, improved field engineer productivity, and increased customer satisfaction.

Good mix of vision, technology and customer case studies. Check out the replay when it’s available.

bpmNEXT 2016 demos: IBM, Orquestra, Trisotech and Lloyd Dugan

On the home stretch of the Wednesday agenda, with the day’s last session covering the final four demos.

BPM in the Cloud: Changing the Playing Field – Eric Herness, IBM

IBM Bluemix process-related cloud services, including cognitive services leveraging Watson. Claims process demo that starts by uploading an image of a vehicle and passing it to Watson image recognition for visual classification; returned values show confidence in the vehicle classification, such as “car”, and any results over 90% are sent to the Alchemy taxonomy service to align those — in the demo, Watson returned “cars” and “sedan” with more than 90% confidence, and the taxonomy service determined that sedan is a subset of cars. This allows routing of the claim to the correct process for the type of vehicle. If Watson has not been trained for the specific type of vehicle, the image classification won’t be determined with a sufficient level of confidence, and it will be passed to a work queue for manual classification. Unrecognized images can be used to extend the classifier, either as examples of an existing classification or as a new classification. Predictive models based on Spark machine learning and analytics of past cases create predictions of whether a claim should be approved, and the degree of confidence in that decision; at some point, as this confidence increases, some of the claims could be approved automatically. Good examples of how to incorporate cognitive computing to make business processes smarter, using cognitive services that could be called from any BPM system, or any other app that can call REST services.
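The confidence-threshold routing from the demo can be sketched in a few lines; everything here (the function name, the queue labels, the shape of the classifier output) is my own assumption rather than the actual Bluemix/Watson API:

```python
# Route a claim based on image-classification confidence scores:
# confident classifications go to a typed claims process, anything
# below threshold goes to a human work queue, mirroring the demo.
CONFIDENCE_THRESHOLD = 0.90

def route_claim(classifications: list) -> str:
    """Return the destination queue/process for a set of classifier results."""
    confident = [c for c in classifications if c["score"] >= CONFIDENCE_THRESHOLD]
    if not confident:
        # No confident classification: send to a manual work queue.
        return "manual-classification-queue"
    # Route on the highest-confidence label.
    best = max(confident, key=lambda c: c["score"])
    return f"claims-process:{best['class']}"

# Watson returned both "cars" and "sedan" above 90% in the demo:
print(route_claim([{"class": "cars", "score": 0.96},
                   {"class": "sedan", "score": 0.92}]))  # claims-process:cars
print(route_claim([{"class": "tricycle", "score": 0.41}]))  # manual queue
```

In the demo the taxonomy service, not the raw score, resolved “sedan” to a subset of “cars”; the highest-score pick above is a simplification of that step.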

Model, Generate, Compile in the Cloud and Deploy Ready-To-Use Mobile Process Apps – Rafael Bortolini and Leonardo Luzzatto, CRYO/Orquestra

Demo of Orquestra BPMS implementation for Rio de Janeiro’s municipal processes, e.g., business license requests. From a standard worklist style of process management, generate a process app for a mobile platform: specify app name and logo, select app functionality based on templates, then preview it and compile for iOS or Android. The .ipa or .apk files are generated ready for uploading to the Apple or Google app stores, although that upload can’t be automated. Full functionality to allow a mobile user to sign up or log in, then access the functionality defined for the app to request a business license. Although an app is generated, the data entry forms are responsive HTML5 to be identical to the desktop version. Very quick implementation of a mobile app from an existing process application without having to learn the Orquestra APIs or even do any real mobile development, but it can also produce the source code in case this is just wanted as a quick starting point for a mobile development project.

Dynamic Validation of Integrated BPMN, CMMN and DMN – Denis Gagné, Trisotech

Kommunicator tool, based on their model animation technology, which allows tracing the animation directly from a case step in the BPMN model to the CMMN model, or from a decision step to the DMN model. Also links to the semantic layer, such as the Sparx SOA architecture model or other enterprise architecture reference models. This allows manually stepping through an entire business model in order to learn and communicate the procedures, and to validate the dynamic behavior of the model against the business case. Stepping through a CMMN model requires selecting the ad hoc tasks as the case worker would in order to step through the tasks and see the results; there are many different flow patterns that can emerge depending on the tasks selected and the order of selection, and stages will appear as being eligible to close only when the required tasks have been completed. Stepping through a DMN model allows selecting the input parameters in a decision table and running the decision to see the behavior. Their underlying semantic graph shows the interconnectivity of all of the models, as well as goals and other business information.

Simplified CMMN – Lloyd Dugan

Last up is not a demo (by design), but a proposal for a simplified version of CMMN, starting with a discussion of BPMN’s limitations in case management modeling: primarily that BPMN treats activities but not events as first-class citizens, making it difficult to model event-driven cases. This creates challenges for event subprocesses, event-driven process flow and ad hoc subprocesses, which rely on “exotic” and rarely used BPMN structures and events that many BPMN vendors don’t even support. Moving a business case – such as an insurance claim – to a CMMN model makes it much clearer and easier to model; the more unstructured the situation is, the harder it is to capture in BPMN, and the easier it is to capture in CMMN. The proposal for simplifying CMMN for use by business analysts includes removing PlanFragment and removing all notational attributes (AutoComplete, Manual Activation, Required, Repetition) that are really execution-oriented logic. This leaves the core set of elements plus the related decorators. I’m not enough of a CMMN expert to know if this makes complete sense, but it seems similar in nature to the subsets of BPMN commonly used by business analysts rather than the full palette.

IBM ECM and Cloud

I’m at the IBM Content 2015 road show mini-conference in Toronto today, and sat in on a session with Mike Winter (who I know from my long-ago days at FileNet prior to its acquisition by IBM) discussing ECM in the cloud.

The content at the conference so far has been really lightweight: I think that IBM sees this more as a pre-sales prospecting presentation than an actual informational conference for existing customers. Although there is definitely a place for the former, it should not necessarily be mixed with the latter; it just frustrates knowledgeable customers who were really looking for more product detail and maybe some customer presentations.

ECM in the cloud has a lot of advantages, such as being able to access content on mobile devices and share with external parties, but also has a lot of challenges in terms of security — or, at least, perceived security — when implementing in larger enterprise environments. IBM ECM has a very robust and granular security/auditing model that was already in place for on-premise capabilities; they’re working to bring that same level of security and auditing to hybrid and cloud implementations. They are using the CMIS content management standard as the API into their Navigator service for cloud implementation: their enhanced version of CMIS provides cloud access to their repositories. The typical use case is for a cloud application to access an ECM repository that is either on premise or in IBM’s SoftLayer managed hosting in a sync-and-share scenario; arguably, this is not self-provisioned ECM in the cloud as you would see from cloud ECM vendors such as Box, although they are getting closer to it with per-user subscription pricing. This is being rolled out under the Navigator brand, which is a bit confusing since Navigator is also the term used for the desktop UI. There was a good discussion on user authentication for hybrid scenarios: basically, IBM replicates the customers’ LDAP on a regular basis, and is moving to do the same via a SAML service in the future.
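To give a sense of what that standards-based access looks like, here’s a sketch of building a CMIS 1.1 browser-binding query URL; the endpoint URL is hypothetical, while the `cmisselector` and `q` parameters come from the CMIS browser binding itself:

```python
from urllib.parse import urlencode

# Hypothetical repository endpoint; a real deployment would expose its
# own browser-binding URL for the repository.
BASE_URL = "https://example-host/navigator/cmis/repo1"

def build_query_url(cmis_query: str) -> str:
    """Build a CMIS 1.1 browser-binding query URL for a CMIS SQL statement."""
    params = urlencode({"cmisselector": "query", "q": cmis_query})
    return f"{BASE_URL}?{params}"

# Find documents whose names start with "claim":
url = build_query_url(
    "SELECT cmis:name FROM cmis:document WHERE cmis:name LIKE 'claim%'"
)
print(url)
```

An HTTP GET on a URL like this (with the repository’s authentication) returns the matching objects as JSON, which is what makes the same repository usable from cloud, mobile and on-premise clients.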

Winter gave us a quick demo of the cloud (hosted) Navigator running against a repository in Amsterdam: adding a document, adding tags (metadata) and comments, viewing via an HTML5 viewer that includes annotations, and more. Basically, a nice web-based UI on an IBM ECM repository, with most of the rich functionality exposed. It’s quick to create a shared teamspace and add documents for collaboration, and create simple review workflows. He’s a tech guy, so didn’t know the SLA or the pricing, although he did know that the pricing is tiered.

bpmNEXT 2015 Day 2 Demos: BP-3, Oracle

We’re finishing up this full day of demos with a mixed bag of BPM application development topics, from integration and customization that aims to have no code, to embracing and measuring code complexity, to cloud BPM services.

Towards Zero Coding

Tim Stephenson discussed how extremely low-code solutions could be used to automate marketing processes, in place of using more costly marketing automation solutions. Their solution integrates workflow and decisioning with WordPress using JavaScript libraries, with detailed tracking and attribution, by providing forms, tasks, decision tables, business processes and customer management. He demonstrated an actual client solution, with custom forms created in WordPress, then referenced in a WordPress page (or post) that is used as the launch page for an email campaign. Customer information can be captured directly in their solution, or interfaced to another CRM such as Sugar or Salesforce. Marketers interact with a custom dashboard that allows them to define tasks, workflows, decisions and customer information that drive the campaigns; Tim sees the decision tables as a key interface for marketers to create the decision points in a campaign based on business terms, using a format that is similar to an Excel spreadsheet that they might now be using to track campaign rules.
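As a rough illustration of the first-match-wins evaluation behind a spreadsheet-style decision table like the one marketers would use here, a minimal sketch; the columns, rules and outcomes are invented, not taken from the product:

```python
# Each row is a rule: conditions on the left, outcome on the right,
# evaluated top to bottom with the first matching row winning -- the
# same mental model as rows in an Excel campaign-rules spreadsheet.
DECISION_TABLE = [
    # (segment,    min_visits, outcome)
    ("prospect",   3,          "send-demo-offer"),
    ("prospect",   0,          "send-newsletter"),
    ("customer",   5,          "send-upsell-offer"),
    ("customer",   0,          "send-thankyou"),
]

def decide(segment: str, visits: int) -> str:
    """Return the outcome of the first matching rule, or a default."""
    for row_segment, min_visits, outcome in DECISION_TABLE:
        if segment == row_segment and visits >= min_visits:
            return outcome
    return "no-action"

print(decide("prospect", 4))  # send-demo-offer
print(decide("customer", 2))  # send-thankyou
```

The appeal for marketers is that the table, not code, is the artifact they edit: adding a campaign rule means adding a row.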

BP-3: Sleep at Night Again: Automated Code Analysis

Scott Francis and Ivan Kornienko presented their new code analysis tool, Neches, which applies a set of rules, drawn from best practices and anti-patterns accumulated over their years of development experience, to identify code and configuration issues in IBM BPM implementations that could adversely impact performance and maintainability. They propose that proper code reviews — including Neches reviews — at the end of each iteration of development can find design flaws as well as implementation flaws. Neches is a SaaS cloud tool that analyzes uploads of snapshots exported from the IBM BPM design environment; it scores each application based on complexity, which is compared to the aggregate of other applications analyzed, and can visualize the complexity score over time compared to found, resolved and fixed issues. The findings are organized by category, and you can drill into the categories to see the specific rules that have been triggered, such as UI page complexity or JavaScript block length, which can indicate potential problems with the code. The specific rules are categorized by severity, so that the most critical violations can be addressed immediately, while less critical ones are considered for future refactoring. Specific unused services, such as test harnesses, can be excluded from the complexity score calculation. It’s an interesting tool for training new IBM BPM developers as well as for reviewing the code quality and maintainability of existing projects, leveraging the experience of BP-3 and Lombardi/IBM developers as well as general best coding practices.
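A severity-weighted scoring scheme of the kind described can be sketched in a few lines; the rule names, severities and weights here are invented for illustration, not Neches’ actual rules:

```python
# Each triggered rule contributes to an overall score by severity, and
# the most critical findings are surfaced first for immediate attention.
SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1}

def score_findings(findings: list) -> tuple:
    """Return (total score, rule names ordered most-severe first)."""
    total = sum(SEVERITY_WEIGHT[f["severity"]] for f in findings)
    ordered = sorted(findings, key=lambda f: -SEVERITY_WEIGHT[f["severity"]])
    return total, [f["rule"] for f in ordered]

findings = [
    {"rule": "javascript-block-length", "severity": "major"},
    {"rule": "ui-page-complexity", "severity": "critical"},
    {"rule": "unused-variable", "severity": "minor"},
]
total, ordered = score_findings(findings)
print(total)       # 16
print(ordered[0])  # ui-page-complexity
```

Tracking that total over successive snapshots is what gives the trend view: the score should fall as issues are resolved between iterations.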

Oracle: Rapid Process Excellence with BPM in the Public Cloud

Linus Chow presented Oracle’s public cloud BPM service for developing both processes and rules, deployable in a web workspace or via mobile apps. He demonstrated an approval workflow, showing the portal interface, a monitoring view overlaid on the process model, and a mobile view that can include offline mode. The process designer is fully web-based, including forms and rules design; there are also web-based administration and deployment capabilities. This is Oracle’s first cloud BPM release and looks pretty full-featured in terms of human workflow; it’s a lightweight, public cloud refactoring of their existing Oracle BPM on-premise solution, but doesn’t include the business architecture or SOA functionality at this time.

Great day of demos, and lots of amazing conversations at the breaks. We’re all off to enjoy a free night in Santa Barbara before returning for a final morning of five more demos tomorrow.

bpmNEXT 2014 Wednesday Morning: Cloud, Synthetic APIs and Models

I’m not going to say anything about last night, but it’s a bit of a subdued crowd here this morning at bpmNEXT. Smile

We started the day with Tom Baeyens of Effektif talking about cloud workflow simplified. I reviewed Effektif in January at the time of launch and liked the simple and accessible capabilities that it offers; Tom’s premise is that BPM is just as useful as email, and it needs to be just as simple to use as email so that we are not reliant on a handful of power users inside an organization to make it work. To do this, we need to strip out features rather than add features, and reduce the barriers to trying it out by offering it in the cloud. Inspired by Trello (simple task management) and IFTTT (simple cloud integration, which basically boils down every process to a trigger and an action), Effektif brings personal DIY workflow to the enterprise while also providing a bridge to enterprise process management through layers of functionality. Individual users can get started building their own simple workflows to automate their day-to-day tasks, then more technical resources can add functionality to turn these into fully-integrated business processes. Tom gave a demo of Effektif, starting with creating a checklist of items to be completed, with the ability to add comments, include participants and add attachments to the case. There have been a few changes since my review: you can use Markdown to format comments (I think that understanding of Markdown is very uncommon in business, and it may not be as well-adopted as, for example, a TinyMCE formatted text field); cases can now be started by a form as well as manually or via email; and Google Drive support is emerging to support usage patterns such as storing an email attachment when the email is used to instantiate the case. He also talked about some roadmap items, such as migrating case instances from one version of a process definition to another.

Next up was Stefan Andreasen of Kapow (now part of Kofax) on the automation of manual processes with synthetic APIs; I’m happy for the opportunity to see this because I missed seeing anything about Kapow during my too-short trip to the Kofax Transform conference a couple of weeks ago. He walked through a scenario of a Ferrari dealership that looks up SEC filings to see who sold their stock options lately (hence has some ready cash to spend on a Ferrari), narrows that down with Bloomberg data on age, salary and region to find some pre-qualified sales leads, then loads them into Salesforce. Manually, this would be an overwhelming task, but Kapow can create synthetic APIs on top of each of these sites/apps to allow for data extraction and manipulation, then run those on a pre-set schedule. He started with a “Kapplet” (an application for business analysts) that extracts the SEC filing data, allows easy manual filtering by criteria such as filing amount and age, then lets the user select records for committal to Salesforce. The idea is that there are data sources out there that people don’t think of as data sources, and many web applications that don’t easily integrate with each other, so people end up manually copying and pasting (or re-keying) information from one screen to another; Kapow provides the modern-day equivalent of screen-scraping that taps into the presentation logic and data (not the physical layout or style, hence less likely to break when the website changes) of any web app to add an API using a visual flow/rules editor. Building by example, elements on a web page are visually tagged as being list items (requiring a loop), data elements to extract, and much more.
It can automate a number of other things as well: Stefan showed how a local directory of cryptically-named files can be renamed to their actual titles based on a table of contents HTML document; this is very common for conference proceedings, and I have hundreds of file sets like this that I would love to rename. The synthetic APIs are exposed as REST services, and can be bundled into Kapplets so that the functionality is exposed through an application that is usable by non-technical users. Just as Tom Baeyens talked about lowering the barriers for BPM inside enterprises in the previous demo, Kapow is lowering the bar for application integration to serve unmet needs.

It would be great if Tom and Stefan put their heads together at lunch and whipped up an Effektif-Kapow demo; it seems like a natural fit.

Next was Scott Menter of BP Logix on a successor to flowcharts, namely their Process Director GANTT chart-style process interface – he said that he felt like he was talking about German Shepherds to a conference of cat-lovers – as a different way to represent processes that is less complex to build and modify than a flow diagram, and that also provides better information on temporal aspects and dependencies, such as when a process will complete and the impacts of delays. Rather than a “successor” model such as a flow chart, which models what comes after what, a GANTT chart is a “predecessor” model, which models the preconditions for each task: a subtle but important difference when the temporal dependencies are critical. Although you could map between the two model types on some level, BP Logix has a completely different model designer and execution engine, optimized for a process timeline. One cool thing about it is that it incorporates past experience: the time required to do a task in the past is overlaid on the process timeline, and predictions are made for how well this process is doing based on current instance performance and past performance, including for tasks that are far in the future. In other words, predictive analytics are baked right into each process since it is a temporal model, not an add-on such as you would have in a process based on a flow model.

For the last demo of this session, Jean-Loup Comeliau of W4 presented their BPMN+ product, which provides model-driven development using BPMN 2, UML 2, CMIS and other standards to generate web-based process applications without generating code: the engine interprets and executes the models directly. The BPMN modeling is pretty standard compared to other process modeling tools, but they also allow UML modeling of the data objects within the process model; I see this in more complete stack tools such as TIBCO’s, but it is less common from the smaller BPM vendors. Resources can be assigned to user tasks using various rules, and user interface forms are generated based on the activities and data models, and can be modified if required. The entire application is deployed as a web application. The data-centricity is key: if the models change, the interface and application will automatically update to match. There is definitely a strong message here on the role of standards, and how we need more than just BPMN if we’re going to have fully model-driven application development.

We’re taking a break, and will be back for the Model Interchange Working Group demonstration with participants from around the world.

Effektif: Simple BPM In The Cloud

Ten months ago, Tom Baeyens (creator of jBPM and Activiti) briefed me on a new project that he was working on: Effektif, a cloud-based BPM service that seeks to bridge the gap between simple collaborative task lists and complex IT-driven BPMS. In October, he gave me a demo of the private beta version, with some discussion of what was coming up, and last week he demonstrated the public version that was launched today. With Cabaret-inspired graphics on the landing page and a name spelling that could only have been dreamed up by a Belgian influenced by Germans 😉 the site has a minimalistic classiness but packs a lot of functionality into this first version.

We talked about his design inspirations: IFTTT and Zapier, which handle data mappings transparently and perform the simplest form of integration workflow; Box and Dropbox, which provide easy content sharing; Trello and Asana, which enable micro-collaboration around individual tasks; and Wufoo, which allows anyone to build online forms. As IFTTT has demonstrated, smaller-grained services and APIs are available from a number of cloud services to more easily enable integration. If you bring together ideas about workflow, ad hoc tasks, collaboration, content, forms and integration, you have the core of a BPMS; if you’re inspired by innovative startups that specialize in each of those, you have the foundation for a new generation of cloud BPM. All of this with a relatively small seed investment by Signavio and a very lean development team.

One design goal of Effektif is based on a 5-minute promise: users should be able to have a successful experience within 5 minutes. This is achievable, considering that the simplest thing that you can do in Effektif is create a shared task list, which is no more complex than typing in the steps and (optionally) adding participants to the list or individual tasks. However, rather than competing with popular shared task list services such as Trello and Asana, Effektif allows you to take that task list and grow it into something much more powerful: a reusable process template with BPMN flow control, multiple start event types, and automated script tasks that allow integration with common cloud services. Non-technical users that want to just create and reuse task lists never need to go beyond that paradigm or see a single BPMN diagram, since the functionality is revealed as you move from tasks to processes, but technical people can create more complex flows and add automated tasks.

Within the Effektif interface, there are two main tabs: Tasks and Processes. Tasks is for one-off collaborative task lists, whereas Processes allows you to create a process, which may be a reusable task list or a full BPMN model.

Within Tasks:

  • The Tasks interface is a simple list of tasks, with a default filter of “Assigned to me”. The user can also select “I’m a candidate”, “Unassigned” or “Assigned to others” as task filters.
  • Each task is assigned to the creator by default, but can be assigned to another user or have other users added as participants, which will cause the task to appear on their task lists.
  • Each task can have a description, and can have documents attached to it at any point by any participant, either through uploading or via URL. Since any URL can be added, this doesn’t have to be a “document” per se, but any link or reference. Eventually, there will be direct integration with Google Drive and Box for attachments, but for the next month or two, you have to copy and paste the URL. Although you can upload documents as attachments, this really isn’t meant as a document repository, and the intention is that most documents will reside in an external ECM (cloud or on-premise).
  • Each task can have subtasks, created by any participants; each of those subtasks is the same as a task, that is, it can have a description, documents and subtasks, but is nested as part of the parent task.
  • Any participant can add comments to a task or subtask, which appear in the activity stream alongside the task list but only in context: that is, a comment added to a subtask will only appear when that subtask is selected. Other actions, such as task creation and completion, are also shown in the activity stream.
  • When the subtask assignee checks Done to complete the subtask, they are prompted with the remaining subtasks in that task that are assigned to them. This does not happen when completing a top-level task, which seemed a bit inconsistent, but I probably need to play around with this functionality a bit more. Looking at how process instances are handled, a task is likely executed as a process instance with its subtasks as activities within that instance, but that distinction probably isn’t clear to (or cared about by) a non-technical user.

Within Processes, the basic process creation looks very much like creating a task list in Tasks, except that you’re creating a reusable process template rather than a one-off task list. In its simplest form, a process is defined as a set of tasks, and a process instance is executed in the same way as a task with the process activities as subtasks. When defining a new process:

  • Each process has a name. By default, instances of this process will use the same name followed by a unique number.
  • Each process has a trigger, either manually in Effektif using the Start Process button, or by email to a unique email address generated for that process template.
  • The activities in the process are initially defined as a task list, where each is either a User Task or Script Task.
  • Each user task can have a description and be assigned to a user, similar to in the Tasks tab, but can also have a form created for that activity that includes text fields, checkboxes and drop-down selection lists. A key functionality with forms is that defining the form fields at any activity within a process creates process instance variables that can be reused at other activities in the process, including within scripts. In other words, you create the process data model implicitly by designing the UI form.
  • Each script task allows you to write JavaScript code that will be executed in a secure Node.js environment. Some samples are provided, plus field mapping for mapping instance variables to JavaScript variables, and an inline test environment.
  • Optionally, the activities can be viewed as a BPMN process flow using an embedded, simplified version of the Signavio modeler: the list of tasks is just converted to process activities, and you can then draw connectors between them to define serial logic. XOR gateways can also be added, which automatically adds buttons to the previous activity to select the outbound pathway from the gateway. You can switch between the Activities (task list) and Process Flow (BPMN) views, creating tasks in either view, although I was able to cause some weird behaviors by doing that – my Secret Superpower is breaking other people’s code.
  • The process is published in order to allow process instances to be started from it.
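To make the script task and field mapping concrete, a hypothetical example might look like the snippet below; in Effektif the field mapping would supply the input values from process instance variables (created by form fields on earlier activities) rather than the literals shown, and all the variable names here are invented.

```javascript
// Hypothetical script-task body. In the real product, the field mapping
// would bind orderTotal and customerTier from process instance variables;
// literals are used here so the sketch is self-contained.
var orderTotal = 1200;      // would be mapped from a form field
var customerTier = "gold";  // would be mapped from a form field

var discount = (customerTier === "gold" || orderTotal > 1000) ? orderTotal * 0.1 : 0;
var discountedTotal = orderTotal - discount; // would be mapped back to the instance
console.log(discountedTotal); // 1080
```

Because form fields implicitly define the instance variables, a script like this can consume data captured on any earlier activity without a separate data-modeling step.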

To create a simple reusable task list template, you just give it a name, enter the activities as a list, and publish. If you want to enhance it later with triggers, forms and script tasks, you can come back and edit it later, and republish.

When running a process instance:

  • The process is started either by an email or manual trigger, which then creates a task in the assigned user’s task list for the process instance, containing the activities as subtasks. If no process flow was defined, then all activities appear as subtasks; if a flow was defined, then only the next available one is visible.
  • As with the ad hoc tasks, participants can create new subtasks for this process instance or its activities at execution time.
  • If gateways were added, then buttons will appear at the step prior to the gateway prompting which path to follow out of the gateway. I’m not sure what happens if the step prior is a script task, e.g., a call to a rules engine to provide the flow logic.

As I played around with Effektif, the similarities and differences between tasks, processes (templates) and process instances started to become more clear, but that’s definitely not part of the 5-minute promise.

I’m not sure of the roadmap for tenanting within the cloud environment and sharing of information; currently they are using a NoSQL database with shards by tenant to avoid bottlenecks, but it’s not clear how a “tenant” is defined or the scope of shared process templates and instances.

Other things on the roadmap:

  • Importing and exporting process models from the full Signavio modeler, or from other BPMN 2.0-compliant modelers, although only a small subset of activity types are supported: start, end, user task, script task, XOR gateway, plus an implied AND gateway by defining multiple paths out of a task.
  • Additional start event types, e.g., embeddable form, triggers from ECM systems such as triggering a workflow when a document is added to a folder.
  • Google Drive/Box integration for content.
  • Salesforce integration for content and triggers.
  • Common process patterns built in as wizards/templates, allowing users to deploy with simple parameterization (and learn BPMN at the same time).

Effektif is not targeting any particular industry verticals, but is positioned as a company-wide BPM platform for small companies, or as a departmental/team solution for support processes within larger companies. A good example of this is software development: both the Effektif and Signavio teams are using it for managing some aspects of their software development, release and support processes.

There will be three product editions, available directly on the website or (for the Enterprise version) through the Signavio sales force:

  • Collaborate, providing shared task list functionality and document sharing. Free for all users.
  • Team Workflow, adding process flows (BPMN modeling) and connectors to Google Drive and a few other common cloud services. The first five users are free, with payment required for more than five.
  • Enterprise Process Management, adding advanced integration including with on-premise systems such as SAP and Oracle, plus analytics. That will be a paid offering for all users, and likely significantly more than the Team Workflow edition due to the increased functionality.

I don’t know the final pricing, since the full functionality isn’t there yet: Box, Google Drive and Salesforce integration will be released in the next month or two (currently, you still need to copy and paste the URL of a document or reference into Effektif, and those systems can’t yet automatically trigger a workflow), and the enterprise integration and analytics will be coming later this year.

Go ahead and sign up: it only takes a minute and doesn’t require any information except your name and email address. If you want to hear more about Effektif, they are holding webinars on February 3rd (English) and 6th (German).

Q&A With Vishal Sikka @vsikka

Summary of this morning’s keynote (replay available online within 24 hours):

  • Have seen the “HANA effect” over the past 2.5 years, and see HANA as being not just a commercial success for SAP but a change in the landscape for enterprise customers. A key technology to help people do more.
  • Partnerships with SAS, Amazon, Intel, Cisco, Cloud Foundry.
  • Enterprise cloud and cloud applications.
  • SuccessFactors Learning products.
  • 1000 startup companies developing products on HANA.
  • Team of 3 teenagers using HANA Cloud and Lego to build shelf-stacking robots.

Vishal Sikka keynote

Q&A with audience (in person and online):

  • SAP has always had an excellent technology platform for building applications, used to build their core enterprise applications. HANA is the latest incarnation of that platform, and one that they are now choosing to monetize directly as an application development platform rather than only through the applications. HANA Enterprise Cloud and HANA Cloud Platform are enterprise-strength managed cloud versions of HANA, and HANA One uses AWS for a lower entry point; they’re the same platform as on-premise HANA for cloud or hybrid delivery models. I had a briefing yesterday with Steve Lucas and others from the Platform Solution Group, which covers all of the software tools that can be used to build applications, but not the applications themselves: mobile, analytics, database and technology (middleware), big data, and partners and customers. PSG now generates about half of SAP revenue through a specialist sales force that augments the standard sales force; although obviously selling platforms is more of an IT sell, they are pushing to talk more about the business benefits and verticals that can be built on the platform. In some cases, HANA is being used purely as an application development platform, with little or no data storage.
  • Clarification on HANA Cloud: HANA Enterprise Cloud is the cloud deployment of their business applications, whereas HANA Cloud Platform is the cloud version of HANA for developing applications.
  • SAP is all about innovation and looking forward, not just consolidating their acquisitions.
  • Examples of how SAP is helping their partners to move into their newer innovation solutions: Accenture has a large SuccessFactors practice, for example. I think that the many midrange SIs who have SAP ERP customization as their bread and butter may find it a bit more of a challenge.
  • Mobile has become a de facto part of their work, hence has a lower profile in the keynotes: it is just assumed to be there. I, for one, welcome this: mobile is a platform that needs to be supported, but let’s just get to the point where we don’t need to talk about it any more. Fiori provides mobile and desktop support for the new UI paradigms.

As with the keynote, too much information to capture live. This session was recorded, and will be available online.

SAP TechEd Day 1 Keynote With @vsikka

Vishal Sikka – head technology geek at SAP – started us off at TechEd with a keynote on the theme of how great technology always serves to augment and amplify us. He discussed examples such as the printing press, Nordic skis and the Rosetta Stone, and ended up with HANA (of course) and how a massively parallel, in-memory columnar database with built-in application services provides a platform for empowering people. All of SAP’s business applications – ERP, CRM, procurement, HR and others – are available on or moving to HANA, stripping out the complexity of the underlying databases and infrastructure without changing the business system functionality. The “HANA effect” also allows for new applications to be built on the platform with much less infrastructure work through the use of the application services built into HANA.

He also discussed their Fiori user interface paradigm and platform which can be used to create better UX on top of the existing ERP, CRM, procurement, HR and other business applications that have formed the core of their business. Sikka drew the architecture as he went along, which was a bit of fun:

SAP architecture with HANA and Fiori

He was joined live from Germany by Franz Faerber, who heads up HANA development, who discussed some of the advances in HANA and what is coming next month in version SP7, then Sam Yen joined on stage to demonstrate the HANA developer experience, the Operational Intelligence dashboard that was shown at SAPPHIRE earlier this year as in use at DHL for tracking KPIs in real time, and the HANA Cloud platform developer tools for SuccessFactors. We heard about SAS running on HANA for serious data scientists, HANA on AWS, HANA and Hadoop, and much more.

There’s a lot of information being pushed out in the keynote: even if you’re not here, you can watch the keynotes live (and probably watch them recorded after the fact), and there will be some new information coming out at TechEd in Bangalore in six weeks. The Twitter stream is going by too fast to read, with lots of good insights in there, too.

Bernd Leukert came to the stage to highlight how SAP is running their own systems on HANA, and to talk more about building applications, focusing on Fiori for mobile and desktop user interfaces: not just a beautification of the existing screens, but new UX paradigms. Some of the examples that we saw are very tile-based (think Windows 8), but also things like fact sheets for business objects within SAP enterprise systems. He summed up by stating that HANA is for all types of businesses due to a range of platform offerings; my comments on Hasso Plattner’s keynote from SAPPHIRE earlier this year called it the new mainframe (in a good way). We also heard from Dmitri Krakovsky from the SuccessFactors team, and from Nayaki Nayyar about iFlows for connecting cloud solutions.

TechEd is inherently less sales and more education than their SAPPHIRE conference, but there’s a strong sense of selling the concepts of the new technologies to their existing customer and partner base here. At the heart of it, HANA (including HANA cloud) and Fiori are major technology platform refreshes, and the big question is how difficult – and expensive – it will be for an existing SAP customer to migrate to the new platforms. Many SAP implementations, especially the core business suite ERP, are highly customized; this is not a simple matter of upgrading a product and retraining users on new features: it’s a serious refactoring effort. However, it’s more than just a platform upgrade: having vastly faster business systems can radically change how businesses work, since “reporting” is replaced by near-realtime analytics that provide transparency and responsiveness; it also simplifies life for IT due to footprint reduction, new development paradigms and cloud support.

We finished up 30 minutes late and with my brain exploding from all the information. It will definitely take the next two days to absorb all of this and drill down into my points of interest.

Disclosure: SAP is a customer, and they paid my travel expenses to be at this conference. However, what I write here is my own opinion and I have not been financially compensated for it.