All posts by sandy

Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so this might be my last post from here if I skip out on the one session that I have bookmarked for late this afternoon — and I’m sitting in on Mike Kremer of OpenText and Kiran Thakrar of SoluSoft showing SoluSoft’s Active Client Management for Insurance, built on OpenText’s Process Suite and case management. SoluSoft originally built this capability on earlier OpenText products (Global 360) but has moved to the new low-code platform. Their app can be used out of the box, or can be configured to suit a particular environment.

The goal of Active Client Management for Insurance is to provide a 360-degree view of the client, including data from a variety of sources (typically systems of record for policy administration or claims), content from any repository, open tasks and pending actions, checklists and ad hoc notes. It covers the entire customer lifecycle, from onboarding and underwriting, through policy administration and claims; basically, it’s user work management and CRM in one.

The solution is built on the core of Process Suite, using the full entity-modeling, AppWorks-style low-code development environment. It also includes process intelligence for analytics, Capture Center for document capture, and StreamServe for customer communication management. Above all of these OpenText building blocks, SoluSoft has built a client management solution accelerator that (I believe) they can use for a variety of vertical applications; below the OpenText layer is a service bus integration to line of business systems. For insurance, they’ve created a number of business processes and request types corresponding to different parts of the business, such as processing a new application, amending a policy, or initiating a claim; each of these can be configured for the specific customer’s terminology, or disabled if a customer doesn’t require specific functions. It’s not completely clear, however, how much of the functionality of other insurance systems might be replaced by this rather than augmented: clearly, the core policy administration system stays as the system of record, but an underwriting or claims system workflow might be replaced by this functionality. Having done this a few times with clients that use systems such as Guidewire, I have to say that deciding what parts of the flow happen where, and how to properly interact with other systems, is a non-trivial architectural exercise.

At the heart is a generic capture-driven workflow: scan, capture, index, data entry, process, approve, review, fulfill. The names of these can be aliased for different vertical applications — their example is that “processing” becomes “underwriting” — and steps can be skipped for a specific request type. Actions that can be performed at any of these work steps are configured using checklists. Ad hoc processes can be attached to steps in this master flow, either a single-step task or a more complex flow, and be executed before, after or in parallel to the pre-defined work step. Ad hoc processes can be created at runtime, and secondary request processes created for certain case types. The ability to make any of these configuration changes is restricted by role security. Relationships between clients, policies, brokers, claims, etc. are managed using folders for customers, policies and advisers, driven by entity modeling in Process Suite (AppWorks Low Code); this ability to establish relationships between all of these types of entities is critical for providing the complete view of the customer. They also have integrated iHub analytics for showing case statistics and workload analysis, as well as more complex analysis of risk or profitability for specific customer groups or policy types.
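The aliasing and step-skipping described above can be sketched as a simple configuration structure. This is purely illustrative — the step names come from the session, but the request types, configuration format and function are my own invention, not SoluSoft's actual implementation:

```python
# Hypothetical sketch of a configurable master flow with per-request-type
# step aliasing and skipping; names and structure are illustrative only.

MASTER_FLOW = ["scan", "capture", "index", "data entry",
               "process", "approve", "review", "fulfill"]

# Per-request-type configuration: rename steps and skip others.
REQUEST_TYPES = {
    "new_application": {
        "aliases": {"process": "underwriting"},  # their example alias
        "skip": [],
    },
    "policy_amendment": {
        "aliases": {},
        "skip": ["approve"],  # e.g., low-risk amendments skip approval
    },
}

def flow_for(request_type):
    """Return the effective step sequence for a given request type."""
    cfg = REQUEST_TYPES[request_type]
    return [cfg["aliases"].get(step, step)
            for step in MASTER_FLOW if step not in cfg["skip"]]
```

In the real product, of course, this kind of configuration is done through the low-code environment rather than in code, and is gated by role-based security.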

 

Although SoluSoft built some of this in custom code, a lot of the application is built directly in the OpenText low-code development environment provided by Process Suite. This means that it’s fast to configure or even do some basic customizations, with the caveats that I mentioned earlier about deciding where some parts of the workflow should happen when you have existing LOB systems that already do that. It also provides them with native mobile support through AppWorks, rather than having to build a separate mobile application.

We saw the version focused on insurance, but they also have flavors for pensions, financial services, government, healthcare and education. However, it appears that there is an existing legacy of the Global 360-based application, and it’s not clear how long it will take for this new AppWorks version to make its way into the market.

Getting started with OpenText case management

I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by defining case management:

  • Management of dynamic, unstructured processes
  • Processes are driven by events or human interactions to support faster, more accurate decisions
  • Decisions are tied to content and the case directs that content to the right conclusion

In their terms, a case is a transaction that is “opened” and “closed” over a period of time: resolve a problem, settle a claim, or fulfill a request. There may be many different types of participants required to complete the case, and a variety of content and data involved.

Similar to the approach of other vendors, OpenText equates “case management” with “vertical application development” to a certain extent, and getting to case handling quickly needs a blueprint to quick-start solution development. To that end, they provide an accelerator as part of Process Suite that includes a pre-defined case model and entities to provide a starting point for developing a case management application, particularly incident management or service requests. Essentially, it’s a sample app/template, albeit a well-structured one that can easily be modified for actual solutions; they have no illusions that this is going to be an out-of-the-box solution for anyone, but rather a guide for people creating new case management applications so that they don’t need to start from scratch.

If you refer back to the more complete description of AppWorks Low Code that I gave in the previous post, they have defined entities, forms, layouts and a case lifecycle that fit a wide variety of request-style case management applications.

Smith then gave us a demonstration of People Center — similar to what we saw her do on the main stage on Tuesday — and discussed how they used the case management accelerator as a starting point for developing the People Center application. They used some parts of the template pretty much as is — such as the request creation form — but made it specific to HR management and extended the capabilities to suit, including a dashboard specific to each role. Checklists and options are specific to the HR application, but as discussed in previous posts, those will persist through an upgrade of the underlying People Center application.

She also walked us through the case management accelerator in the development environment, showing the fairly complete set of entities, forms, layouts, action bars, lists, relationships, rules, email templates, BPM processes, roles and other objects, as well as how easy it is to modify them for your own use. For any partners in the audience, or even customer developers, this will resonate as a method of quickly creating a fully-customized application based on the template that addresses a specific vertical functionality.

OpenText Process Suite becomes AppWorks Low Code

“What was formerly known as Process Suite is AppWorks Low Code, since it has always been an application development environment and we don’t want the focus to be on a single technology within it.”  – Dana Khoyi, architect of OpenText’s Process Suite

That pretty much sums up the biggest BPM positioning/branding announcement at OpenText Enterprise World 2017 this week. BPM is dead, long live low-code application development? Note that AppWorks is the umbrella name for all OpenText developer tools: it covers the technical developer APIs and access points as well as this low-code product, which is really a separate offering.

Khoyi and Kelli Smith (who did the main stage People Center demo on Tuesday) led a session on the last day of Enterprise World to show how Process Suite (now AppWorks Low Code) is used to create applications, starting with defining composite entities (business objects made up of multiple pieces of data), then UI constructs including forms, dashboards and lists. Because process and content are built into the environment, there are easy building blocks for content lifecycle, activity flow and history. Declarative rules are supported — triggered on conditions, events or user actions — with the option of dropping out to a full process model for more complex flows and events. They also have a development framework for building customizable applications that persists customizations separately from the application and merges them at runtime, allowing a new version of the core application to be installed without discarding the previous customizations, although obviously you’d want to test, and some minor retrofits might be required.

Application development starts by defining the core entity for the application (think process or case instance class), then adding properties (data fields) and building blocks: forms to edit and display those properties (as well as built-in properties such as state); lists that can be worklists or reporting artifacts; and layouts, which are essentially the application UI screens and can include the previously-created forms plus actions, breadcrumbs, and related content. Data/content security and access/update conflicts are handled automatically on the forms/layouts based on underlying security definitions. Apps that are created can be published immediately to run; these can be moved as packages between testing and production environments, although it’s not clear that there’s any versioning or automation around that, so likely some manual governance is required.
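The entity-first modeling style described above can be approximated with plain data structures. The sketch below uses Python dataclasses purely as an analogy — the class and field names are mine, not AppWorks constructs, and the real environment defines all of this declaratively through its UI:

```python
# Rough analogy for entity-first app modeling: a core entity with
# properties, plus forms, lists and layouts as attached building blocks.
# Names are illustrative, not the AppWorks API.
from dataclasses import dataclass, field

@dataclass
class Property:
    name: str
    type: str

@dataclass
class Entity:
    name: str
    properties: list = field(default_factory=list)
    forms: list = field(default_factory=list)    # edit/display screens
    lists: list = field(default_factory=list)    # worklists, reports
    layouts: list = field(default_factory=list)  # application UI screens

# Build up a hypothetical "Claim" application entity.
claim = Entity("Claim")
claim.properties += [Property("claimant", "string"),
                     Property("amount", "currency"),
                     Property("state", "enum")]  # built-in-style state field
claim.forms.append("ClaimIntakeForm")
claim.lists.append("OpenClaimsWorklist")
claim.layouts.append("ClaimDetailLayout")
```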

Other building blocks that can be added to an application include:

  • A history log that maintains a complete audit trail of everything done during the instance including field-level data changes
  • A discussion for collaborative chat/comments on an instance
  • Content, which can be files/folders that are attached to the case instance using a local document store or other content store via a connector or CMIS, or a business workspace within Content Server (using Extended ECM), which stores the content in CS and allows access from either environment while syncing properties between them.
  • Email templates that provide a form letter email capability for inbound/outbound email associated with the case
  • Three ways of managing work:
    • Lifecycle, which is a state machine-oriented view (i.e., milestones and the actions required to move between states) for a simple case workflow
    • BPM, for a full drop to the BPMN editor for complex process flows
    • Action flow, which is a simple sequence flow
  • Mobile app creation
  • Entity relationships
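The "Lifecycle" option in the list above is a state machine: milestones plus the actions allowed to move between them. A minimal sketch of that idea, with invented states and actions (AppWorks defines these declaratively, not in code):

```python
# Minimal state-machine sketch of lifecycle-style work management:
# (current state, action) -> next state. States/actions are illustrative.

TRANSITIONS = {
    ("new", "submit"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "closed",
    ("approved", "fulfill"): "closed",
}

def advance(state, action):
    """Apply an action to the current state, or fail if not allowed."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")
```

This is the simplest of the three work-management styles; when the flow logic outgrows a state diagram, that is where the full BPMN editor comes in.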

There’s a lot of stuff in here, and we didn’t see it all in this short session, but it looks like a pretty robust environment for low-code development. Khoyi stated explicitly that this is becoming the development platform for all OpenText products, replacing the workflow capabilities in Content Server and Documentum.

OpenText Process Suite Roadmap

Usually I live-blog sessions at conferences, publishing my notes at the end of each, but here at OpenText Enterprise World 2017, I realized that I haven’t taken a look at OpenText Process Suite (formerly Cordys, from the 2013 acquisition) for a while, and needed to chew over a couple of sessions to get the whole picture. There’s some significant repositioning happening with Process Suite becoming rebranded as part of their low-code development environment AppWorks, or possibly it’s better to say that Process Suite is becoming the AppWorks developer platform: something that follows naturally from the Cordys history.

Cordys has been on my radar since 2006 when I linked to a post that Bruce Silver wrote about them, then a couple of months later I had a chance to get a more in-depth briefing. At that time, I commented on how they had a pretty complete process application development environment for creating what we were still calling mashups; now this just falls under low-code app dev. By 2008, Forrester had them classified as integration-centric BPM, although many saw that their strong human-centric capabilities defied this categorization. I had another look at Cordys in 2010 at a conference in Oslo, at which time it was positioned as a SaaS-only offering that was tightly integrated with the Google Apps Marketplace; this was possibly due to the injection of the Process Factory DNA when Jon Pyke (formerly of Staffware) joined Cordys after his time starting up Process Factory. By 2013, when OpenText acquired Cordys, it was positioned as a cloud-based platform for creating process-centric applications, although at the time I raised the issue (as did others) of having multiple competing BPM platforms within OpenText. It appears that OpenText would really like their customers to move off the old Metastorm and Global 360 implementations and onto Process Suite, but like most large enterprise vendors with a broad portfolio, they are not sunsetting any of these other products, just not spending a lot of time enhancing them.

OpenText is now using the term Process Automation rather than BPM, with the message shifting to process innovation as part of digital transformation. Process automation needs to be easier (low/no-code, templates, reusability, pre-built apps), smarter (data-driven, IoT, social, sentiment, RPA, AI) and engaging (integrate as part of ecosystem, focus on customer experience). And, as I mentioned earlier, they are repositioning BPM as just part of the larger low-code application development platform, a move that we’ve been seeing from most other BPM vendors over the past few years. This is a move seen as essential for citizen developers and long-tail applications, but is often the bane of more technical developers who want to just write code that can call BPM functionality, not have a monolithic and opaque proprietary development environment. OpenText is providing both options, allowing citizen developers to use low-code methods, while technical developers can use more traditional coding techniques as required.

This week at Enterprise World, I had the chance to sit in on a few sessions, including the BPM roadmap, low-code application development, the process automation customer market landscape, and integrating analytics with process; I also had the chance to get a couple of excellent demos on People Center (an HR app built on Process Suite) as well as using Process Suite to create other case management applications.

The release later in 2017 will expand the use cases of AppWorks from simple, isolated applications to more comprehensive apps that take advantage of the newly-integrated case and process platform. More advanced analytics will be integrated, with better dashboards and reports, and the foundations laid for IoT and predictive analytics. By next year, there will be more pre-built applications (similar to People Center) and an applications marketplace, plus more intelligence such as sentiment analysis, cognitive input and RPA. They will also have rolled out support for the complete set of developer personas, from novice citizen developer to technical ninja. Content services are already integrated using Extended ECM for Process Server, the same type of connector that they used to add content to external applications such as SAP.

They’re working on a development style that allows for pre-built applications to be configured and extended by customers while maintaining upgradability; this is pretty critical for applications such as People Center, where you want the customers to create their own checklists and integrate to their own HRIS, but still be able to install the latest version of People Center without breaking that; it requires that guardrails be established and followed. They are also creating templates such as the current dynamic case management, which is really just sample/starter code that can be used by a partner or developer as a starting point but is not intended to be maintained by OpenText.

I ended Wednesday at a session on connecting analytics to process, which rounds out the capabilities. The OpenText Analytics Suite (from the Actuate acquisition, and including the new Magellan offering/branding) is separate from the Process Suite, but there are obvious connections between analytics and process in general: extracting insights from process data, and automating processes based on the results of analytics. The Analytics Suite includes business intelligence services (iHub) that provide enterprise-level analytics and visualization, with the optional addition of data discovery (Big Data Analytics) and text analysis (InfoFusion). The robust API capabilities in iHub allow analytics to be tied in directly to Process Suite or any other application; like process, analytics are ubiquitous and need to be easily integrated across other products and applications. Magellan, as we heard, is a pre-wired set of capabilities from the Analytics Suite that is focused around machine learning, combining data discovery, reporting and dashboarding, text analysis, large-scale data processing, and predictive analytics. This is built on Apache Spark, leveraging Hadoop, and is packaged and supported by OpenText.
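The second direction of that connection — automating processes based on analytics results — typically boils down to a threshold check that kicks off a process instance over an API. The sketch below illustrates the pattern only; the endpoint, payload shape and process type are invented for illustration, not the actual iHub or Process Suite APIs:

```python
# Illustrative analytics-to-process trigger: when a metric breaches its
# threshold, build a request to start a process instance. The URL and
# payload are hypothetical; consult the product API docs for the real thing.
import json

def build_process_trigger(metric, value, threshold):
    """Return the request to send if the metric breaches its threshold,
    or None if no action is needed."""
    if value <= threshold:
        return None
    return {
        "url": "https://example.com/process/api/v1/instances",  # hypothetical
        "body": json.dumps({
            "processType": "investigate_anomaly",
            "payload": {"metric": metric, "value": value},
        }),
    }
```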

We saw a demo of the general capabilities of the Analytics Suite — looked pretty nice, although I’m not the analytics geek in the family — then saw some examples of how process can be integrated with analytics. In one scenario, a low rating provided by a customer on a taxi app causes a customer service process to be kicked off to follow up on the problem with the customer, which is amazingly similar (although much more automated) to a Zipcar experience that I still talk about as an example of customer experience.

I’ll be back at Enterprise World for the last day tomorrow to see a few of the remaining BPM sessions. There’s still a lot that isn’t completely clear about high-level strategy for the complete portfolio of OpenText process-related products — such as how the document-centric workflows in Documentum are going to fit into the mix — but at least I feel like I’m starting to scratch the surface.

OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from Majzoub in the morning keynote. He started with a review of the past year of product development — specific product enhancements and releases, more applications, and Magellan analytics suite — then moved on to the ongoing vision and strategy.

Dean Haacker, CTO of The PrivateBank, joined Majzoub to talk about their use of OpenText content products. They moved from an old version of Content Server to the current CS16, adding WEM integrated with CS for their intranet, Teleform for scanning, and ShinyDrive (OpenText’s partner of the year) for easy access to the content repository. The improved performance, capabilities and user experience are driving adoption within the bank; more of their employees are using the content capabilities for their everyday document needs, and as one measure of the success, their paper consumption has reduced by 20%.

Majzoub continued with a discussion of their recent enhancements in their content products, and demoed their life sciences application built on Documentum D2. There’s a new UI for D2 and a D2 mobile app, plus Brava! widgets for building apps. They can deploy their content products (OTMM, Content Suite, D2 and eDocs) across a variety of OpenText Cloud configurations, from on-premise to hybrid to public cloud. Content in the cloud allows for external sharing and collaboration, and we saw a demo of this capability using OpenText Core, which is their personal/team cloud product. Edits to an Office365 document by an external collaborator (or, presumably, edited using a desktop app and saved back to Core) can be synchronized back into Content Suite.

Other products and demos that he covered:

  • A demo of Exstream for updating and publishing a customer communication asset, which can automatically push the communication to specific customers and platforms via email, document notifications in Core, or mobile notifications. It actually popped up in the notifications section of the Enterprise World app on my phone, so it worked as expected.
  • Their People Center HR app, which we saw demonstrated yesterday, built on AppWorks and Process Suite.
  • A demo of Extended ECM, which integrates content capabilities directly into other applications such as SAP, supporting both private and shared public cloud platforms for both internal and external participants.
  • Enhancements coming to Business Network, which is their collection of supply chain technologies, including B2B integration, fax, secure messaging and more; most interesting is the upcoming integration with Process Suite to merge internal and external processes.
  • A bit about the Covisint acquisition — not yet closed, so not too many details — for IoT and device messaging.
  • AppWorks is their low-code development environment that enables both desktop and mobile apps to be created quickly, while still supporting more advanced developers.
  • Applying machine-assisted discovery to information lakes formed by a variety of heterogeneous content sources for predictions and insights.
  • eDOCS InfoCenter for an improved portal-style UI (in case you haven’t been paying attention for the past few years, eDOCS is focused purely on legal applications, although has functionality that overlaps with Content Suite and Documentum).

Majzoub finished with commitments for their next version — EP3 coming in October 2017 — covering enhancements across the full range of products, and the longer-term view of their roadmap of continuous innovation including their new hybrid platform, Project Banff. This new modern architecture will include a common RESTful services layer and an underlying integrated data model, and is already manifesting in AppWorks, People Center, Core, LEAP and Magellan. I’m assuming that some of their legacy products are not going to be migrated onto this new architecture.

 

I also attended the Process Suite product roadmap session yesterday as well as a number of demos at the expo, but decided to wait until later today when I’ve seen some of the other BPM-related sessions to write something up. There are some interesting changes coming — such as Process Suite becoming part of the AppWorks low-code application development environment — and I’m still getting a handle on how the underlying Cordys DNA of the product is being assimilated.

The last part of the keynote was a session on business creativity by Fredrik Härén — interesting!

OpenText Enterprise World keynote with @markbarrenechea

I’m at OpenText Enterprise World 2017 in Toronto; there is very little motivating me to attend the endless stream of conferences in Vegas, but this one is in my backyard. There have been a couple of days of partner summit and customer training, but this is the first day of the general conference.

We kicked off with a keynote hosted by OpenText CEO/CTO Mark Barrenechea, who presented some ideas on his own and invited others to the stage to chat or present as well. He talked about world-changing concepts that we may see start to have a significant impact over the next decade:

  • Robotics
  • Internet of things
  • Internet of money (virtual and alternative currencies)
  • Artificial intelligence
  • Mobile eating the world
  • New business models
  • Living to 150
  • IQ of 1000, where human intelligence and capabilities will be augmented by machines

He positions OpenText as a leading provider of enterprise information management technologies for digital transformation, leveraging the rapid convergence of connectivity, automation and computing power. My issue with OpenText is that they have grown primarily through acquisitions – a LOT of acquisitions – and the product portfolio is vast and overlapping. OpenText Cloud is a fundamental platform, which makes a lot of sense for them with the amount of B2B integration that their tools support, as well as the push to private, hybrid and public cloud by many organizations. They see documents (whether human or machine created) as fundamental business artifacts and therefore content management is a primary capability, but there are a few different products that fall into their ECM category and I’m not sure of the degree of overlap, for example, with the recently-acquired Documentum and some of their other ECM assets. Application development is also a key category for them, with a few different products including their AppWorks low-code environment. The story gets a bit simpler with their information network for inter-enterprise connectivity, new acquisition Covisint for managing IoT messages and actions, and newly-released Magellan for analytics.

He interviewed two customers on their business and use of OpenText products:

  • Kristijan Jarc, VP of Digital at KUKA, a robotics company serving a variety of vertical industries, from welding in automotive manufacturing to medical applications. Jarc’s team develops digital strategies and solutions that help their internal teams build better products, often related to how data collected from the robots is used for analytics and preventative maintenance, and they’re using OpenText technology to capture and store that data.
  • Sarah Shortreed, CIO of Bruce Power, which runs a farm of 8 CANDU reactors that generate 30% of Ontario’s electrical power. They’re in the process of refurbishing the plant, some parts of which are 40 years old, which is allowing more data to be collected from more of their assets in realtime. They have much tighter security restrictions than most organizations, and much longer planning cycles, making enterprise information management a critical capability.

Barrenechea hosted three other OpenText people to give demos (I forgot to note the names, but if anyone can add them in a comment, I’ll update this post); I’ve just put in a couple of notes for each trying to capture the essence of the demo and the technologies that they were showcasing:

  • Magellan analytics for a car-share company: targeted marketing, demand and utilization, and proactive maintenance via different algorithms. Automated clustering, trend derivation within a selected dataset to determine the target market for a campaign. Allow data scientists to open notebooks and directly program in Python, R, Scala to create their own algorithms by calling Magellan APIs. Use linear regression on historical usage data and weather forecasts to forecast demand. IoT streaming diagnostics from vehicles to predict likelihood of breakdown and take appropriate automated actions to remove cars from service and schedule maintenance.
  • People Center app built on AppWorks. Integrated with HRIS platforms including SAP, SuccessFactors, Workday and Oracle for core HR transactions; People Center adds the unstructured data including documents to create the entire employee file. Manage recruitment and onboarding processes. Magellan analytics to match resumes to open positions using proximity-based matching. Identify employees at risk of leaving using logistic regression.
  • KUKA iiwa robot sending IoT data to cloud for viewing through dashboard, analytics to identify possible problems. Field service tech accesses manuals and reference materials via content platform. Case management foldering to collect and view documents related to a maintenance incident. Collaborative chat within maintenance case to allow product specialist to assist field tech. Future AI application: automatically find and rank related cases and highlight relevant information.
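The demand-forecast idea from the car-share demo — linear regression over historical usage and weather — can be illustrated with ordinary least squares. The data and features below are invented toy values, and the demo used Magellan's built-in algorithms rather than hand-rolled numpy:

```python
# Toy linear-regression demand forecast: predict today's rentals from
# yesterday's rentals and forecast temperature. All data is invented.
import numpy as np

# columns: [1 (bias), yesterday's rentals, forecast temperature in C]
X = np.array([[1, 80, 21], [1, 95, 25], [1, 60, 12],
              [1, 70, 15], [1, 90, 24]], dtype=float)
y = np.array([92, 101, 55, 68, 98], dtype=float)  # today's rentals

# Ordinary least squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def forecast(yesterday, temp_c):
    """Predicted rentals for today given yesterday's usage and weather."""
    return float(coef @ np.array([1.0, yesterday, temp_c]))
```

The logistic-regression use cases mentioned (breakdown likelihood, employee attrition) follow the same shape, with a sigmoid applied to the linear combination to produce a probability.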

The keynote ended with Barrenechea interviewing Wayne Gretzky, which was a delightful conversation although unrelated to any of the technology topics. However, Gretzky did talk about the importance of teamwork, and how working with people who are better than you at something makes you better at what you do. You could also see analogies in business when he talked about changes in the sport of hockey: globalization, expanding markets, and competition is getting bigger and meaner. As a guy who spent a lot of the early part of his hockey career as the smallest guy on the ice, he learned how to hone his intelligence about the game to be a winner in spite of the more traditional strengths of his competitors: a good lesson for all of us.

Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City initiative. This initiative is in part about using “smart” technology – by which he appears to mean well-designed, consumer-facing applications – as well as good mobile infrastructure to support an ecosystem of startup and other small businesses for creating new technology solutions. He gave an example from the city’s transportation department, where historical data is used to analyze traffic patterns, allowing for optimization of traffic flow and predictive modeling for future traffic needs due to new development. This includes input into projects such as the King Street Pilot Study that is going into effect later this year, which will restrict private vehicle traffic on a stretch of King in order to optimize streetcar and pedestrian flows. In general, the city has no plan to monetize data, but prefers to use city-owned data (which is, of course, owned by the public) to foster growth through Open Data initiatives.

There were some questions about how the city will deal with autonomous vehicles, short-term (e.g., AirBnB) rentals and other areas where advancing technology is colliding with public policies. Kolm also spoke about how the city needs to work with the startup/small business community for bringing innovation into municipal government services, and also how our extensive network of public libraries is an untapped potential channel for civic engagement. For more on digital transformation in the city of Toronto, check out my posts on the TechnicityTO conference from a few months back.

I was going to follow this session with the one on intelligent buildings and connected communities by someone from Tridel, which likely would have made an interesting complement to this presentation, but unfortunately the speaker had to cancel at the last minute. That gives me a free hour to crouch in a hallway by an electrical outlet to charge my phone.

Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions at the second day at Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging markets for consumer IoT devices. His premise is that creating success with IoT devices is based on saving/creating time, strengthening connections, and having fun.

It also helps to be approaching an underserved market, and if you believe his somewhat horrifying stat that 70% of pet owners consider themselves to be “pet parents”, there’s a market of people who want to interact with and entertain their pets with technology while they are gone during working hours. PetBot’s device gives you a live video feed of your pet remotely, but can also play sounds, drop treats (cue Pavlov) and record pet selfies using facial recognition to send to you while you’re out. This might seem a bit frivolous, but there are real lessons here: use devices to “create” time (allowing for interaction during a time when you would not normally be available), make your own type of interactions (e.g., create a training regimen using voice commands), and have fun to promote usage retention (who doesn’t like cute pet selfies?).

I asked about integrating with pet activity trackers and he declined to comment, so we might see something from them on this front; other audience questions covered the potential for learning and recognition algorithms that could automatically reward specific behaviours. I’m probably not going to run out and get a PetBot – it seems much more suited to dogs than cats – but his insights into consumer IoT devices are valid across a broader range of applications.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with established companies such as Equifax and CIBC to provide the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: it draws on a wide range of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, the risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.
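To make the adjudication idea concrete, here’s a toy sketch of approving or declining against a learned default-risk score. The features, weights and cutoff are all invented for illustration – this is not Borrowell’s actual model, which would learn its weights from repayment history and use far richer data:

```python
# Toy sketch of score-based loan adjudication; NOT Borrowell's actual model.
# The weights below are invented; a real system would fit them to repayment
# outcomes (e.g., via logistic regression on historical applications).
import math

def default_probability(credit_score, debt_to_income, years_history):
    # Hypothetical learned weights: a higher credit score and longer history
    # lower the risk; a higher debt-to-income ratio raises it.
    z = 4.0 - 0.01 * credit_score + 3.0 * debt_to_income - 0.1 * years_history
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability in (0, 1)

def adjudicate(applicant, max_default_prob=0.2):
    """Approve automatically when predicted default risk is under the cutoff;
    moving the cutoff trades approval volume against expected default rate --
    the risk/reward curve mentioned above."""
    p = default_probability(**applicant)
    return ("approve" if p < max_default_prob else "decline"), p

decision, p = adjudicate(
    {"credit_score": 720, "debt_to_income": 0.25, "years_history": 10}
)
```

As the model sees more outcomes, re-fitting the weights is what makes the platform “more accurate over time” – the adjudication logic itself doesn’t change, only the score it consults.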

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.

IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live. This week, it’s Big Data Toronto, held in conjunction with Connected Plus and AI Toronto.

Paul Zikopoulos, VP of big data systems at IBM, gave a keynote on what cognitive, AI and machine learning mean to big data. He pointed out that no one has a problem collecting data – all companies are pros at that – but the problem is knowing what to do with it in order to determine and act on competitive advantage, and how to value it. He talked about some of IBM’s offerings in this area, and discussed a number of fascinating uses of AI and natural language that are happening in business today: trendy chatbot applications, such as Sephora’s lipstick selection bot (upload your selfie and a picture of your outfit to get matching recommendations and purchase directly), and more mundane but useful cases, such as your insurance company recommending that you move your car into the garage because a hailstorm is headed your way. He gave us a quick lesson on supervised and unsupervised learning, and how pattern detection is a fundamental capability of machine learning. Cognitive visual inspection – the descendant of the image pattern analysis algorithms that I wrote in FORTRAN about a hundred years ago – now happens by training an algorithm with examples rather than writing code. Deep learning can be used to classify pictures of skin tumors, or learn to write like Ernest Hemingway, or auto-translate a sporting event. He finished with a live demo combining open source tools such as sentiment analysis, Watson for image classification, and a Twitter stream into a Bluemix application that classified pictures of cakes at Starbucks – maybe not much of a practical application, but you can imagine the insights that could be extracted and analyzed in the same fashion. All of this computation doesn’t come cheap, however, and IBM would love to sell you a few (thousand) servers or some cloud infrastructure to make it happen.
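The “training with examples rather than writing code” point is worth a tiny illustration. Here’s about the simplest possible supervised classifier – nearest centroid over hand-made feature vectors – standing in for what Watson or a deep network does at vastly larger scale over raw pixels. All the data here is invented:

```python
# Minimal illustration of supervised "train by example": a nearest-centroid
# classifier over toy feature vectors. Real cognitive visual inspection uses
# deep networks over pixel data; these 3-number "images" are invented.
from statistics import mean

def train(examples):
    """examples: {label: [feature_vector, ...]} -> one centroid per label."""
    return {
        label: [mean(col) for col in zip(*vectors)]
        for label, vectors in examples.items()
    }

def classify(centroids, vector):
    # Assign the label whose centroid is closest (squared Euclidean distance).
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(vector, centroids[label]))
    return min(centroids, key=dist)

# Hypothetical "pass"/"fail" inspection samples as 3-feature summaries.
model = train({
    "pass": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "fail": [[0.2, 0.8, 0.9], [0.3, 0.7, 0.8]],
})
label = classify(model, [0.85, 0.15, 0.2])  # lands near the "pass" examples
```

The contrast with the old FORTRAN approach is the point: nobody wrote an inspection rule here – the decision boundary falls out of the labelled examples, and improving the system means adding examples, not editing code.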

After being unable to get into three breakout sessions in a row – see my more detailed comments on conference logistics below – I decided to head back to my office for a couple of hours. With luck, I’ll be able to get into a couple of other interesting sessions later today or tomorrow.

A huge thumbs down to the conference organizers (Corp Agency), by the way. The process to pick up badges for pre-registered attendees was a complete goat rodeo, and it took me 20+ minutes to simply pick up a pre-printed badge from a kiosk; the person staffing the “I-L” line started at the beginning of the Ks and flipped his way through the entire stack of badges to find mine, so it was taking about 2 minutes per person in our line while the other lines were empty. The first keynote of the day, which was only 30 minutes long, ran 15 minutes late. The two main breakout rooms were woefully undersized, meaning that it was literally standing room only in many of the sessions – which I declined to attend because I can’t type while standing – although there was a VIP section with open seats for those who bought the $300 VIP pass instead of getting the free general admission ticket. There was no conference wifi or charging stations for attendees. There was no free water/coffee service (and the paid food items didn’t look very appetizing); this is a mostly free conference, but with sponsors such as IBM, Deloitte, Cloudera and SAS, it seems like they could have had a couple of coffee urns set up for free under a sponsor’s name. The website started giving me an error message about out-of-date content every time I viewed it on my phone; at least I think it was about out-of-date content, since it was inexplicably only in French. The EventMobi conference app was very laggy, and was missing huge swaths of functionality if you didn’t have a data connection (see above comments about no wifi or charging stations). I’ve been to a lot of conferences, and the logistics can really make a big difference for the attendees and sponsors. In cases like this, where crappy logistics actually prevent attendees from going to sessions that feature vendor sponsor speakers (IBM, are you listening?), it’s inexcusable. Better to charge a small fee for everyone and actually have a workable conference.