All posts by sandy

Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City initiative. This initiative is in part about using “smart” technology – by which he appears to mean well-designed, consumer-facing applications – as well as good mobile infrastructure to support an ecosystem of startups and other small businesses creating new technology solutions. He gave an example from the city’s transportation department, where historical data is used to analyze traffic patterns, allowing for optimization of traffic flow and predictive modeling of future traffic needs due to new development. This includes input into projects such as the King Street Pilot Study, going into effect later this year, which will restrict private vehicle traffic on a stretch of King in order to optimize streetcar and pedestrian flows. In general, the city has no plan to monetize data, but prefers to use city-owned data (which is, of course, owned by the public) to foster growth through Open Data initiatives.
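
He didn’t go into the mechanics, but the kind of historical traffic analysis he described might look something like this sketch, assuming a hypothetical CSV export of hourly vehicle counts (the file and column names are illustrative, not the city’s actual schema):

```python
import pandas as pd

# Hypothetical export of hourly vehicle counts by intersection;
# column names are illustrative, not the city's actual schema.
df = pd.read_csv("king_street_counts.csv", parse_dates=["timestamp"])

# Aggregate historical counts into an hour-of-week profile per intersection,
# the kind of baseline used to spot recurring congestion.
df["hour_of_week"] = df["timestamp"].dt.dayofweek * 24 + df["timestamp"].dt.hour
profile = (df.groupby(["intersection", "hour_of_week"])["vehicle_count"]
             .mean()
             .unstack(level="hour_of_week"))

# Flag the peak hour at each intersection as a candidate for signal retiming.
peaks = profile.idxmax(axis=1)
print(peaks.head())
```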

There were some questions about how the city will deal with autonomous vehicles, short-term rentals (e.g., Airbnb) and other areas where advancing technology is colliding with public policy. Kolm also spoke about how the city needs to work with the startup/small business community to bring innovation into municipal government services, and how our extensive network of public libraries is an untapped channel for civic engagement. For more on digital transformation in the city of Toronto, check out my posts on the TechnicityTO conference from a few months back.

I was going to follow this session with one on intelligent buildings and connected communities by a speaker from Tridel, which likely would have made an interesting complement to this presentation, but unfortunately the speaker had to cancel at the last minute. That gives me a free hour to crouch in a hallway by an electrical outlet and charge my phone. 😉

Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions on the second day of Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging market for consumer IoT devices. His premise is that success with IoT devices is based on saving/creating time, strengthening connections, and having fun.

It also helps to be approaching an underserved market: if you believe his somewhat horrifying stat that 70% of pet owners consider themselves to be “pet parents”, there’s a market of people who want to interact with and entertain their pets with technology while they are gone during working hours. PetBot’s device gives you a live video feed of your pet remotely, but can also play sounds, drop treats (cue Pavlov) and record pet selfies using facial recognition to send to you while you’re out. This might seem a bit frivolous, but his lessons are broadly applicable: use devices to “create” time (allowing for interaction during a time when you would not normally be available), make your own types of interactions (e.g., create a training regimen using voice commands), and have fun to promote usage retention (who doesn’t like cute pet selfies?).
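
PetBot hasn’t published their recognition pipeline, but the pet-selfie idea can be sketched with OpenCV’s stock cat-face Haar cascade; everything here (camera index, detection thresholds, file name) is my own assumption, not their implementation:

```python
import cv2

# OpenCV ships a cat-face Haar cascade; PetBot's actual recognition
# pipeline is not public, so this only illustrates the general idea.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalcatface.xml")

cam = cv2.VideoCapture(0)  # the device's camera (index 0 assumed)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # A face was detected: save the "pet selfie" and stop.
        cv2.imwrite("pet_selfie.jpg", frame)
        break
cam.release()
```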

I asked about integrating with pet activity trackers and he declined to comment, so we might see something from them on this front; other audience members asked about the potential for learning and recognition algorithms that could automatically reward specific behaviours. I’m probably not going to run out and get a PetBot – it seems much more suited to dogs than cats – but his insights into consumer IoT devices are valid across a broader range of applications.

Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make changes while maintaining the integrity of legacy enterprise processes. Borrowell is a fintech company focused on lending applications: free credit score monitoring, and low-interest personal loans for debt consolidation or reducing credit card debt. They partner with established players such as Equifax and CIBC to provide the underlying credit monitoring and lending capabilities, with Borrowell providing a technology layer that’s more than just a pretty face: they use a lot of information sources to create very accurate risk models for automated loan adjudication. As Borrowell’s deep learning platforms learn more about individual and aggregate customer behaviour, their risk models and adjudication platform become more accurate, reducing the risk of loan defaults while fine-tuning loan rates to optimize the risk/reward curve.
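
As an illustration of the general idea (and emphatically not Borrowell’s actual model), here’s a toy sketch of risk-based loan pricing, where a classifier’s predicted default probability feeds the quoted rate; all data, features and coefficients are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of risk-based pricing, not Borrowell's model:
# imagine features like credit score, utilization, income (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([-1.5, 0.8, -0.4]) + rng.normal(size=1000)) > 0.5  # default flag

model = LogisticRegression().fit(X, y)

# Predicted probability of default drives the quoted rate:
# a base rate plus a risk premium (illustrative curve, not a real one).
p_default = model.predict_proba(X[:5])[:, 1]
rate = 0.05 + 0.20 * p_default
print(np.round(rate, 3))
```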

Great application of AI/ML technology to financial services, which sorely need some automated intelligence applied to many of their legacy processes.

IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live. This week, it’s Big Data Toronto, held in conjunction with Connected Plus and AI Toronto.

Paul Zikopoulos, VP of big data systems at IBM, gave a keynote on what cognitive, AI and machine learning mean to big data. He pointed out that no one has a problem collecting data – all companies are pros at that – but the problem is knowing what to do with it in order to determine and act on competitive advantage, and how to value it. He talked about some of IBM’s offerings in this area, and discussed a number of fascinating uses of AI and natural language that are happening in business today. There are trendy chatbot applications, such as Sephora’s lipstick selection bot (upload your selfie and a picture of your outfit, and it matches them to make recommendations that you can purchase directly); and more mundane but useful cases, such as your insurance company recommending that you move your car into the garage because a hailstorm is headed for your area.

He gave us a quick lesson on supervised and unsupervised learning, and how pattern detection is a fundamental capability of machine learning. Cognitive visual inspection – the descendant of the image pattern analysis algorithms that I wrote in FORTRAN about a hundred years ago – now happens by training an algorithm with examples rather than writing code. Deep learning can be used to classify pictures of skin tumors, learn to write like Ernest Hemingway, or auto-translate a sporting event.

He finished with a live demo combining open source tools such as sentiment analysis, Watson for image classification, and a Twitter stream into a Bluemix application that classified pictures of cakes at Starbucks – maybe not much of a practical application, but you can imagine the insights that could be extracted and analyzed in the same fashion. All of this computation doesn’t come cheap, however, and IBM would love to sell you a few (thousand) servers or cloud infrastructure to make it happen.
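
For those who missed the quick lesson, here’s a minimal sketch of the supervised/unsupervised distinction using scikit-learn’s bundled digits dataset; this is my own example, not IBM’s code:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: learn from labelled examples, then predict labels on new data.
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels at all, just find structure (here, ten clusters).
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(10)])
```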

After being unable to get into three breakout sessions in a row – see my more detailed comments on conference logistics below – I decided to head back to my office for a couple of hours. With luck, I’ll be able to get into a couple of other interesting sessions later today or tomorrow.

A huge thumbs down to the conference organizers (Corp Agency), by the way. The process to pick up badges for pre-registered attendees was a complete goat rodeo: it took me 20+ minutes to simply pick up a pre-printed badge from a kiosk, since the person staffing the “I-L” line started at the beginning of the Ks and flipped his way through the entire stack of badges to find mine, taking about 2 minutes per person in our line while the other lines were empty. The first keynote of the day, which was only 30 minutes long, ran 15 minutes late. The two main breakout rooms were woefully undersized, meaning that it was literally standing room only in many of the sessions – which I declined to attend because I can’t type while standing – although there was a VIP section with open seats for those who bought the $300 VIP pass instead of getting the free general admission ticket. There was no conference wifi or charging stations for attendees. There was no free water/coffee service (and the paid food items didn’t look very appetizing); this is a mostly free conference, but with sponsors such as IBM, Deloitte, Cloudera and SAS, it seems like they could have had a couple of coffee urns set up under a sponsor’s name. The website started giving me an error message every time I viewed it on my phone – something about out-of-date content, I think, since it was inexplicably only in French. The EventMobi conference app was very laggy, and was missing huge swaths of functionality if you didn’t have a data connection (see above comments about no wifi or charging stations).

I’ve been to a lot of conferences, and the logistics can make a big difference for the attendees and sponsors. In cases like this, where crappy logistics actually prevent attendees from going to sessions that feature vendor sponsor speakers (IBM, are you listening?), it’s inexcusable. Better to charge a small fee for everyone and actually have a workable conference.

Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest to Canadians, since most of the cloud service offerings that we see are in the US, and many companies are not comfortable with keeping their private data in a jurisdiction where it can be somewhat easily exposed to foreign government and intelligence agencies.

She used a building analogy to talk about cloud services:

  • Infrastructure as a service (IaaS) is like a piece of serviced land on which you need to build your own building and worry about your connections to services. If your water or electricity is off, you likely need to debug the problem yourself although if you find that the problem is with the underlying services, you can go back to the service provider.
  • Platform as a service (PaaS) is like a mobile home park, where you are responsible for your own dwelling but not for the services, and there are shared services used by all residents.
  • Software as a service (SaaS) is like a condo building, where you own your own part of it, but it’s within a shared environment. SaaS by Gartner’s definition is multi-tenant, and that’s the analogy: you are at the whim, to a certain extent, of the building management in terms of service availability, but at a greatly reduced cost.
  • Dedicated, hosted or managed is like a private house on serviced land, where everything in the house is up to you to maintain. In this set of analogies, I’m not sure that there’s much distinction between this and IaaS.
  • On-premises is like a cottage, where you probably need to deal with a lot of the services yourself, such as water and septic systems. You can bring in someone to help, but it’s ultimately all your responsibility.
  • Hybrid is a combo of things — cloud to cloud, cloud to on-premise — such as owning a condo and driving to a cottage, where you have different levels of service at each location but they share information.
  • Managed services is like having a property manager, although it can be cloud or on-premise, to augment your own efforts (or that of your staff).

Regardless of the platform, anything that touches the cloud is going to have security considerations, as well as performance/up-time SLAs if you want to consider it part of your core business. From my experience, on-premise solutions can be just as insecure and unstable as any cloud offering, so it’s good to know what you’re comparing against when weighing cloud versus on-premise.
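
As a rough illustration of what those up-time SLAs mean in practice, here’s a quick back-of-the-envelope calculation (my own arithmetic, not from the presentation):

```python
# Rough downtime budgets implied by common availability SLAs;
# 365.25 days/year is an approximation.
HOURS_PER_YEAR = 24 * 365.25

for sla in (0.99, 0.999, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - sla)
    print(f"{sla:.2%} uptime -> {downtime_h:.1f} hours/year of allowed downtime")
```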

Most organizations require that their cloud provider have some level of certification of the facility (data centre), platform (infrastructure) and service (application). Elwood talked about the cloud standards that impact these, including ISO 27001, and SOC 1, 2 and 3.

A big concern is around applications in the cloud, namely SaaS such as Box or Salesforce. Although IT will be focused on whether the security of that application can be breached, business and information managers need to be concerned about what type of data is being stored in those applications and whether it potentially violates any privacy regulations. Take a good look at those SaaS EULAs — Elwood took us through some Apple and Google examples — and have your lawyers look at them as well if you’re deploying these solutions within the enterprise. You also need to look at data residency requirements (as I mentioned at the start): where the data resides, the sovereignty of the hosting company, the routing between you and the data even if the data resides in your own country, and the backup policies of the hosting company. The US Patriot Act allows the US government to access any data that passes through or is stored in the US, or is hosted by a company that is domiciled in the US; other countries are adding similar laws. Although a company may have a data centre in your country, if they’re a US company, they probably default to storing/processing/backing up in the US: check out the Microsoft hosting and data processing agreement, for example, which specifies that your data will be hosted and/or processed in the US unless you explicitly request otherwise.

There’s an additional issue: even if your data has the appropriate residency, if an employee travels to a restricted country and accesses the data remotely, you may be violating privacy regulations; not all applications have the ability to filter otherwise-authenticated access based on IP address. Add to this the ability of foreign governments to demand device passwords in order to enter a country, and the information accessible via an employee’s computer — not just the information stored on it — is at risk of exposure.
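
To make the IP-filtering point concrete, here’s a minimal sketch of the kind of geography-based access check that many applications lack; it’s my own illustration (using Flask, with a stubbed-out geo-IP lookup and placeholder country codes), not anything Elwood or OpenText presented:

```python
from flask import Flask, abort, request

app = Flask(__name__)

RESTRICTED = {"XX", "YY"}  # placeholder country codes; policy-specific

def country_for_ip(ip: str) -> str:
    # Stand-in for a real geo-IP lookup (e.g., against a MaxMind database);
    # hardcoded here to keep the sketch self-contained.
    return "CA"

@app.before_request
def enforce_residency():
    # Block otherwise-authenticated users connecting from restricted
    # jurisdictions -- the filtering capability noted above as often missing.
    if country_for_ip(request.remote_addr) in RESTRICTED:
        abort(403)

@app.route("/records")
def records():
    return "sensitive content"
```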

Elwood showed a map of the information governance laws and regulations around the world, and it’s a horrifying mix of acronyms for data protection and privacy rules, regulated records retention, eDiscovery requirements, information integrity and authenticity, and reporting obligations. There’s a new EU regulation — the General Data Protection Regulation (GDPR) — that is going to be a game-changer, harmonizing laws across all 28 member nations and applying to any data collected about an EU citizen, even outside the EU. The GDPR includes increased consent standards, stronger individual data rights, stronger breach notification, increased governance obligations, stronger recordkeeping requirements, and data transfer constraints. Interestingly, Canada is one of the countries deemed to have “adequate protection” for data transfer, along with Andorra, Argentina, the Faroe Islands, the Channel Islands (Guernsey and Jersey), the Isle of Man, Israel, New Zealand, Switzerland and Uruguay. In my opinion, many companies aren’t even aware of the GDPR, much less complying with it, and this is going to be a big wake-up call. Your compliance teams need to be aware of the global landscape as it impacts your data usage and applications, whether in the cloud or on premise; companies can receive huge fines (up to 4% of annual revenue) for violating GDPR, whether they are the “owner” of the data or just a data processor/host.

If you want to read more, OpenText has a lot of GDPR information on their website that is not specific to their products.

There are a lot of benefits to cloud when it comes to information management, and a lot of things to consider: the agility to grow and change quickly; a services approach that requires partnering with the service provider; mobility capabilities offered by cloud platforms that may not be available on premise; and analytics offered by cloud vendors within and across applications.

She finished up with a discussion of the top areas of concern for the attendees: security, regulations, GDPR, data sovereignty, consumer applications, and others. There was great discussion amongst the attendees, many of whom work in the Canadian financial services industry: as expected, the biggest concerns are about data residency and sovereignty. GDPR is seen as having the potential to level the regulatory playing field by making everyone comply; once the data centres and service providers start to comply, it will be much easier for most organizations to outsource that piece of their compliance by moving to cloud services. I think that cloud service providers are already doing a better job at security and availability than most on-premise systems, so once they crack the data residency and sovereignty problem, there is little reason to have a private data centre. IT’s concern has mostly been around security and availability, but now is the time for information and compliance managers to get involved to ensure that privacy regulations are supported by these platforms.

There are Canadian companies using cloud services, even the big banks and government, although I am guessing that it’s for peripheral rather than core services. Although some are doing this “accidentally” as the only way to share information with external participants, it’s likely time for many companies to revisit their information management strategies to see if they can be more inclusive of properly vetted cloud solutions.

We did get a very brief review of OpenText and their offerings at the end, including their software solutions and their EIM cloud offerings under the OpenText Cloud banner. They are holding their Enterprise World user conference in Toronto this July, making them the first (but likely not the last) big software company to see the benefits of a non-US North American conference location.

Twelve years – and a million words – of Column 2

In January, I read Paul Harmon’s post at BPTrends on predictions for 2017, and he mentioned that it was the 15th anniversary of BPTrends. This site hasn’t been around quite that long, but today marks 12 years of blogging here on Column 2. Coincidentally, my first post was on the BPTrends 2005 report on BPM suites!

In that time, I’ve written more than a million words in about 2,600 posts – haven’t quite got around to writing that book yet – documenting many conferences and products, as well as emerging trends and standards in BPM. I’ve collected over 3,000 comments from many of you, which I consider a measure of success: I write here to engage people and discuss ideas. Many of you have become clients, colleagues and friends over the years, and it’s always a thrill to meet someone for the first time and hear them say “I read your blog”. I know that I’ve inspired others to pick up that keyboard and start blogging, and my RSS reader is still the first place that I go for news about the industry (hint: I’m more likely to read your site if you publish a full RSS feed; I only get to the partial ones every week or so).

In the early days, I blogged more frequently, every couple of days; now I seem to be caught up in projects that consume a lot of my time, leaving fewer hours to spend focused on writing. Also, I’ve cut back on my business conference travel in the past year or so, attending only the ones where I’m presenting or where I feel that there is value for me, which gives me far fewer opportunities to blog about conference sessions. I’m not going to make any predictions about whether I’ll blog more or less in the next 12 years; I’m just happy to have a soapbox to stand on.

AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and intelligent capture. ABBYY is generously sponsoring this meeting, and will give a brief presentation/demo of their intelligent capture and text analytics products after my presentation.

Here’s the description of my talk:

This presentation will look at how digital transformation is increasing the value of capture and text analytics, recognizing that these technologies provide an “on ramp” to the intelligent, automated processes that underlie digital transformation. Using examples from financial services and retail companies, we will examine the key attributes of this digital transformation. We will review, step by step, the role of intelligent capture in digital transformation, showing how a customer-facing financial services process is changed by intelligent capture technologies. We will finish with a discussion of the challenges of introducing intelligent capture technologies as part of a digital transformation initiative.

You can register to attend here, and there’s a discount if you’re an AIIM member.

You can read about last month’s meeting here, which featured Jason Bero of Microsoft talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

BPM skills in 2017–ask the experts!

Zbigniew Misiak over at BPM Tips decided to herd the cats, and asked a number of BPM experts about the skills that are required – and those that are no longer relevant – as we move into 2017 and beyond. I was happy to be included in that group, and my contribution is here.

In a nutshell, I had advice for both the process improvement/engineering groups and the IT groups that are involved in BPM implementations. Basically, the former need to learn more about the potential power of automation as a process improvement tool and how a BPMS can help with that; while the latter need to stop using agile low-code BPMS tools to do monolithic, waterfall-driven implementations. I also addressed the need for citizen developers – usually semi-technical business analysts who build “end user computing” solutions directly within business units – to start using low-code BPMS tools for this instead of spreadsheets.

On the side of skills that are no longer relevant, I’m seeing less need for Lean/Six Sigma efforts that focus on incremental process improvements rather than innovation. There are definitely industries with material assets and processes that benefit greatly from LSS methodologies, but its use in knowledge-based service organizations is waning.

Check out the entire post at the link above for the views of several others in the industry.

AIIM Toronto seminar: @jasonbero on Microsoft’s ECM

I’ve recently rejoined AIIM — I was a member years ago when I did a lot of document capture and workflow implementation projects, but drifted away as I became more focused on process — and decided to check out this month’s breakfast seminar hosted by the AIIM Toronto chapter. Today’s presenter was Jason Bero from Microsoft Canada, who is a certified records manager and information governance specialist, talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

He started out with AIIM’s view of the stages of information management (a diagram that can be found online, and is almost certainly copyright AIIM) as a framework for describing where SharePoint fits in and its new functionality.

There’s a shift happening in information management, since a lot of information is now created natively in electronic form, may be generated by customers and employees using mobile apps, and may even be stored outside the corporate firewall on cloud ECM platforms. This creates challenges in authentication and security, content privacy protection, automatic content classification, and content federation across platforms. Microsoft is adding data loss prevention (DLP) and records management capabilities to SharePoint to meet some of these challenges (see the sketch after the list for a sense of what a DLP content rule does), including:

  • Compliance Policy Center
  • DLP policies and management
  • Policy notification messages
  • Document deletion policies
  • Enhanced retention and disposition policies for working documents
  • Document- and records-centric workflow with a web-based workflow design tool
  • Advanced e-discovery for unstructured documents, including identifying relevant relationships between documents
  • Advanced auditing, including SharePoint Online and OneDrive for Business as well as on-premise repositories
  • Data governance: somewhat controversially (at my table of breakfast colleagues, anyway), this replaces the use of metadata fields with a new “tags” concept
  • Rights management on documents that can be applied to both internal and external viewers of a document
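
As promised above, here’s a minimal sketch of the sort of content-detection rule a DLP policy encodes; the patterns and labels are my own illustrative stand-ins, not Microsoft’s implementation:

```python
import re

# Illustrative stand-ins for the kinds of patterns a DLP policy matches;
# real products use validated detectors, not just regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "canadian_sin": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-content labels found in a document."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

doc = "Customer card 4111 1111 1111 1111 on file."
print(classify(doc))  # {'credit_card'}
```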

AIIM describes an information classification and protection cycle: classification, labeling, encryption, access control, policy enforcement, document tracking, and document revocation; Bero described how SharePoint addresses these requirements, with particular attention paid to Canadian concerns for the local audience, such as encryption keys. I haven’t looked at SharePoint in quite a while (and I’m not really much of an ECM expert any more), but this looks like functionality that boosts SharePoint into a more complete ECM and RM solution. It muscles in on some of the territory of their ISV partners who have provided these capabilities as SharePoint add-ons, although I imagine that a lot of Microsoft customers are lingering on ancient versions of SharePoint and will still be using those third-party add-ons for a while. In organizations that are pure Microsoft, however, the way that they can integrate their ECM/RM capabilities across all of their information creation, management and collaboration tools — from Office 365 to Skype for Business — provides a seamless environment for protecting and managing information.

He gave a live demo of some of these capabilities at work, showing how the PowerPoint presentation that he used would be automatically classified, shared, protected and managed based on its content and metadata, plus the manual overrides that can be applied, such as emailing him when an internal or external collaborator opens the document. Documents sent to external participants are accompanied by Microsoft Rights Management, providing the ability to see when and where people open the document, limiting or allowing downloads and printing, and allowing the originator to revoke access to the document. [Apparently, it’s now highly discouraged to send emails with attachments within Microsoft, which is a bit ironic considering that bloated Outlook .pst files due to email attachments are the scourge of many IT departments.] Some of their rights management can be applied to non-Microsoft repositories such as Box, although this requires a third-party add-on.

There was a question about synchronous collaborative editing of documents: you can now do this with shared Office documents using a combination of the desktop applications and browser apps, such that you see other people’s changes in the document in real time while you’re editing (like Google Docs), without requiring explicit check-out/check-in. I assume that this requires the document to be stored in a Microsoft repository, either on-premise or cloud, but that’s still an impressive upgrade.

One of the goals in this foray by Microsoft into more advanced ECM is to provide capabilities that are as automated as possible, and generally easy to use for anything requiring manual input. This allows records management to happen on the fly by everyday users, rather than requiring a lot of intervention by trained records management people or complex custom workflows, and allows DLP policies to be applied directly within the tools that people are already using for creating, managing and sharing documents. Given the dominance of Microsoft on the desktop of today’s businesses, and the proliferation of SharePoint, this is a good way to improve compliance through better control over information assets.

BPM books for your reading list

I noticed that Zbigniew’s reading list of BPM books for 2017 included both of the books where I have author credit on Amazon: Social BPM and Best Practices for Knowledge Workers.

You can find the ebooks on Amazon for about $10 each.

I’ve also been published in a couple of academic books and journals, but somehow it’s more exciting to see my name on Amazon, since I don’t really think of myself as an author. After writing almost a million words on this blog (968,978 prior to this post), maybe I should reconsider!