Twelve years – and a million words – of Column 2

In January, I read Paul Harmon’s post at BPTrends on predictions for 2017, and he mentioned that it was the 15th anniversary of BPTrends. This site hasn’t been around quite that long, but today marks 12 years of blogging here on Column 2. Coincidentally, my first post was on the BPTrends 2005 report on BPM suites!

In that time, I’ve written more than a million words in about 2,600 posts – haven’t quite got around to writing that book yet – documenting many conferences and products, as well as emerging trends and standards in BPM. I’ve collected over 3,000 comments from many of you, which I consider a measure of success: I write here to engage people and discuss ideas. Many of you have become clients, colleagues and friends over the years, and it’s always a thrill to meet someone for the first time and hear them say “I read your blog”. I know that I’ve inspired others to pick up that keyboard and start blogging, and my RSS reader is still the first place that I go for news about the industry (hint: I’m more likely to read your site if you publish a full RSS feed; I only get to the partial ones every week or so).

In the early days, I blogged more frequently, every couple of days; now I seem to be caught up in projects that consume a lot of my time and have fewer hours to spend focused on writing. Also, I’ve cut back on my business conference travel in the past year or so, attending only the ones where I’m presenting or where I feel that there is value for me, which gives me far fewer opportunities to blog about conference sessions. I’m not going to make any predictions about whether I’ll blog more or less in the next 12 years; I’m just happy to have a soapbox to stand on.

AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and intelligent capture. ABBYY is generously sponsoring this meeting and will give a brief presentation/demo on their intelligent capture and text analytics products after my presentation.

Here’s the description of my talk:

This presentation will look at how digital transformation is increasing the value of capture and text analytics, recognizing that these technologies provide an “on ramp” to the intelligent, automated processes that underlie digital transformation. Using examples from financial services and retail companies, we will examine the key attributes of this digital transformation. We will review, step by step, the role of intelligent capture in digital transformation, showing how a customer-facing financial services process is changed by intelligent capture technologies. We will finish with a discussion of the challenges of introducing intelligent capture technologies as part of a digital transformation initiative.

You can register to attend here, and there’s a discount if you’re an AIIM member.

You can read about last month’s meeting here, which featured Jason Bero of Microsoft talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

BPM skills in 2017–ask the experts!

Zbigniew Misiak over at BPM Tips decided to herd the cats, asking a number of BPM experts about the skills that are required – and those that are no longer relevant – as we move into 2017 and beyond. I was happy to be included in that group, and my contribution is here.

In a nutshell, I had advice for both the process improvement/engineering groups, and the IT groups that are involved in BPM implementations. Basically, the former needs to learn more about the potential power of automation as a process improvement tool and how BPMS can help with that; while the latter needs to stop using agile low-code BPMS tools to do monolithic, waterfall-driven implementations. I also addressed the need for citizen developers – usually semi-technical business analysts that build “end user computing” solutions directly within business units – to start using low-code BPMS tools to do this instead of spreadsheets.

On the side of skills that are no longer relevant, I’m seeing less need for Lean/Six Sigma efforts that focus on incremental process improvements rather than innovation. There are definitely industries with material assets and processes that benefit greatly from LSS methodologies, but its use in knowledge-based service organizations is waning.

Check out the entire post at the link above for the views of several others in the industry.

AIIM Toronto seminar: @jasonbero on Microsoft’s ECM

I’ve recently rejoined AIIM — I was a member years ago when I did a lot of document capture and workflow implementation projects, but drifted away as I became more focused on process — and decided to check out this month’s breakfast seminar hosted by the AIIM Toronto chapter. Today’s presenter was Jason Bero from Microsoft Canada, who is a certified records manager and information governance specialist, talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

He started out with AIIM’s view of the stages of information management (following diagram found online but almost certainly copyright AIIM) as a framework for describing where SharePoint fits in and its new functionality:

There’s a shift happening in information management, since a lot of information is now created natively in electronic form, may be generated by customers and employees using mobile apps, and even stored outside the corporate firewall on cloud ECM platforms. This creates challenges in authentication and security, content privacy protection, automatic content classification, and content federation across platforms. Microsoft is adding data loss prevention (DLP) and records management capabilities to SharePoint to meet some of these challenges, including:

  • Compliance Policy Center
  • DLP policies and management
  • Policy notification messages
  • Document deletion policies
  • Enhanced retention and disposition policies for working documents
  • Document- and records-centric workflow with a web-based workflow design tool
  • Advanced e-discovery for unstructured documents, including identifying relevant relationships between documents
  • Advanced auditing, including SharePoint Online and OneDrive for Business as well as on-premise repositories
  • Data governance: somewhat controversially (at my table of breakfast colleagues, anyway), this replaces the use of metadata fields with a new “tags” concept
  • Rights management on documents that can be applied to both internal and external viewers of a document

AIIM describes an information classification and protection cycle: classification, labeling, encryption, access control, policy enforcement, document tracking, and document revocation; Bero described how SharePoint addresses these requirements, with particular attention paid to Canadian concerns for the local audience, such as encryption keys. I haven’t looked at SharePoint in quite a while (and I’m not really much of an ECM expert any more), but it looks like lots of functionality that boosts SharePoint into a more complete ECM and RM solution. This muscles in on some of the territory of their ISV partners who have provided these capabilities as SharePoint add-ons, although I imagine that a lot of Microsoft customers are lingering on ancient versions of SharePoint and will still be using those third-party add-ons for a while. In organizations that are pure Microsoft, however, the ways that they can integrate their ECM/RM capabilities across all of their information creation, management and collaboration tools — from Office 365 to Skype For Business — provide a seamless environment for protecting and managing information.

He gave a live demo of some of these capabilities at work, showing how the PowerPoint presentation that he used would be automatically classified, shared, protected and managed based on its content and metadata, and the additional manual overrides that can be applied such as emailing him when an internal or external collaborator opens the document. Documents sent to external participants are accompanied by Microsoft Rights Management, providing the ability to see when and where people open the document, limiting or allowing downloads and printing, and allowing the originator to revoke access to the document. [Apparently, it’s now highly discouraged to send emails with attachments within Microsoft, which is a bit ironic considering that bloated Outlook PST files full of email attachments are the scourge of many IT departments.] Some of their rights management can be applied to non-Microsoft repositories such as Box, although this required a third-party add-on.

There was a question about synchronous collaborative editing of documents: you can now do this with shared Office documents using a combination of the desktop applications and browser apps, such that you see other people’s changes in the document in real time while you’re editing (like Google Docs), without requiring explicit check-out/check-in. I assume that this requires that the document is stored in a Microsoft repository, either on-premise or cloud, but that’s still an impressive upgrade.

One of the goals in this foray by Microsoft into more advanced ECM is to provide capabilities that are automated as much as possible, and generally easy-to-use for anything requiring manual input. This allows records management to happen on the fly by everyday users, rather than requiring a lot of intervention by trained records management people or complex custom workflows, and to have DLP policies applied directly within the tools that people are already using for creating, managing and sharing documents. Given the dominance of Microsoft on the desktop of today’s businesses, and the proliferation of SharePoint, this is a good way to improve compliance with better control over information assets.

BPM books for your reading list

I noticed that Zbigniew’s reading list of BPM books for 2017 included both of the books where I have author credit on Amazon: Social BPM, and Best Practices for Knowledge Workers.

You can find the ebooks on Amazon for about $10 each:


I’ve also been published in a couple of academic books and journals, but somehow it’s more exciting to see my name on Amazon, since I don’t really think of myself as an author. After writing almost a million words on this blog (968,978 prior to this post), maybe I should reconsider!

RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and today they hosted a webinar to talk more about it. They are taking requests to join the beta program now, with a plan to launch publicly at the end of Q1 2017.

WorkFusion has a lot of interesting features in their RPA Express and Smart Process Automation (SPA) products, but today’s webinar was really about their business model for RPA Express. This is not a limited-time trial: it’s a free, enterprise-ready product that can generate business benefit, free to purchase and with no annual maintenance fees, although you obviously have infrastructure costs for the servers/desktops on which RPA Express runs. Their goal in making it free is to bypass the whole RFP-POC-ROI dance that goes on in most organizations, where a decision to implement RPA – which typically can show a pretty good ROI within a matter of weeks – can take months. With a free product, one major barrier to implementation has been removed.

So what’s the catch? WorkFusion has a more intelligent automation tool, SPA, and they’re hoping that by seeing the benefits of using RPA Express, organizations will want to try out SPA on their more complex automation needs. RPA Express uses deterministic, rules-based automation, which requires explicit training or specification of each action to be taken; SPA uses machine learning to learn from user actions in order to perform automation of tasks that would typically require human intervention, such as handling unstructured and dynamic data. WorkFusion envisions a “stairway to digital operations” that starts with RPA, then steps up the intelligence with cognitive processing, chatbots and crowdsourcing to a full set of cognitive services in SPA.

This doesn’t mean that RPA Express is just a “starter edition” for SPA: there are entire classes of processes that can be handled with deterministic automation, for example, synchronizing data between systems that may not talk to each other, such as SAP and Salesforce. This replaces having a worker copy and paste information between screens, or (horrors!) re-type the information in two or more systems; it can result in a huge reduction in cost and time, and remove the tedious work from people to free them up for more complex decision-making or direct customer interaction.
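
To make the deterministic, rules-based idea concrete, here’s a minimal sketch of that kind of system-to-system sync: an explicit field-mapping table where every action is specified in advance and nothing is learned or inferred. This is illustrative only, not WorkFusion’s actual tooling; the SAP-style source fields and CRM-style target fields are hypothetical examples.

```python
# Illustrative only: a deterministic, rules-based sync in the spirit of RPA,
# replacing copy-and-paste between two systems. Field names are hypothetical.

# Explicit mapping rules: every action is specified in advance, nothing is learned.
FIELD_MAP = {
    "KUNNR": "AccountNumber",   # customer number
    "NAME1": "Name",            # customer name
    "ORT01": "BillingCity",     # city
}

def sync_record(source_record: dict) -> dict:
    """Apply the mapping rules; unmapped fields are ignored, never guessed."""
    return {target: source_record[source]
            for source, target in FIELD_MAP.items()
            if source in source_record}

crm_update = sync_record({"KUNNR": "0000421", "NAME1": "Acme Corp", "ORT01": "Toronto"})
# crm_update == {'AccountNumber': '0000421', 'Name': 'Acme Corp', 'BillingCity': 'Toronto'}
```

The point of the sketch is that the bot never has to understand the data: it only replays explicit rules, which is exactly why this class of work automates so cleanly.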

RPA Express bots can also be called from other orchestration and automation tools, including a BPMS, and can run on a server or on individual desktops. We didn’t get a rundown of the technology, so more to come on that as they get closer to the release. We did see one or two screens, and it’s based on modeling processes using a subset of BPMN (start and end events, activities, XOR gateways) that can be easily handled by a business user/analyst to create the automation flows, plus using recorder bots to capture actions while users are running through the processes to be automated. There was a mention of coding on the platform as well, although it was clear that this was not required in many cases, hence development skills are not essential.
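
A flow built from that BPMN subset (start and end events, activities, XOR gateways) needs very little machinery to represent and execute. This is a toy sketch under my own assumptions about node types and structure, not RPA Express’s actual format:

```python
# Toy sketch only: executing a flow built from a BPMN subset of start/end
# events, activities, and XOR gateways. The node model is my own assumption.

flow = {
    "start":        {"type": "start", "next": "read_invoice"},
    "read_invoice": {"type": "activity", "next": "check_amount",
                     "action": lambda ctx: ctx.update(amount=120)},
    "check_amount": {"type": "xor",
                     "branch": lambda ctx: "small" if ctx["amount"] < 500 else "large",
                     "next": {"small": "auto_post", "large": "end"}},
    "auto_post":    {"type": "activity", "next": "end",
                     "action": lambda ctx: ctx.update(posted=True)},
    "end":          {"type": "end"},
}

def run(flow, ctx):
    """Walk the flow from the start event until an end event is reached."""
    node = flow["start"]
    while node["type"] != "end":
        if node["type"] == "activity":
            node["action"](ctx)          # deterministic step, e.g. a recorded action
        if node["type"] == "xor":
            next_id = node["next"][node["branch"](ctx)]  # exclusive gateway choice
        else:
            next_id = node["next"]
        node = flow[next_id]
    return ctx

result = run(flow, {})   # {'amount': 120, 'posted': True}
```

The simplicity is the selling point: with only these three constructs, a business analyst can express a complete automation flow without touching code.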

Removing the cost of software changes the game, allowing more organizations to get started with this technology without having to do an internal cost justification for the licensing costs. There are still training and implementation costs, but WorkFusion plans to provide some of this through online video courses, as well as having the large SIs and other partners trained to have this in their toolkit when they are working with organizations. More daunting is the architectural review that most new software needs to go through before being installed within a larger organization: this can still block the implementation even if the software is free.

I’m looking forward to seeing a more complete technical rundown when the product is closer to launch date. I’m also looking forward to seeing how this changes the price point of RPA from other vendors.

TechnicityTO 2016: Challenges, Opportunities and Change Agents

The day at Technicity 2016 finished up with two panels: the first on challenges and opportunities, and the second on digital change agents.

The challenges and opportunities panel, moderated by Jim Love of IT World Canada, was more of a fireside chat with Rob Meikle, CIO at City of Toronto, and Mike Williams, GM of Economic Development and Culture, both of whom we heard from in the introduction this morning. Williams noted that they moved from an environment of few policies and fewer investments under the previous administration to a more structured and forward-thinking environment under Mayor John Tory, and that this introduced a number of IT challenges; although the City can’t really fail in the way that a business can fail, it can be ineffective at serving its constituents. Meikle added that they have a $12B operating budget and $33B in capital investments, so we’re not talking about small numbers: even at those levels, there needs to be a fair amount of justification that a solution will solve a civic problem rather than just buying more stuff. This is not just a challenge for the City, but for the vendors that provide those solutions.

There are a number of pillars to technological advancement that the City is striving to establish:

  • be technologically advanced and efficient in their internal operations
  • understand and address digital divides that exist amongst residents
  • create an infrastructure of talent and services that can draw investment and business to the City

This last point becomes a bit controversial at times, when there is a lack of understanding of why City officials need to travel to promote the City’s capabilities, or support private industry through incubators. Digital technology is how we will survive and thrive in the future, so promoting technology initiatives has widespread benefits.

There was a discussion about talent: both people who work for the City, and bringing in businesses that draw private-sector talent. Our now-vibrant downtown core is attractive for tech companies and their employees, fueled by the graduates of our universities. The City still has challenges with procurement to bring in external services and solutions: Williams admitted that their processes need improvement, and are hampered by cumbersome procurement rules. Democracy is messy, and it slows down things that could probably be done a lot faster in a less free state: a reasonable trade. 🙂

The last session of the day looked at examples of digital change agents in Toronto, moderated by Fawn Annan of IT World Canada, and featuring Inspector Shawna Coxon of the Toronto Police Service, Pam Ryan from Service Development & Innovation at the Toronto Public Library, Kristina Verner, Director Intelligent Communities of Waterfront Toronto, and Sara Diamond, President of OCAD University. I’m a consumer and a supporter of City services such as these, and I love seeing the new ways that they’re using to include all residents and advance technology. Examples of initiatives include fiber broadband for all waterfront community residences regardless of income level; providing mobile information access to neighbourhood police officers to allow them to get out of their cars and better engage with the community; integrating arts and design education with STEM for projects such as transit and urban planning (STEAM is the new STEM); and digital innovation hubs at some library branches to provide community access to high-tech gadgets such as 3D printers.

There was a great discussion about what it takes to be a digital innovator in these contexts: it’s as much about people, culture and inclusion as it is about technology. There are always challenges in measuring success: metrics need to include the public’s opinion of these agencies and their digital initiatives, an assessment of the impact of innovation on participants, as well as more traditional factors such as number of constituents served.

That’s it for Technicity 2016, and kudos to IT World Canada and the City of Toronto for putting this day together. I’ve been to a couple of Technicity conferences in the past, and always enjoy them. Although I rarely do work for the public sector in my consulting business, I really enjoy seeing how digital transformation is occurring in that sector; I also like hearing how my great city is getting better.

TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of Open Government Office at Ontario’s Treasury Board Secretariat, Lan Nguyen, deputy CIO at City of Toronto, and Bianca Wylie of the Open Data Institute Toronto. Nguyen started out talking about how data is a key asset to the city: they have a ton of it gathered from over 800 systems, and are actively working at establishing data governance and how it can best be used. The city wants to have a platform for consuming this data that will allow it to be properly managed (e.g., from a privacy standpoint) while making it available to the appropriate entities. Eno-Brown followed with a description of the province’s initiatives in open data, which includes a full catalog of their data sets including both open and closed data sets. Many of the provincial agencies such as the LCBO are also releasing their data sets as part of this initiative, and there’s a need to ensure that standards are used regarding the availability and format of the data in order to enable its consumption. Wylie covered more about open data initiatives in general: the data needs to be free to access, machine-consumable (typically not in PDF, for example), and free to use and distribute as part of public applications. I use a few apps that use City of Toronto open data, including the one that tells me when my streetcar is arriving; we would definitely not have apps like this if we waited for the City to build them, and open data allows them to evolve in the private sector. Even though those apps don’t generate direct revenue for the City, success of the private businesses that build them does result in indirect benefits: tax revenue, reduction in calls/inquiries to government offices, and a more vibrant digital ecosystem.
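
Wylie’s “machine-consumable” point is worth making concrete: a CSV or JSON data set can be loaded and queried in a few lines, whereas the same table locked in a PDF needs brittle extraction before anyone can build an app on it. A minimal sketch with invented data (the route, stop and times are made up for the example, not an actual City feed):

```python
import csv, io

# Invented sample of a machine-consumable open-data extract, in the spirit of
# a next-vehicle-arrival feed. In a real portal this would be fetched from a
# published CSV or JSON endpoint rather than embedded as a string.
csv_text = """route,stop,eta_minutes
504,King St W at Spadina Ave,3
504,King St W at Spadina Ave,11
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Querying the data is trivial once it's in a structured format:
next_eta = min(int(r["eta_minutes"]) for r in rows if r["route"] == "504")
# next_eta == 3
```

Every transit app built on open data is doing some variation of this; publish the same table as a PDF and none of those apps get built.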

Although data privacy and security are important, these are often used as excuses for not sharing data when an entity benefits unduly by keeping it private: the MLS comes to mind with the recent fight to open up real estate listings and sale data. Nguyen repeated the City’s plan to build a platform for sharing open data in a more standard fashion, but didn’t directly address the issue of opening up data that is currently held as private. Eno-Brown more directly addressed the protectionist attitude of many public servants towards their data, and how that is changing as more information becomes available through a variety of online sources: if you can Google it and find it online, what’s the sense in not releasing the data set in a standard format? They perform risk assessments before releasing data sets, which can result in some data cleansing and redaction, but they are focused on finding a way to release the data if at all feasible. Interestingly, many of the consumers of Ontario’s open data are government of Ontario employees: it’s the best way for them to find the data that they need to do their daily work. Wylie addressed the people and cultural issues of releasing open data, and how understanding what people are trying to do with the data can facilitate its release. Open data for business and open data for government are not two different things: they should be covered under the same initiatives, and private-public partnerships leveraged where possible to make the process more effective and less costly. She also pointed out that shared data — that is, within and between government agencies — still has a long way to go, and should be prioritized over open data where it can help serve constituents better.

The issue of analytics came up near the end of the panel: Nguyen noted that it’s not just the data, but what insights can be derived from the data in order to drive actions and policies. Personally, I believe that this is well-served by opening up the raw data to the public, where it will be analyzed far more thoroughly than the City is likely to do themselves. I agree with her premise that open data should be used to drive socioeconomic innovation, which supports my idea that many of the apps and analysis are likely to emerge from outside the government, but likely only if more complete raw data are released rather than pre-aggregated data.

TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman Reinvents, Ryan Lanyon, Autonomous Vehicle Working Group at City of Toronto, and Alex Miller, President of Esri Canada. The title of the panel is Drones, Driverless Cars and IoT, with a focus on how intelligent devices are interacting with citizens in the context of a smart city. I used to work in remote sensing and geographic information systems (GIS), and having the head of Esri Canada talk about how GIS acts as a smart fabric on which these devices live is particularly interesting to me. Miller talked about how there needs to be a framework and processes for enabling smarter communities, from observation and measurement, data integration and management, visualization and mapping, analysis and modeling, planning and design, and decision-making, all the way to action. The vision is a self-aware community, where smart devices that are built into infrastructure and buildings can feed back into an integrated view that can inform and decide.

Lanyon talked about autonomous cars in the City of Toronto, from the standpoint of the required technology, public opinion, and cultural changes away from individual car ownership. Rajani followed with a brief on potential digital experiences that brands create for consumers, then we circled back to the other two participants about how the city can explore private-public sensor data sharing, whether for cars or retail stores or drones. They also discussed issues of drones in the city: not just regulations and safety, but the issue of sharing space both on and above the ground in a dense downtown core. A golf cart-sized pizza delivery robot is fine for the suburbs with few pedestrians, but just won’t work on Bay Street at rush hour.

The panel finished with a discussion on IoT for buildings, and the advantages of “sensorizing” our buildings. It goes back to being able to gather better data, whether it’s external local factors like pollution and traffic, internal measurements such as energy consumption, or visitor stats via beacons. There are various uses for the data collected, both by public and private sector organizations, but you can be sure that a lot of this ends up in those silos that Mark Fox referred to earlier today.

The morning finished with a keynote by John Tory, the mayor of Toronto. This week’s shuffling of City Council duties included designating Councillor Michelle Holland as Advocate for the Innovation Economy, since Tory feels that the city is not doing enough to enable innovation for the benefit of residents. Part of this is encouraging and supporting technology startups, but it’s also about bringing better technology to bear on digital constituent engagement. Just as I see with my private-sector clients, online customer experiences for many services are poor, internal processes are manual, and a lot of things only exist on paper. New digital services are starting to emerge at the city, but it’s a bit of a slow process and there’s a long road of innovation ahead. Toronto has made commitments to innovation in technology as well as arts and culture, and is actively seeking to bring in new players and new investments. Tory sees the Kitchener-Waterloo technology corridor as a partner with the Toronto technology ecosystem, not a competitor: building a 21st century city requires bringing the best tools and skills to bear on solving civic problems, and leveraging technology from Canadian companies brings benefits on both sides. We need to keep moving forward to turn Toronto into a genuinely smart city to better serve constituents and to save money at the same time, keeping us near or at the top of livable city rankings. He also promised that he will step down after a second term, if he gets it. 🙂

Breaking now for lunch, with afternoon sessions on open data and digital change agents.

By the way, I’m blogging using the WordPress Android app on a Nexus tablet today (aided by a Microsoft Universal Foldable Keyboard), which is great except it doesn’t have spell checking. I’ll review these posts later and fix typos.

Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my work is related to digital transformation, and as a citizen who lives in the downtown area and makes use of a lot of city services.

After brief introductions by Fawn Annan, President and CMO of IT World Canada (the event sponsor), Mike Williams, GM of Economic Development and Culture with City of Toronto, and Rob Meikle, CIO at City of Toronto, we had an opening keynote from Mark Fox, professor of Urban Systems Engineering at University of Toronto, on how to use open city data to fix civic problems.

Fox characterized the issues facing digital transformation as potholes and sinkholes: the former are a bit more cosmetic and can be easily paved over, while the latter indicate that some infrastructure change is required. Cities are, he pointed out, not rocket science: they’re much more complex than rocket science. As systems, cities are complicated as well as complex, with many different subsystems and components spanning people, information and technology. He showed a number of standard smart city architectures put forward by both vendors and governments, and emphasized that data is at the heart of everything.

He covered several points about data:

  • Sparseness: the data that we collect is only a small subset of what we need, it’s often stored in silos and not easily accessed by other areas, and it’s frequently lost (or inaccessible) after a period of time. In other words, some of the sparseness is due to poor design, and some is due to poor data management hygiene.
  • Premature aggregation, wherein raw data is aggregated spatially, temporally and categorically when you think you know what people want from the data, removing their ability to do their own analysis on the raw data.
  • Interoperability and the ability to compare information between municipalities, even for something as simple as date fields and other attributes. Standards for these data sets need to be established and used by municipalities in order to allow meaningful data comparisons.
  • Completeness of open data, that is, what data a government chooses to make available, and whether it is available as raw data or in aggregate. This needs to be driven by what problems the consumers of the open data are trying to solve.
  • Visualization, which is straightforward when you have a couple of data sets, but much more difficult when you are combining many data sets — his example was the City of Edmonton using 233 data sets to come up with crime and safety measures.
  • Entitlement: governments often feel a sense of entitlement about their data, such that they choose to hold back more than they should, whereas they should be in the business of empowering citizens to use this data to solve civic problems.
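
Fox’s premature-aggregation point is easy to demonstrate with a few lines of code: once a publisher rolls raw records up along one dimension, every question along any other dimension is lost. A small sketch with invented incident records:

```python
# Invented incident records to illustrate premature aggregation: raw data
# supports any later question; a roll-up supports only the one it was built for.
raw = [
    {"ward": 1, "hour": 2,  "type": "theft"},
    {"ward": 1, "hour": 23, "type": "theft"},
    {"ward": 2, "hour": 2,  "type": "mischief"},
]

# What a publisher might release instead of the raw records: totals per ward.
per_ward = {}
for r in raw:
    per_ward[r["ward"]] = per_ward.get(r["ward"], 0) + 1
# per_ward == {1: 2, 2: 1}

# "How many incidents happened overnight, by type?" is unanswerable from
# per_ward alone, but trivial from the raw records:
overnight_by_type = {}
for r in raw:
    if r["hour"] < 6 or r["hour"] >= 22:
        overnight_by_type[r["type"]] = overnight_by_type.get(r["type"], 0) + 1
# overnight_by_type == {"theft": 2, "mischief": 1}
```

The aggregated table answers exactly one question; the raw records answer that one and every other slice a consumer might want, which is why releasing raw (suitably anonymized) data matters.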

Smart cities can’t be managed in a strict sense, Fox believes, but rather it’s a matter of managing complexity and uncertainty. We need to understand the behaviours that we want the system (i.e., the smart city) to exhibit, and work towards achieving those. This is more than just sensing the environment, but also understanding limits and constraints, plus knowing when deviations are significant and who needs to know about the deviations. These systems need to be responsive and goal-oriented, flexibly responding to events based on desired outcomes rather than a predefined process (or, as I would say, unstructured rather than structured processes): this requires situational understanding, flexibility, shared knowledge and empowerment of the participants. Systems also need to be introspective, that is, compare their performance to goals and find new ways to achieve goals more effectively and predict outcomes. Finally, cities (and their systems) need to be held accountable for actions, which requires that activities need to be auditable to determine responsibility, and the underlying basis for decisions be known, so that a digital ombudsman can provide oversight.

Great talk, and very aligned with what I see in the private sector too: although the terminology is a bit different, the principles, technologies and challenges are the same.

Next, we heard from Hussam Ayyad, director of startup services at Ryerson University’s DMZ — a business incubator for tech startups — on Canadian FinTech startups. The DMZ has incubated more than 260 startups that have raised more than $206M in funding over their six years in existence, making them the #1 university business incubator in North America, and #3 in the world. They’re also ranked most supportive of FinTech startups, which makes sense considering their geographic proximity to Toronto’s financial district. Toronto is already a great place for startups, and this definitely provides a step up for the hot FinTech market by providing coaching, customer links, capital and community.

Unfortunately, I had to duck out partway through Ayyad’s presentation for a customer meeting, but plan to return for more of Technicity this afternoon.