TechnicityTO 2018: Taming Transportation Troubles with Technology

Every year, IT World Canada organizes the Technicity conference in Toronto, providing a technology showcase for the city and an opportunity to hear about some of the things happening both in the city government and in organizations that operate here. Fawn Annan, president of ITWC, opened the conference and introduced the city manager, Chris Murray, for a backgrounder on the city as an economic engine and how technology enables that.

The sessions started with a panel on transportation technology, moderated by Jaime Leverton, GM of Cogeco Peer 1, and featuring three people from the City of Toronto: Barb Gray, General Manager of Transportation Services; Ryan Landon, Autonomous Vehicle Lead; and Jesse Coleman, Transportation Big Data Team Leader. Erik Mok, Chief Enterprise Architect for the Toronto Transit Commission, was also supposed to be on the panel but hadn’t arrived yet: hopefully not delayed on the TTC. 🙂

They spoke about the need for data collection in order to determine how to improve transportation in the city, whether related to personal vehicles, public transit, cycling or walking. In the past, this required manual data collection on the street; these days, the proliferation of traffic cameras, embedded sensors and smartphones means that a lot of data is being collected about how people are moving around the streets. This creates a need to understand how to work with the resulting big data, and huge opportunities for gaining better insights into making the streets more efficient and safer for everyone. Since the city is a big proponent of open data, the data that it collects is available (in an anonymized format) to anyone who wants to analyze it. The city is trying to do some of this analysis itself (without the benefit of a data scientist job classification at the city), but the open data initiative means that a lot of commercial organizations — from big companies to startups — are incorporating this data into apps and services.

For the King Street Pilot, a year-old project that restricts the travel of private cars on our busiest streetcar route in order to prioritize public transit, the city deployed new types of sensors to measure the impact: Bluetooth sensors that track devices, traffic cameras with embedded AI, and more. This allows for unbiased measurement of the actual impact of the pilot (and other initiatives) that can be communicated to constituents.
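
To give a concrete sense of what “open data” means in practice, here’s a minimal Python sketch of querying a CKAN-style open data catalogue (CKAN is a common platform for municipal open data portals, including Toronto’s); the base URL and search term are placeholders rather than the portal’s actual values.

    import requests

    # Placeholder base URL for a CKAN-style open data portal; check the city's
    # open data site for the real address and dataset names.
    CKAN_BASE = "https://open-data.example.ca/api/3/action"

    def find_transportation_datasets(keyword="traffic"):
        # package_search is a standard CKAN API action that returns dataset metadata as JSON
        resp = requests.get(f"{CKAN_BASE}/package_search", params={"q": keyword, "rows": 10})
        resp.raise_for_status()
        for dataset in resp.json()["result"]["results"]:
            print(dataset["title"])
            for res in dataset.get("resources", []):
                # each resource usually includes a direct download URL (CSV, JSON, shapefile, etc.)
                print("  ", res.get("format"), res.get("url"))

    if __name__ == "__main__":
        find_transportation_datasets()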

There are privacy safeguards in place to ensure that the Bluetooth devices being tracked can’t be traced to an individual on an ongoing basis, but video is a larger issue: in general, the transportation-related intelligence is extracted from the video, and then the video itself is discarded. They mentioned the need for privacy by design, that is, building privacy considerations into any data collection project from the start rather than trying to add them on later.
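
The panel didn’t go into implementation details, but a common privacy-by-design approach for this kind of sensor data is to hash device identifiers with a salt that rotates periodically, so that a device can be matched between sensors within a time window (to compute travel times) but can’t be tracked across windows. A rough sketch of that idea, purely as an illustration:

    import hashlib
    import secrets
    from datetime import datetime, timezone

    # Hypothetical illustration, not the city's actual implementation: hash each
    # Bluetooth MAC address with a salt that changes every hour. Within an hour
    # the same device produces the same token, so travel time between two sensors
    # can be computed, but tokens from different hours can't be linked together.

    _salts = {}  # hour bucket -> random salt

    def anonymize(mac_address, when=None):
        when = when or datetime.now(timezone.utc)
        bucket = when.strftime("%Y-%m-%dT%H")    # one salt per hour
        salt = _salts.setdefault(bucket, secrets.token_bytes(16))
        digest = hashlib.sha256(salt + mac_address.encode()).hexdigest()
        return digest[:16]    # truncated: enough to match within the window, hard to reverse

    # The same device seen at two sensors in the same hour produces the same token:
    print(anonymize("AA:BB:CC:DD:EE:FF") == anonymize("AA:BB:CC:DD:EE:FF"))  # True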

They also discussed some of the smart sensors being used to control traffic signals, where the length of the waiting queue of vehicles can influence when the signals change. This isn’t just about vehicles, however: there’s an impact on pedestrians who use the same intersections, and a public health impact for people with mobility challenges.
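
As a toy illustration of queue-actuated control (not the city’s actual algorithm), a signal might extend its green phase while a detected queue is still clearing, but only up to a maximum so that pedestrians and cross traffic aren’t starved:

    # Toy illustration of queue-actuated signal timing, not any real controller's logic.
    MIN_GREEN = 15   # seconds: always long enough for pedestrians to cross
    MAX_GREEN = 60   # seconds: cap so cross traffic and pedestrians aren't starved
    EXTENSION = 5    # seconds added per check while a queue is still detected

    def green_time(queue_length, vehicles_cleared_per_extension=3):
        """Return a green duration that grows with the detected queue, within bounds."""
        green, remaining = MIN_GREEN, queue_length
        while remaining > 0 and green < MAX_GREEN:
            green += EXTENSION
            remaining -= vehicles_cleared_per_extension
        return green

    print(green_time(2))    # short queue: close to the minimum green
    print(green_time(25))   # long queue: extended, but capped at MAX_GREEN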

Cities like Seattle, San Francisco and New York, which started transportation data collection much earlier than Toronto, are doing some innovative things, but the panel feels that we’re catching up: there’s an autonomous shuttle project in the works now to fill some of the gaps in our transit system, for example. There’s also some work being done with drones to monitor traffic congestion around special events (presumably both vehicle and pedestrian) in order to understand dispersal patterns.

Interesting audience questions on data storage (Amazon AWS) and standardization of data formats, especially related to IoT.

As a Toronto resident who uses public transit, walks a lot and sometimes even drives, I found this a great look at how big data is feeding into improved mobility for everyone.

Webinar: Unlocking Back Office Value by Automating Processes

I’ve been quiet here for a while – the result of having too much real work, I suppose 😉 – but wanted to highlight a webinar that I’ll be doing on December 13th with TrackVia and one of their customers, First Guaranty Mortgage Corporation, on automating back office processes:

With between 300 and 800 back-office processes to monitor and manage, it’s no wonder financial services leaders look to automate error-prone manual processes. Yet IT resources are scarce and reserved for only the most strategic projects. Join Sandy Kemsley, industry analyst, Pete Khanna, CEO of TrackVia, and Sarah Batangan, COO of First Guaranty Mortgage Corporation, for an interactive discussion about how financial services firms are digitizing the back office to unlock great economic value — with little to no IT resources.

During this webinar, you’ll learn about:

  • Identifying business-critical processes that need to be faster
  • Key requirements for automating back office processes
  • Role of low-code workflow solutions in automating processes
  • Results achieved by automating back office processes

I had a great discussion with Pete Khanna, CEO of TrackVia, while sitting on a panel with him back in January at OPEX Week, and we’ve been planning to do this webinar ever since then. The idea is that this is more of a conversational format: I’ll do a bit of context-setting up front, then it will become more of a free-flowing discussion between Sarah Batangan (COO of First Guaranty), Pete and myself based around the topics shown above.

You can register for the webinar here.

Unintended consequences (the good kind) of DigitalTransformation with @jkyriakidis

Jordan Kyriakidis, CEO of QRA Corp, spoke at a session at ITWC’s Toronto digital transformation conference on some of the unexpected consequences of technological advances in terms of collaboration and cross-fertilization of ideas. QRA is a tech startup in Atlantic Canada, and Kyriakidis’ examples are about how companies in that relatively small (economically) region are encouraging new ways of thinking about solving business problems through these sorts of “collisions”.

Addressing the complexity introduced by advancing technology means that we have to invent new methods and tools: his example from industrial design was how design moved from paper to computer-aided design, then added electronic design automation when the complexity of chip placement overwhelmed human capabilities, and now design verification allows model-based (requirements-driven) designs to be validated before more expensive engineering and production begins.

Another example of precision diagnosis and treatment was data-driven farming, which combines computer vision and big data analytics (plus drone delivery of treatments to individual plants) to optimize crop yields.

His third example was of integrating and analyzing a variety of data sources about a specific athlete to allow a coach to optimize training and performance for that athlete in their chosen sport.

His main theme of precision diagnosis and treatment — essentially, doing something different for every case based on the context — can be extended to pretty much any industry: consider the attempts by many consumer-facing companies to customize individual customer experiences. This was an interesting look at companies that are actually doing it.

FinTech panel at ITWC DigitalTransformation 2018

Lynn Elwood, VP Cloud and Services from OpenText, hosted a panel on FinTech to close out the morning at the ITWC digital transformation conference in Toronto. She started with some background on digital transformation in financial services, where there is still a strong focus on cost reduction, but customer engagement has become more important. She included survey results with a somewhat disappointing view of paperless offices: more than 75% of the respondents said that they would not be going paperless for as much as five years, or maybe never. Never??!! Maybe just not within the career lifetime of the respondents, but c’mon, never? I understand that digital transformation is not the same as content digitization, but if you’re still running on paper, that’s just going to fundamentally limit the degree of your transformation. At the same time, more than 75% said that they are using AI already or plan to in the short term (hopefully to replace the people who think that they’re never going to be paperless), and most organizations said that they were equal to or better than their peers in digital transformation (statistically unlikely). Unintentionally hilarious.

The panel was made up of Michael Ball, CISO Advisor for a number of firms including Freedom Mobile; Amer Matar, CTO of Moneris (a large Canadian payment processor); and Patrick Vice, partner at Insurance-Canada.ca (an industry organization for P&C insurance). Matar talked about how legacy technology holds back companies: existing companies have the advantage of being established incumbents, but newer players (e.g., Square in the payments market) can enter with a completely new business model and no legacy customers or infrastructure to drag along. Vice talked about how companies can combat this by spinning off separate business units to provide a more streamlined digital experience and brand, such as how Economical Insurance did with Sonnet (a project that I had the pleasure of working on last year), which still uses the established insurance organization behind a modern customer experience. Ball stressed that the legacy systems are evolving at a much slower rate than is required for digital transformation, and the new front ends need to go beyond just putting a friendly UI on the old technology: they need to incorporate new services to present a transformed customer experience.

They had an interesting discussion about security, and how moving to digital business models means that companies need to offer a more secure environment for customers. Many people are starting to look at security (such as two-factor authentication) as a competitive differentiator when they are selecting service providers, and while most people wouldn’t now change their bank just because it didn’t provide 2FA, it won’t be long before that is a decision point. It’s not just about cloud versus on-premise, although there are concerns about hosting Canadian customers’ financial data outside Canada, where financial laws (and government access to data) may be different; it’s about an organization’s ability to assure their customer that their information won’t be improperly accessed while offering a highly secure customer-facing portal. There’s a huge spend on security these days, but that needs to settle down as this becomes just baked into the infrastructure rather than an emergency add-on to existing (insecure) systems.

Good discussion, although it points out that it’s still early days for digital transformation in financial services.

Digital government with @AlexBenay at IT World DigitalTransformation 2018

I’ve attended IT World Canada conferences in Toronto before — easy for me to attend as a local, and some interesting content such as Technicity — and today they’re running a digital transformation conference (which, oddly, has the hashtag #digitaltransformation, as if that were a unique tag).

Alex Benay, CIO of the government of Canada, gave the opening keynote: with $6B/year in IT spend and more than a few high-profile mistakes that happened before he arrived in the job in early 2017, he has some views on how to do things better. He’s even written a book about digital government, but given that the federal government takes five years to write requirements, he’ll probably be long retired before we know if any of his predictions come true. He talked about some models of digital government, such as Estonia, and how the government of Canada is attempting to integrate its digital services into our everyday lives by partnering with the private sector: think Transport Canada road alerts built into your GM car, or passport renewal and customs forms triggered by an Expedia booking. He admits to a lot of obstacles, including untrained staff in spite of massive training spends, but also many enablers to reaching their goals, such as changing policies around cloud-first deployments. He finished with five core tenets for any government IT organization moving forward:

  • Open data by default while protecting citizens
  • Collaborate in the open
  • Grow digital talent
  • Change laws/policies to avoid situations like Facebook/Cambridge Analytica
  • Adapt business models to focus only on meeting user needs (procurement, tech management, service design)

Good principles, and I hope that our government can learn to live by them.

AI and BPM: my article for @Bonitasoft on making processes more intelligent

Part of my work as an industry analyst is to write papers and articles (and present webinars), sponsored by vendors, on topics that will be of interest to their clients as well as a broader audience. I typically don’t talk about the sponsor’s products or give them any sort of promotion; it’s intended to be educational thought leadership that will help their clients and prospects to understand the complex technology environment that we work in.

I’ve recently written an article on AI and BPM for Bonitasoft that started from a discussion we had after I contributed articles on adding intelligent technologies to process management to a couple of books, as well as writing here on my blog and giving a few presentations on the topic. From the intro of the article:

In 2016, I was asked to contribute to the Workflow Management Coalition’s book “Best Practices for Knowledge Workers.” My section, “Beyond Checklists”, called for more intelligent adaptive case management to drive innovation while maintaining operational efficiency. By the next year, they published “Intelligent Adaptability,” and I contributed a section called “Machine Intelligence and Automation in ACM [Adaptive Case Management] and BPM” that carried these ideas further. Another year on, it’s time to take a look at how the crossover between BPM and artificial intelligence (AI) — indeed, between BPM and a wide range of intelligent technologies — is progressing.

I go on to cover the specific technologies involved and what types of business innovation we can expect from more intelligent processes. You can read the entire article on Bonita’s website, on their LinkedIn feed and their Medium channel. If you prefer to read it in French, it’s also on the Decideo.fr industry news site, and apparently there’s a Spanish version in the works too.

Integrating process and content for digital transformation: my upcoming webinar

As much as I love chatting with the newer crop of entrepreneurs about their products and ideas, sometimes it’s nice to have a conversation with someone who remembers when OS/2 was the cheapest way to buy 3.5” high density disks. You know who’s been hip-deep in the technology of content and process as long as I have? John Newton, founder and CTO of Alfresco, that’s who. John started Documentum back in 1990, around the time that I was selling off my imaging/workflow product startup and starting my services company, and while he’s stayed on the product side and I’ve stayed on the services/industry analyst side (except for a brief period as FileNet’s BPM evangelist), we’re both focused on how this technology helps companies in their digital transformation journey.

John and I will get together on a webinar about integrating process and content on July 24, sponsored by Alfresco, which will combine structured content with a free-ranging conversation. We’re planning to talk about use cases for applications that integrate process and content, some best practices for designing these applications, and overall architectural considerations for process/content applications including cloud and microservices. Add a comment here or on Twitter if there’s something in particular that you’d like us to discuss, and we’ll see if we can work it in.

I wrote a blog post for Alfresco a couple of months ago on use cases for content in process applications, stressing the importance of integrating process and content rather than leaving them as siloed applications; in general, this is what I’ve seen over the years in my practice as a systems architect and consultant helping organizations to get their content digitized and their processes automated. If you have digital content that’s locked up without any way to take actions on it, or automated processes that still require manual lookups of related content, then you should be thinking about how to integrate process and content. Tune in to our webinar for pointers from a couple of industry gray-hairs.

Integrating your enterprise content with cloud business applications? I wrote a paper on that!

Just because there’s a land rush towards SaaS platforms like Salesforce for some of your business applications, it doesn’t mean that your content and data are all going to be housed on those platforms. In reality, you have a combination of cloud applications, cloud content that may apply across several applications, and on-premise content; users end up searching in multiple places for information in order to complete a single transaction.

In this paper, sponsored by Intellective (who have a bridging product for enterprise content/data with SaaS business applications), I wrote about some of the architecture and design issues that you need to consider when you’re linking these systems together. Here’s the introduction:

Software-as-a-service (SaaS) solutions provide significant utility and value for standard business applications, including customer relationship management (CRM), enterprise resource planning (ERP), supply chain management (SCM), human resources (HR), accounting, insurance claims management, and email. These “systems of engagement” provide a modern and agile user experience that guides workers through actions and enables collaboration. However, they rarely replace the core “systems of record”, and don’t provide the range of content services required by most organizations.

This creates an issue when, for example, a customer service worker’s primary environment is Salesforce CRM, but for every Salesforce activity they may also need to access multiple systems of record to update customer files, view regulatory documentation or initiate line-of-business (LOB) processes not supported in Salesforce. The worker spends too much time looking for information, risks missing relevant content in their searches, and may forget to update the same information in multiple systems.

The solution is to integrate enterprise content from the systems of record – data, process and documents – directly with the primary user-facing system of engagement, such that the worker sees a single integrated view of everything required to complete the task at hand. The worker completes their work more efficiently and accurately because they’re not wasting time searching for information; data is automatically updated between systems, reducing data entry effort and errors.
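
To make that pattern concrete (this sketch is mine, not from the paper, and the endpoint URLs are hypothetical), the bridging layer essentially fans out from the customer identifier in the system of engagement to each system of record and merges the results into a single view:

    import requests

    # Hypothetical endpoints standing in for real systems of record: an ECM repository,
    # a policy administration system, a LOB process engine, and so on.
    SYSTEMS_OF_RECORD = {
        "documents": "https://ecm.example.com/api/documents?customer={cid}",
        "policies":  "https://policy.example.com/api/policies?customer={cid}",
        "processes": "https://bpm.example.com/api/cases?customer={cid}",
    }

    def unified_customer_view(customer_id):
        """Fan out to each back-end system and merge the results into one view
        that could be embedded in the system of engagement (e.g., a CRM page)."""
        view = {"customer_id": customer_id}
        for name, url_template in SYSTEMS_OF_RECORD.items():
            try:
                resp = requests.get(url_template.format(cid=customer_id), timeout=5)
                resp.raise_for_status()
                view[name] = resp.json()
            except requests.RequestException as exc:
                # Show partial results rather than failing the entire view
                view[name] = {"error": str(exc)}
        return view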

Head on over to get the full paper (registration required).

Summer BPM reading, with dashes of AI, RPA, low-code and digital transformation

Summer always sees a bit of a slowdown in my billable work, which gives me an opportunity to catch up on reading and research across BPM and related fields. I’m often asked what blogs and other websites I read regularly to keep on top of trends and participate in discussions; here are some general guidelines for getting through a lot of material in a short time.

First, to effectively surf the tsunami of information, I use two primary tools:

  • An RSS reader (Feedly) with a hand-curated list of related sites. In general, if a site doesn’t have an RSS feed, then I’m probably not reading it regularly. Furthermore, if it doesn’t have a full feed – that is, one that shows the entire text of the article rather than a summary in the feed reader – it drops to a secondary list that I only read occasionally (or never); see the quick check after this list for how to tell the difference. This lets me browse quickly through articles directly in Feedly and see which have something interesting to read or share without having to open the links directly.
  • Twitter, with a hand-curated list of digital transformation-related Twitter users, both individuals and companies. This is a great way to find new sources of information, which I can then add to Feedly for ongoing consumption. I usually use the Tweetdeck interface to keep an eye on my list plus notifications, but rarely review my full unfiltered Twitter feed. That Twitter list is also included in the content of my Paper.li “Digital Transformation Daily”, and I’ve just restarted tweeting the daily link.
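
If you’re curious whether your own blog’s feed counts as a “full feed” in the sense above, here’s a quick check using the feedparser library (the feed URL is a placeholder): full feeds carry the article body in each entry’s content element rather than only a summary.

    import feedparser  # pip install feedparser

    def is_full_feed(feed_url):
        """Rough check: does the feed include full article content, not just summaries?"""
        feed = feedparser.parse(feed_url)
        if not feed.entries:
            return False
        entry = feed.entries[0]
        # Full feeds usually populate the content element; summary-only feeds don't,
        # or their content is barely longer than the summary.
        content = entry.get("content", [{}])[0].get("value", "")
        summary = entry.get("summary", "")
        return bool(content) and len(content) > 1.5 * len(summary)

    print(is_full_feed("https://example.com/feed"))  # placeholder URL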

Second, the content needs to be good to stay on my lists. I curate both of these lists manually, constantly adding and culling the contents to improve the quality of my reading material. If your blog posts are mostly promotional rather than informative, I remove them from Feedly; if you tweet too much about politics or your dog, you’ll get bumped off the DX list, although probably not unfollowed.

Third, I like to share interesting things on Twitter, and use Buffer to queue these up during my morning reading so that they’re spread out over the course of the day rather than all in a clump. To save things for a more detailed review later as part of ongoing research, I use Pocket to manually bookmark items, which also syncs to my mobile devices for offline reading, and an IFTTT script to save all links that I tweet into a Google sheet.
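
For anyone who wants to replicate that link-logging applet without IFTTT, here’s a rough Python equivalent using the tweepy library; the credentials and handle are placeholders, and a local CSV file stands in for the Google sheet.

    import csv
    import tweepy  # pip install tweepy

    # Placeholder credentials and handle, not my actual setup (which uses IFTTT).
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    with open("tweeted_links.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for status in api.user_timeline(screen_name="your_handle", count=50, tweet_mode="extended"):
            for url in status.entities.get("urls", []):
                # expanded_url is the original link behind Twitter's t.co shortener
                writer.writerow([status.created_at, url["expanded_url"]])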

You can take a look at what I share frequently through Twitter to get an idea of the sources that I think have value; in general, I directly @mention the source in the tweet to help promote their content. Tweeting a link to an article – and especially inclusion in the auto-curated Paper.li Digital Transformation Daily – is not an endorsement: I’ll add my own opinion in the tweet about what I found interesting in the article.

Time to kick back, enjoy the nice weather, and read a good blog!