TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of the Open Government Office at Ontario’s Treasury Board Secretariat, Lan Nguyen, deputy CIO at the City of Toronto, and Bianca Wylie of the Open Data Institute Toronto. Nguyen started out talking about how data is a key asset to the city: they have a ton of it, gathered from over 800 systems, and are actively working on data governance and on how the data can best be used. The city wants a platform for consuming this data that will allow it to be properly managed (e.g., from a privacy standpoint) while making it available to the appropriate entities. Eno-Brown followed with a description of the province’s open data initiatives, which include a full catalog of provincial data sets, both open and closed. Many provincial agencies, such as the LCBO, are also releasing their data sets as part of this initiative, and there’s a need for standards regarding the availability and format of the data in order to enable its consumption. Wylie covered open data initiatives more generally: the data needs to be free to access, machine-consumable (typically not in PDF, for example), and free to use and distribute as part of public applications. I use a few apps that consume City of Toronto open data, including the one that tells me when my streetcar is arriving; we would definitely not have apps like this if we waited for the City to build them, and open data allows them to evolve in the private sector. Even though those apps don’t generate direct revenue for the City, the success of the private businesses that build them does result in indirect benefits: tax revenue, a reduction in calls and inquiries to government offices, and a more vibrant digital ecosystem.
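
As an aside on what “machine-consumable” looks like in practice: many government open data catalogues are built on CKAN, which exposes data sets through a simple JSON API, so pulling one down takes only a few lines of code. This is a minimal sketch rather than a reference to any specific portal; the base URL and dataset ID below are placeholders, not the City’s or the province’s actual endpoints.

```python
# Minimal sketch: fetch a CSV resource from a CKAN-style open data portal.
# The portal URL and dataset ID are placeholders, not real endpoints.
import csv
import io

import requests

PORTAL = "https://example-open-data-portal.ca"   # placeholder base URL
DATASET = "streetcar-arrival-times"              # hypothetical dataset ID

# CKAN's package_show action returns dataset metadata, including its resources
meta = requests.get(
    f"{PORTAL}/api/3/action/package_show", params={"id": DATASET}, timeout=30
).json()

# Pick the first CSV resource: machine-consumable, unlike a PDF report
csv_resources = [
    r for r in meta["result"]["resources"]
    if (r.get("format") or "").upper() == "CSV"
]
resp = requests.get(csv_resources[0]["url"], timeout=30)

rows = list(csv.DictReader(io.StringIO(resp.text)))
print(f"Fetched {len(rows)} rows; columns: {list(rows[0].keys())}")
```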

Although data privacy and security are important, they are often used as excuses for not sharing data when an entity benefits unduly by keeping it private: the MLS comes to mind, with the recent fight to open up real estate listings and sale data. Nguyen repeated the City’s plan to build a platform for sharing open data in a more standard fashion, but didn’t directly address the issue of opening up data that is currently held as private. Eno-Brown more directly addressed the protectionist attitude of many public servants towards their data, and how that is changing as more information becomes available through a variety of online sources: if you can Google it and find it online, what’s the sense in not releasing the data set in a standard format? They perform risk assessments before releasing data sets, which can result in some data cleansing and redaction, but they are focused on finding a way to release the data if at all feasible. Interestingly, many of the consumers of Ontario’s open data are government of Ontario employees: it’s the best way for them to find the data that they need to do their daily work. Wylie addressed the people and cultural issues of releasing open data, and how understanding what people are trying to do with the data can facilitate its release. Open data for business and open data for government are not two different things: they should be covered under the same initiatives, with private-public partnerships leveraged where possible to make the process more effective and less costly. She also pointed out that shared data — that is, data shared within and between government agencies — still has a long way to go, and should be prioritized over open data where it can help serve constituents better.

The issue of analytics came up near the end of the panel: Nguyen noted that it’s not just the data that matters, but the insights that can be derived from it in order to drive actions and policies. Personally, I believe that this is well served by opening up the raw data to the public, where it will be analyzed far more thoroughly than the City is likely to do itself. I agree with her premise that open data should be used to drive socioeconomic innovation, which supports my view that many of the apps and analyses are likely to emerge from outside government, but likely only if more complete raw data is released rather than pre-aggregated data.

TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman Reinvents, Ryan Lanyon, Autonomous Vehicle Working Group at the City of Toronto, and Alex Miller, president of Esri Canada. The panel was titled Drones, Driverless Cars and IoT, with a focus on how intelligent devices interact with citizens in the context of a smart city. I used to work in remote sensing and geographic information systems (GIS), so having the head of Esri Canada talk about how GIS acts as a smart fabric on which these devices live was particularly interesting to me. Miller talked about how there needs to be a framework and processes for enabling smarter communities, from observation and measurement, through data integration and management, visualization and mapping, analysis and modeling, planning and design, and decision-making, all the way to action. The vision is a self-aware community, where smart devices built into infrastructure and buildings feed back into an integrated view that can inform and decide.

Lanyon talked about autonomous cars in the City of Toronto, from the standpoint of the required technology, public opinion, and the cultural shift away from individual car ownership. Rajani followed with a brief look at the digital experiences that brands create for consumers, and then the discussion circled back to the other two panelists on how the city could explore private-public sensor data sharing, whether for cars, retail stores or drones. They also discussed the issues of drones in the city: not just regulations and safety, but the problem of sharing space both on and above the ground in a dense downtown core. A golf cart-sized pizza delivery robot is fine for suburbs with few pedestrians, but just won’t work on Bay Street at rush hour.

The panel finished with a discussion on IoT for buildings, and the advantages of “sensorizing” our buildings. It goes back to being able to gather better data, whether it’s external local factors like pollution and traffic, internal measurements such as energy consumption, or visitor stats via beacons. There are various uses for the data collected, both by public and private sector organizations, but you can be sure that a lot of this ends up in those silos that Mark Fox referred to earlier today.

The morning finished with a keynote by John Tory, the mayor of Toronto. This week’s shuffling of City Council duties included designating Councillor Michelle Holland as Advocate for the Innovation Economy, since Tory feels that the city is not doing enough to enable innovation for the benefit of residents. Part of this is encouraging and supporting technology startups, but it’s also about bringing better technology to bear on digital constituent engagement. Just as I see with my private-sector clients, online customer experiences for many services are poor, internal processes are manual, and a lot of things exist only on paper. New digital services are starting to emerge at the city, but it’s a slow process and there’s a long road of innovation ahead. Toronto has made commitments to innovation in technology as well as arts and culture, and is actively seeking to bring in new players and new investments. Tory sees the Kitchener-Waterloo technology corridor as a partner with the Toronto technology ecosystem, not a competitor: building a 21st-century city requires bringing the best tools and skills to bear on solving civic problems, and leveraging technology from Canadian companies brings benefits on both sides. We need to keep moving forward to turn Toronto into a genuinely smart city, to better serve constituents and save money at the same time, keeping us at or near the top of livable city rankings. He also promised that he will step down after a second term, if he gets it. 🙂

Breaking now for lunch, with afternoon sessions on open data and digital change agents.

By the way, I’m blogging using the WordPress Android app on a Nexus tablet today (aided by a Microsoft Universal Foldable Keyboard), which is great except it doesn’t have spell checking. I’ll review these posts later and fix typos.

Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my work is related to digital transformation, and as a citizen who lives in the downtown area and makes use of a lot of city services.

After brief introductions by Fawn Annan, President and CMO of IT World Canada (the event sponsor), Mike Williams, GM of Economic Development and Culture with City of Toronto, and Rob Meikle, CIO at City of Toronto, we had an opening keynote from Mark Fox, professor of Urban Systems Engineering at University of Toronto, on how to use open city data to fix civic problems.

Fox characterized the issues facing digital transformation as potholes and sinkholes: the former are a bit more cosmetic and can be easily paved over, while the latter indicate that some infrastructure change is required. Cities are, he pointed out, not rocket science: they’re much more complex than rocket science. As systems, cities are complicated as well as complex, with many different subsystems and components spanning people, information and technology. He showed a number of standard smart city architectures put forward by both vendors and governments, and emphasized that data is at the heart of everything.

He covered several points about data:

  • Sparseness: the data that we collect is only a small subset of what we need; it’s often stored in silos and not easily accessed by other areas, and it’s frequently lost (or inaccessible) after a period of time. In other words, some of the sparseness is due to poor design, and some is due to poor data management hygiene.
  • Premature aggregation, wherein raw data is aggregated spatially, temporally and categorically based on what you think people want from the data, removing their ability to do their own analysis on the raw records (see the sketch after this list).
  • Interoperability and the ability to compare information between municipalities, even for something as simple as date fields and other attributes. Standards for these data sets need to be established and used by municipalities in order to allow meaningful data comparisons.
  • Completeness of open data, that is, what data a government chooses to make available, and whether it is available as raw data or only in aggregate. This needs to be driven by the problems that the consumers of the open data are trying to solve.
  • Visualization, which is straightforward when you have a couple of data sets, but much more difficult when you are combining many data sets — his example was the City of Edmonton using 233 data sets to come up with crime and safety measures.
  • Entitlement: governments often feel a sense of entitlement about their data, such that they choose to hold back more than they should, whereas they should be in the business of empowering citizens to use this data to solve civic problems.
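
To make the premature aggregation point concrete, here’s the small sketch referenced above. The data and field names are invented for illustration, not drawn from an actual city data set: once only the monthly, ward-level rollup is published, questions that depend on the raw records can no longer be asked.

```python
# Illustrative sketch of premature aggregation (invented data and fields).
import pandas as pd

# Raw records: one row per service request, with full time and location detail
raw = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2016-11-01 08:15", "2016-11-01 23:40",
        "2016-11-02 09:05", "2016-11-15 17:30",
    ]),
    "ward": [3, 3, 7, 7],
    "category": ["pothole", "noise", "pothole", "pothole"],
})

# What a portal might publish if it pre-aggregates: monthly counts per ward
published = (
    raw.groupby([raw["timestamp"].dt.to_period("M"), "ward"])
       .size()
       .rename("requests")
       .reset_index()
)
print(published)

# A question the raw data can answer but the aggregate cannot:
# how many pothole reports arrive outside business hours?
off_hours = raw[(raw["category"] == "pothole")
                & ~raw["timestamp"].dt.hour.between(9, 17)]
print(len(off_hours))
```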

Smart cities can’t be managed in a strict sense, Fox believes; rather, it’s a matter of managing complexity and uncertainty. We need to understand the behaviours that we want the system (i.e., the smart city) to exhibit, and work towards achieving those. This means more than just sensing the environment: it also means understanding limits and constraints, plus knowing when deviations are significant and who needs to know about them. These systems need to be responsive and goal-oriented, flexibly responding to events based on desired outcomes rather than a predefined process (or, as I would say, unstructured rather than structured processes): this requires situational understanding, flexibility, shared knowledge and empowerment of the participants. Systems also need to be introspective, that is, able to compare their performance to goals, find new ways to achieve those goals more effectively, and predict outcomes. Finally, cities (and their systems) need to be held accountable for their actions, which requires that activities be auditable to determine responsibility, and that the underlying basis for decisions be known, so that a digital ombudsman can provide oversight.
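
Purely as a thought experiment on what “responsive, goal-oriented and deviation-aware” could look like when reduced to code, here is a toy sketch; the metrics, targets and recipients are all invented, and a real smart-city platform would obviously be far richer than this.

```python
# Toy sketch of goal-oriented, deviation-aware behaviour (all names invented).
from dataclasses import dataclass
from typing import List


@dataclass
class Goal:
    metric: str
    target: float
    tolerance: float      # how far from target still counts as "on goal"
    notify: List[str]     # who needs to know about significant deviations


GOALS = [
    Goal("intersection_wait_seconds", target=45.0, tolerance=15.0,
         notify=["traffic-ops"]),
    Goal("station_air_quality_index", target=50.0, tolerance=25.0,
         notify=["public-health"]),
]


def on_measurement(metric: str, value: float) -> None:
    """React to a sensed value based on desired outcomes, not a fixed process."""
    for goal in GOALS:
        if goal.metric == metric and abs(value - goal.target) > goal.tolerance:
            # The deviation is significant: route it to whoever needs to know
            print(f"{metric}={value}: outside goal, alerting {goal.notify}")


on_measurement("intersection_wait_seconds", 95.0)
```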

Great talk, and very aligned with what I see in the private sector too: although the terminology is a bit different, the principles, technologies and challenges are the same.

Next, we heard from Hussam Ayyad, director of startup services at Ryerson University’s DMZ — a business incubator for tech startups — on Canadian FinTech startups. The DMZ has incubated more than 260 startups that have raised more than $206M in funding over their six years in existence, making them the #1 university business incubator in North America, and #3 in the world. They’re also ranked most supportive of FinTech startups, which makes sense considering their geographic proximity to Toronto’s financial district. Toronto is already a great place for startups, and this definitely provides a step up for the hot FinTech market by providing coaching, customer links, capital and community.

Unfortunately, I had to duck out partway through Ayyad’s presentation for a customer meeting, but plan to return for more of Technicity this afternoon.

Intelligent Capture enables Digital Transformation at #ABBYYSummit16

I’ve been in beautiful San Diego for the past couple of days at the ABBYY Technology Summit, where I gave the keynote yesterday on why intelligent capture (including recognition technologies and content analytics) is a necessary onramp to digital transformation. I started my career in imaging and workflow over 25 years ago – what we would now call capture, ECM and BPM – and I’ve seen over and over again that if you don’t extract good data up front as quickly as possible, then you just can’t do a lot to transform those downstream processes. You can see my slides at Slideshare as usual:

I’m finishing up a white paper for ABBYY on the same topic, and will post a link here when it’s up on their site. Here’s the introduction (although it will probably change slightly before final publication):

Data capture from paper or electronic documents is an essential step for most business processes, and often is the initiator for customer-facing business processes. Capture has traditionally required human effort – data entry workers transcribing information from paper documents, or copying and pasting text from electronic documents – to expose information for downstream processing. These manual capture methods are inefficient and error-prone, but more importantly, they hinder customer engagement and self-service by placing an unnecessary barrier between customers and the processes that serve them.

Intelligent capture – including recognition, document classification, data extraction and text analytics – replaces manual capture with fully-automated conversion of documents to business-ready data. This streamlines the essential link between customers and your business, enhancing the customer journey and enabling digital transformation of customer-facing processes.
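
To make the stages named above (recognition, classification, extraction, then hand-off as business-ready data) a bit more tangible, here is a conceptual sketch in Python. Every function is a stand-in of my own invention, not ABBYY’s or anyone else’s API; a real capture engine would also report measured per-field confidence so that low-certainty results can be routed to a person.

```python
# Conceptual sketch of an intelligent capture pipeline (all functions are
# stand-ins for illustration, not any vendor's API).
from dataclasses import dataclass


@dataclass
class CaptureResult:
    doc_type: str        # e.g. "invoice"
    fields: dict         # extracted, business-ready data
    confidence: float    # recognition confidence, 0..1


def recognize(image: bytes) -> str:
    """Stand-in for OCR/ICR: turn a page image into text."""
    return "INVOICE  No: 12345  Total: 99.50"


def classify(text: str) -> str:
    """Stand-in for document classification."""
    return "invoice" if "INVOICE" in text else "unknown"


def extract_fields(text: str, doc_type: str) -> dict:
    """Stand-in for data extraction / text analytics."""
    tokens = text.split()
    return {"number": tokens[tokens.index("No:") + 1],
            "total": tokens[tokens.index("Total:") + 1]}


def capture(image: bytes) -> CaptureResult:
    text = recognize(image)
    doc_type = classify(text)
    fields = extract_fields(text, doc_type)
    # A real engine reports measured confidence; hard-coded here for the sketch
    return CaptureResult(doc_type, fields, confidence=0.97)


print(capture(b"...page image bytes..."))   # business-ready data, no manual keying
```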

I chilled out a bit after my presentation, then decided to attend one session that looked really interesting. It was, but it was an advance preview of a product that’s embargoed until it comes out next year, so you’ll have to wait for my comments on it. 😉

It was a well-run event with a lot of interesting content, attended primarily by partners who build solutions based on ABBYY products, as well as many of ABBYY’s team from Russia (where a significant amount of their development is done) and other locations. It’s nice to attend a 200-person conference for a change, where – just like Cheers – everybody knows your name.

AIIM Toronto seminar: FNF Canada’s data capture success

Following John Mancini’s keynote, we heard from two of the sponsors, SourceHOV and ABBYY. Pam Davis of SourceHOV spoke about EIM/ECM market trends, based primarily on analyst reports and surveys, before giving an overview of their BoxOffice product.

ABBYY chose to give their speaking slot to a customer, Anjum Iqbal of FNF Canada, who spoke about their capture and ECM projects. FNF provides services to financial institutions in a variety of lending areas, and deals with a lot of faxed documents. A new business line would push their volume to 4,500 inbound faxes daily, mostly time-sensitive documents, such as mortgage or loan closings, that need to be processed within an hour of receipt. To do this manually, they would have needed to increase their four full-time staff to ten people to handle the inbound workflow, even at a rate of one document per minute; instead, they used ABBYY FlexiCapture to build a capture solution for the faxes that extracts the data using OCR and interfaces with their downstream content and workflow systems without human intervention. The presentation went by pretty quickly, but we learned that they had a three-month implementation time.
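
The staffing estimate holds up if you assume roughly 7.5 working hours per person per day (my assumption; the presentation didn’t state it):

```python
# Quick sanity check on the staffing figure above (7.5-hour working day assumed).
faxes_per_day = 4500
minutes_per_document = 1

person_hours_per_day = faxes_per_day * minutes_per_document / 60   # 75 hours of keying
staff_required = person_hours_per_day / 7.5                        # = 10 people
print(person_hours_per_day, staff_required)
```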

I stayed on for the roundtable that ABBYY hosted, with Iqbal giving more details on their implementation. They reached a tipping point when the volume of inbound printed faxes just couldn’t be handled manually, particularly when they added new business lines that would increase their volume significantly. Unfortunately, the processes involving the banks were stuck on fax technology — that is, the banks refused to move to electronic transfer rather than faxes — so they needed to work within that fixed constraint. They needed quality data with near-zero error rates extracted from the faxes, and selected ABBYY and one of their partners to help build a solution that took advantage of standard form formats and 100% machine printing on the forms (rather than handwriting). The forms weren’t strictly fixed format, in that some critical information, such as mortgage rates, may be in different places on the document depending on the other content of the form; this requires more intelligent document classification as well as content analytics to extract the information. They have more than 40 templates that cover all of their use cases, although they still need one person in the process to manage any exceptions where the recognition certainty falls below a certain percentage. Given the generally poor quality of faxed documents, this capture process could undoubtedly also handle documents scanned on a standard business scanner or even a mobile device, in addition to their current RightFax server. Once the data is captured, it’s formatted as XML, which their internal development team uses to integrate with the downstream processes, while the original faxes are stored in a content management system.
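
Iqbal didn’t go into implementation detail beyond the XML hand-off, but the pattern he described (accept high-confidence fields straight through, route the rest to a single reviewer) looks roughly like the sketch below. The element names, attributes and threshold are my own invention for illustration, not FNF’s actual schema.

```python
# Illustrative sketch of the hand-off pattern: parse the XML emitted by the
# capture engine and route low-confidence fields to manual review.
# Element/attribute names and the threshold are invented.
import xml.etree.ElementTree as ET

CONFIDENCE_THRESHOLD = 0.95

sample_xml = """
<document template="mortgage-closing">
  <field name="mortgage_rate" confidence="0.91">3.29</field>
  <field name="closing_date" confidence="0.99">2016-11-30</field>
</document>
"""

doc = ET.fromstring(sample_xml)
extracted, exceptions = {}, []

for field in doc.findall("field"):
    name, value = field.get("name"), field.text
    if float(field.get("confidence")) >= CONFIDENCE_THRESHOLD:
        extracted[name] = value            # straight through to downstream systems
    else:
        exceptions.append((name, value))   # the one reviewer handles these

print("auto-accepted:", extracted)
print("needs review:", exceptions)
```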

Given that these processes accept mortgage/loan application forms and produce the loan documents and other related documentation, this entire business seems ripe for disruption, although given the glacial pace of technology adoption in Canadian financial services, this could be some time off. With the flexible handling of inbound documents that they’ve created, FNF Canada will be ready for it when it happens.

That’s it for me at the AIIM Toronto seminar; I had to duck out early and missed the two other short sponsor presentations by SystemWare and Lexmark/Kofax, as well as lunch and the closing keynote. Definitely worth catching up with some of the local people in the industry as well as hearing the customer case studies.

Case management in insurance

I recently wrote a paper on how case management technology can be used in insurance claims processing, sponsored by DST (but not about their products specifically). From the paper overview:

Claims processing is a core business capability within every insurance company, yet it is often one of the most inefficient and risky processes. From the initial communication that launches the claim to the final resolution, the end-to-end claims process is complex and strictly regulated, requiring highly-skilled claims examiners to perform many of the activities to adjudicate the claim.

Managing a manual, paper-intensive claims processing operation is a delicate balance between risk and efficiency, with most organizations choosing to decrease risk at the cost of lower operational efficiency. For example, skilled examiners may perform rote tasks to avoid the risk of handing off work to less-experienced colleagues; or manual tracking and logging of claims activities may have to be done by each worker to ensure a proper audit trail.

A Dynamic Case Management (DCM) system provides an integrated and automated claims processing environment that can improve claim resolution time and customer satisfaction, while improving compliance and efficiency.

You can download it from DST’s site (registration required).

10 years on WordPress, 11+ blogging

This popped up in the WordPress Android app the other day:

This blog started in March 2005 (and my online journalling goes back to 2000 or so), but I passed through a Movable Type phase before settling into self-hosted WordPress in June 2007, porting the complete history over at that time. WordPress continues to be awesome, including a great new visual editor in the latest Android version, although my flaky hosting provider is about to get the boot.

I’ve written 2,575 posts — an average of about one every business day, but quite unevenly distributed — and garnered almost 3,300 comments. Those posts include a total of almost 900,000 words, or 10 good-sized books. Maybe it’s time to actually write one of those books!

Take Mike Marin’s CMMN survey: learn something and help CMMN research

CMMN diagram from OMG CMMN 1.0 specification document

Mike Marin, who had a hand in creating FileNet’s ECM platform and continued that work at IBM as chief architect on their Case Manager product, is taking a bit of time away from IBM to complete his PhD. He’s researching complexity metrics for the Case Management Model and Notation (CMMN) standard, and he needs people to fill out a survey in order to complete his empirical research. The entire thing will take 45-60 minutes and can be completed in multiple sessions; 30-40 minutes of that is an optional tutorial, which you can skip if you’re already familiar with CMMN.

Here’s his invitation to participate (feel free to share with your process and case modeling friends):

We are conducting research on the Case Management Modeling and Notation (CMMN) specification and need your help. You don’t need to be familiar with CMMN to participate, but you should have some basic understanding of process technology or graphical modeling (for example: software modeling, data modeling, object modeling, process modeling, etc.), as CMMN is a new modeling notation. Participation is voluntary and no identifiable personal information will be collected.

You will learn more about CMMN with the tutorial; and you will gain some experience and appreciation for CMMN by evaluating two models in the survey. This exercise should take about 45 to 60 minutes to complete; but it can be done in multiple sessions. The tutorial is optional and it should take 30 to 40 minutes. The survey should take 15 to 20 minutes. You can consider the survey a quiz on the tutorial.

As an appreciation for your collaboration, we will donate $6 (six dollars) to a charity of your choice and we will provide you with early results of the survey.

You can use the following URL to take the tutorial and survey. The first page provides more information on the project.

http://cmmn.limequery.org/index.php/338792?lang=en

He wrote a more detailed description of the research over on BPTrends.

Mike’s a former colleague and a friend, but I’m not asking just because of that: he’s also a Distinguished Engineer at IBM and a contributor to standards and technology that make a huge impact in our field. Help him out, take the survey, and it will help us all out in the long run.

Now available: Best Practices for Knowledge Workers

I couldn’t make it to the BPM and Case Management Summit in DC this week, but it includes the launch of the new book, Best Practices for Knowledge Workers: Innovation in Adaptive Case Management, for which I wrote the foreword.

In that section, which I called “Beyond Checklists”, I looked at the ways that we are making ACM smarter, using rules, analytics, machine learning and other technologies. That doesn’t mean that ACM will become less reliant on the knowledge workers that work cases; rather, these technologies support them through recommendations and selective automation.

I also covered the ongoing challenges of collaboration within ACM, particularly the issue of encouraging collaboration through social metrics that align the actions of individuals with corporate goals.

You’ll find chapters by many luminaries in the field of BPM and ACM, including some real-world case studies of ACM in action.

American Express digital transformation at Pegaworld 2016

Howard Johnson and Keith Weber from American Express talked about their digital transformation to accommodate their expanding market of corporate card services for global accounts, middle-market and small businesses. Digital servicing using their @work portal was designed with customer engagement in mind, and developed using Agile methodologies for improved flexibility and time to market. They developed a set of guiding principles: it needed to be easy to use, scalable to any size of servicing customer, and proactive in providing assistance on managing cash flow and other non-transactional interactions. They also wanted consistency across channels, rather than their previous hodge-podge of processes and teams that varied depending on the channel.

AmEx used to be a waterfall development shop — which enabled them to offshore a lot of the development work, but meant 10-16 month delivery times — but they have moved to small, agile teams with continuous delivery. It’s interesting to think back to this morning’s keynote, where Gerald Chertavian of Year Up said that they were contacted by AmEx about providing trained Java/Pega developers to help with re-onshoring their development teams; the AmEx presenter said that he had four of the Year Up people on his team and they were great. This is a pretty negative commentary on the effectiveness of outsourced, offshore development teams for agile and continuous delivery, which is considered essential for today’s market. AmEx is now hiring technical people for onshore development co-located with their business process experts, greatly reducing delivery times and improving quality.

Technology-wise, they have moved to an omni-channel platform that uses Pega case management, standardizing 65% of their processes while providing a single source of the truth. This has resulted in faster development (lower cost per market and integration time, with improved configurability) while enabling future capabilities including availability, analytics and a process API. On the business side, they’re looking at a lot of interesting capabilities for the future: big data-enabled insights, natural language search, pluggable widgets to extend the portal, and frequent releases to keep rolling this out to customers.

It sounds like they’re adopting best practices from a technology design and development standpoint, and that’s really starting to pay off in customer experience. It will be interesting to see whether other large organizations — with large, slow-moving offshore development shops — can learn the same lessons.