AIIM Toronto seminar: @jasonbero on Microsoft’s ECM

I’ve recently rejoined AIIM — I was a member years ago when I did a lot of document capture and workflow implementation projects, but drifted away as I became more focused on process — and decided to check out this month’s breakfast seminar hosted by the AIIM Toronto chapter. Today’s presenter was Jason Bero from Microsoft Canada, who is a certified records manager and information governance specialist, talking about SharePoint and related Microsoft technologies that are used for classification, preservation, protection and disposal of information assets.

He started out with AIIM’s view of the stages of information management (diagram found online, but almost certainly copyright AIIM) as a framework for describing where SharePoint and its new functionality fit in:

There’s a shift happening in information management, since a lot of information is now created natively in electronic form, may be generated by customers and employees using mobile apps, and may even be stored outside the corporate firewall on cloud ECM platforms. This creates challenges in authentication and security, content privacy protection, automatic content classification, and content federation across platforms. Microsoft is adding data loss prevention (DLP) and records management capabilities to SharePoint to meet some of these challenges, including:

  • Compliance Policy Center
  • DLP policies and management
  • Policy notification messages
  • Document deletion policies
  • Enhanced retention and disposition policies for working documents
  • Document- and records-centric workflow with a web-based workflow design tool
  • Advanced e-discovery for unstructured documents, including identifying relevant relationships between documents
  • Advanced auditing, including SharePoint Online and OneDrive for Business as well as on-premise repositories
  • Data governance: somewhat controversially (at my table of breakfast colleagues, anyway), this replaces the use of metadata fields with a new “tags” concept
  • Rights management on documents that can be applied to both internal and external viewers of a document

AIIM describes an information classification and protection cycle: classification, labeling, encryption, access control, policy enforcement, document tracking, and document revocation; Bero described how SharePoint addresses these requirements, with particular attention paid to Canadian concerns for the local audience, such as encryption keys. I haven’t looked at SharePoint in quite a while (and I’m not really much of an ECM expert any more), but it looks like lots of functionality that boosts SharePoint into a more complete ECM and RM solution. This muscles in on some of the territory of their ISV partners who have provided these capabilities as SharePoint add-ons, although I imagine that a lot of Microsoft customers are lingering on ancient versions of SharePoint and will still be using those third-party add-ons for a while. In organizations that are pure Microsoft, however, the way that they can integrate their ECM/RM capabilities across all of their information creation, management and collaboration tools — from Office 365 to Skype for Business — provides a seamless environment for protecting and managing information.

He gave a live demo of some of these capabilities at work, showing how the PowerPoint presentation that he used would be automatically classified, shared, protected and managed based on its content and metadata, and the additional manual overrides that can be applied, such as emailing him when an internal or external collaborator opens the document. Documents sent to external participants are accompanied by Microsoft Rights Management, providing the ability to see when and where people open the document, limiting or allowing downloads and printing, and allowing the originator to revoke access to the document. [Apparently, it’s now highly discouraged to send emails with attachments within Microsoft, which is a bit ironic considering that bloated Outlook pst files due to email attachments are the scourge of many IT departments.] Some of their rights management can be applied to non-Microsoft repositories such as Box, although this requires a third-party add-on.

There was a question about synchronous collaborative editing of documents: you can now do this with shared Office documents using a combination of the desktop applications and browser apps, such that you see other people’s changes in a document in real time while you’re editing it (like Google Docs), without requiring explicit check-out/check-in. I assume that this requires the document to be stored in a Microsoft repository, either on-premise or cloud, but that’s still an impressive upgrade.

One of the goals in this foray by Microsoft into more advanced ECM is to provide capabilities that are automated as much as possible, and generally easy-to-use for anything requiring manual input. This allows records management to happen on the fly by everyday users, rather than requiring a lot of intervention by trained records management people or complex custom workflows, and to have DLP policies applied directly within the tools that people are already using for creating, managing and sharing documents. Given the dominance of Microsoft on the desktop of today’s businesses, and the proliferation of SharePoint, this is a good way to improve compliance through better control over information assets.

BPM books for your reading list

I noticed that Zbigniew’s reading list of BPM books for 2017 included both of the books where I have author credit on Amazon: Social BPM, and Best Practices for Knowledge Workers.

You can find the ebooks on Amazon for about $10 each.


I’ve also been published in a couple of academic books and journals, but somehow it’s more exciting to see my name on Amazon, since I don’t really think of myself as an author. After writing almost a million words on this blog (968,978 prior to this post), maybe I should reconsider!

RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and today they hosted a webinar to talk more about it. They are taking requests to join the beta program now, with a plan to launch publicly at the end of Q1 2017.

WorkFusion has a lot of interesting features in their RPA Express and Smart Process Automation (SPA) products, but today’s webinar was really about their business model for RPA Express. This is not a limited-time trial; it’s a free enterprise-ready product that can generate business benefit. It’s free to purchase, with no annual maintenance fees, although you obviously have infrastructure costs for the servers/desktops on which RPA Express runs. Their goal in making it free is to bypass the whole RFP-POC-ROI dance that goes on in most organizations, where a decision to implement RPA – which typically can show a pretty good ROI within a matter of weeks – can take months. With a free product, one major barrier to implementation has been removed.

So what’s the catch? WorkFusion has a more intelligent automation tool, SPA, and they’re hoping that by seeing the benefits of using RPA Express, organizations will want to try out SPA on their more complex automation needs. RPA Express uses deterministic, rules-based automation, which requires explicit training or specification of each action to be taken; SPA uses machine learning to learn from user actions in order to perform automation of tasks that would typically require human intervention, such as handling unstructured and dynamic data. WorkFusion envisions a “stairway to digital operations” that starts with RPA, then steps up the intelligence with cognitive processing, chatbots and crowdsourcing to a full set of cognitive services in SPA.

This doesn’t mean that RPA Express is just a “starter edition” for SPA: there are entire classes of processes that can be handled with deterministic automation, such as synchronizing data between two systems that may not talk to each other (SAP and Salesforce, for example). This replaces having a worker copy and paste information between screens, or (horrors!) re-type the information in two or more systems; it can result in a huge reduction in cost and time, and remove the tedious work from people to free them up for more complex decision-making or direct customer interaction.
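
Just to make that concrete, here’s a rough sketch of what that kind of deterministic integration looks like when expressed as code. The endpoints and field names below are invented for illustration, and a real RPA Express flow would be built in its own design tool rather than hand-coded, but the explicit field-by-field mapping is the essence of rules-based automation:

```python
import requests

# Hypothetical endpoints and field names, purely for illustration.
SOURCE_URL = "https://erp.example.com/api/customers"   # an SAP-style system
TARGET_URL = "https://crm.example.com/api/accounts"    # a Salesforce-style system

def sync_customers():
    """Copy customer records from a source system to a target system,
    replacing the copy-and-paste work a person would otherwise do."""
    customers = requests.get(SOURCE_URL, timeout=30).json()
    for customer in customers:
        # Deterministic field mapping: every rule is explicit, nothing is inferred
        account = {
            "name": customer["company_name"],
            "external_id": customer["customer_id"],
            "city": customer["address"]["city"],
        }
        response = requests.post(TARGET_URL, json=account, timeout=30)
        response.raise_for_status()  # fail loudly so exceptions can be routed to a person

if __name__ == "__main__":
    sync_customers()
```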

RPA Express bots can also be called from other orchestration and automation tools, including a BPMS, and can run on a server or on individual desktops. We didn’t get a rundown of the technology, so more to come on that as they get closer to the release. We did see one or two screens, and it’s based on modeling processes using a subset of BPMN (start and end events, activities, XOR gateways) that can be easily handled by a business user/analyst to create the automation flows, plus using recorder bots to capture actions while users are running through the processes to be automated. There was a mention of coding on the platform as well, although it was clear that this was not required in many cases, hence development skills are not essential.
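
For the curious, here’s a toy illustration of how a flow built from just that BPMN subset might be represented and executed. This is not WorkFusion’s notation or engine (they have their own designer and recorder bots); it’s only a sketch of how far you can get with start/end events, activities and exclusive gateways:

```python
# A toy flow using only start/end events, activities, and XOR gateways.

def fetch_invoice(ctx):
    ctx["amount"] = 1500          # stand-in for a recorded user action
    return ctx

def auto_approve(ctx):
    ctx["status"] = "approved"
    return ctx

def route_to_person(ctx):
    ctx["status"] = "needs review"
    return ctx

FLOW = {
    "start":           {"type": "start", "next": "fetch_invoice"},
    "fetch_invoice":   {"type": "activity", "action": fetch_invoice, "next": "amount_check"},
    "amount_check":    {"type": "xor",
                        "branches": [(lambda ctx: ctx["amount"] < 1000, "auto_approve")],
                        "default": "route_to_person"},
    "auto_approve":    {"type": "activity", "action": auto_approve, "next": "end"},
    "route_to_person": {"type": "activity", "action": route_to_person, "next": "end"},
    "end":             {"type": "end"},
}

def run(flow, ctx):
    node = flow["start"]
    while node["type"] != "end":
        if node["type"] == "activity":
            ctx = node["action"](ctx)
            node = flow[node["next"]]
        elif node["type"] == "xor":
            # Exclusive gateway: take the first branch whose condition holds
            target = next((t for cond, t in node["branches"] if cond(ctx)), node["default"])
            node = flow[target]
        else:  # start event
            node = flow[node["next"]]
    return ctx

print(run(FLOW, {}))   # {'amount': 1500, 'status': 'needs review'}
```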

Removing the cost of software changes the game, allowing more organizations to get started with this technology without having to do an internal cost justification for the licensing costs. There are still training and implementation costs, but WorkFusion plans to provide some of this through online video courses, as well as by training the large SIs and other partners to have this in their toolkit when they work with organizations. More daunting is the architectural review that most new software needs to go through before being installed within a larger organization: this can still block the implementation even if the software is free.

I’m looking forward to seeing a more complete technical rundown when the product is closer to its launch date. I’m also looking forward to seeing how this changes the price point of RPA from other vendors.

TechnicityTO 2016: Challenges, Opportunities and Change Agents

The day at Technicity 2016 finished up with two panels: the first on challenges and opportunities, and the second on digital change agents.

The challenges and opportunities panel, moderated by Jim Love of IT World Canada, was more of a fireside chat with Rob Meikle, CIO at City of Toronto, and Mike Williams, GM of Economic Development and Culture, both of whom we heard from in the introduction this morning. Williams noted that they moved from an environment of few policies and fewer investments under the previous administration to a more structured and forward-thinking environment under Mayor John Tory, and that this introduced a number of IT challenges; although the City can’t really fail in the way that a business can fail, it can be ineffective at serving its constituents. Meikle added that they have a $12B operating budget and $33B in capital investments, so we’re not talking about small numbers: even at those levels, there needs to be a fair amount of justification that a solution will solve a civic problem rather than just buying more stuff. This is not just a challenge for the City, but for the vendors that provide those solutions.

There are a number of pillars to technological advancement that the City is striving to establish:

  • be technologically advanced and efficient in their internal operations
  • understand and address digital divides that exist amongst residents
  • create an infrastructure of talent and services that can draw investment and business to the City

This last point becomes a bit controversial at times, when there is a lack of understanding of why City officials need to travel to promote the City’s capabilities, or support private industry through incubators. Digital technology is how we will survive and thrive in the future, so promoting technology initiatives has widespread benefits.

There was a discussion about talent: both people who work for the City, and bringing in businesses that draw private-sector talent. Our now-vibrant downtown core is attractive for tech companies and their employees, fueled by the talent coming out of our universities. The City still has challenges with procurement to bring in external services and solutions: Williams admitted that their processes need improvement, and are hampered by cumbersome procurement rules. Democracy is messy, and it slows down things that could probably be done a lot faster in a less free state: a reasonable trade. 🙂

The last session of the day looked at examples of digital change agents in Toronto, moderated by Fawn Annan of IT World Canada, and featuring Inspector Shawna Coxon of the Toronto Police Service, Pam Ryan from Service Development & Innovation at the Toronto Public Library, Kristina Verner, Director of Intelligent Communities at Waterfront Toronto, and Sara Diamond, President of OCAD University. I’m a consumer and a supporter of City services such as these, and I love seeing the new ways that they’re finding to include all residents and advance technology. Examples of initiatives include fiber broadband for all waterfront community residences regardless of income level; providing mobile information access to neighbourhood police officers to allow them to get out of their cars and better engage with the community; integrating arts and design education with STEM for projects such as transit and urban planning (STEAM is the new STEM); and digital innovation hubs at some library branches to provide community access to high-tech gadgets such as 3D printers.

There was a great discussion about what it takes to be a digital innovator in these contexts: it’s as much about people, culture and inclusion as it is about technology. There are always challenges in measuring success: metrics need to include the public’s opinion of these agencies and their digital initiatives, an assessment of the impact of innovation on participants, as well as more traditional factors such as number of constituents served.

That’s it for Technicity 2016, and kudos to IT World Canada and the City of Toronto for putting this day together. I’ve been to a couple of Technicity conferences in the past, and always enjoy them. Although I rarely do work for the public sector in my consulting business, I really enjoy seeing how digital transformation is occurring in that sector; I also like hearing how my great city is getting better.

TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of Open Government Office at Ontario’s Treasury Board Secretariat, Lan Nguyen, deputy CIO at City of Toronto, and Bianca Wylie of the Open Data Institute Toronto. Nguyen started out talking about how data is a key asset to the city: they have a ton of it gathered from over 800 systems, and are actively working on establishing data governance and how the data can best be used. The city wants to have a platform for consuming this data that will allow it to be properly managed (e.g., from a privacy standpoint) while making it available to the appropriate entities. Eno-Brown followed with a description of the province’s initiatives in open data, which include a full catalog of their data sets, covering both open and closed data sets. Many of the provincial agencies such as the LCBO are also releasing their data sets as part of this initiative, and there’s a need to ensure that standards are used regarding the availability and format of the data in order to enable its consumption. Wylie covered more about open data initiatives in general: the data needs to be free to access, machine-consumable (typically not in PDF, for example), and free to use and distribute as part of public applications. I use a few apps that use City of Toronto open data, including the one that tells me when my streetcar is arriving; we would definitely not have apps like this if we waited for the City to build them, and open data allows them to evolve in the private sector. Even though those apps don’t generate direct revenue for the City, success of the private businesses that build them does result in indirect benefits: tax revenue, reduction in calls/inquiries to government offices, and a more vibrant digital ecosystem.
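
As a side note for the technically inclined, this is roughly what it looks like to build on open data when it’s published in a machine-consumable format. The URL and column names below are invented for illustration (the real City of Toronto catalogue has its own portal and schemas), but the point is that a few lines of code can turn a raw CSV into something a transit or civic app can use:

```python
import csv
import io
import requests

# Hypothetical dataset URL and columns, for illustration only.
DATASET_URL = "https://open.example.toronto.ca/datasets/streetcar-delays.csv"

def average_delay_by_route(url):
    """Download a machine-readable (CSV, not PDF) open data set and compute a
    simple statistic, the kind of thing a third-party transit app might do."""
    text = requests.get(url, timeout=30).text
    totals, counts = {}, {}
    for row in csv.DictReader(io.StringIO(text)):
        route = row["route"]
        totals[route] = totals.get(route, 0) + float(row["delay_minutes"])
        counts[route] = counts.get(route, 0) + 1
    return {route: totals[route] / counts[route] for route in totals}

if __name__ == "__main__":
    for route, delay in sorted(average_delay_by_route(DATASET_URL).items()):
        print(f"Route {route}: average delay {delay:.1f} minutes")
```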

Although data privacy and security are important, these are often used as excuses for not sharing data when an entity benefits unduly by keeping it private: the MLS comes to mind with the recent fight to open up real estate listings and sale data. Nguyen repeated the City’s plan to build a platform for sharing open data in a more standard fashion, but didn’t directly address the issue of opening up data that is currently held as private. Eno-Brown more directly addressed the protectionist attitude of many public servants towards their data, and how that is changing as more information becomes available through a variety of online sources: if you can Google it and find it online, what’s the sense in not releasing the data set in a standard format? They perform risk assessments before releasing data sets, which can result in some data cleansing and redaction, but they are focused on finding a way to release the data if at all feasible. Interestingly, many of the consumers of Ontario’s open data are government of Ontario employees: it’s the best way for them to find the data that they need to do their daily work. Wylie addressed the people and cultural issues of releasing open data, and how understanding what people are trying to do with the data can facilitate its release. Open data for business and open data for government are not two different things: they should be covered under the same initiatives, and private-public partnerships leveraged where possible to make the process more effective and less costly. She also pointed out that shared data — that is, within and between government agencies — still has a long way to go, and should be prioritized over open data where it can help serve constituents better.

The issue of analytics came up near the end of the panel: Nguyen noted that it’s not just the data, but what insights can be derived from the data in order to drive actions and policies. Personally, I believe that this is well-served by opening up the raw data to the public, where it will be analyzed far more thoroughly than the City is likely to do themselves. I agree with her premise that open data should be used to drive socioeconomic innovation, which supports my idea that many of the apps and analysis are likely to emerge from outside the government, but likely only if more complete raw data are released rather than pre-aggregated data.

TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman Reinvents, Ryan Lanyon, Autonomous Vehicle Working Group at City of Toronto, and Alex Miller, President of Esri Canada. The title of the panel was Drones, Driverless Cars and IoT, with a focus on how intelligent devices are interacting with citizens in the context of a smart city. I used to work in remote sensing and geographic information systems (GIS), and having the head of Esri Canada talk about how GIS acts as a smart fabric on which these devices live is particularly interesting to me. Miller talked about how there needs to be a framework and processes for enabling smarter communities, from observation and measurement, data integration and management, visualization and mapping, analysis and modeling, planning and design, and decision-making, all the way to action. The vision is a self-aware community, where smart devices that are built into infrastructure and buildings can feed back into an integrated view that can inform and decide.

Lanyon talked about autonomous cars in the City of Toronto, from the standpoint of the required technology, public opinion, and cultural changes away from individual car ownership. Rajani followed with a brief on potential digital experiences that brands create for consumers, then we circled back to the other two participants about how the city can explore private-public sensor data sharing, whether for cars or retail stores or drones. They also discussed issues of drones in the city: not just regulations and safety, but the issue of sharing space both on and above the ground in a dense downtown core. A golf cart-sized pizza delivery robot is fine for the suburbs with few pedestrians, but just won’t work on Bay Street at rush hour.

The panel finished with a discussion on IoT for buildings, and the advantages of “sensorizing” our buildings. It goes back to being able to gather better data, whether it’s external local factors like pollution and traffic, internal measurements such as energy consumption, or visitor stats via beacons. There are various uses for the data collected, both by public and private sector organizations, but you can be sure that a lot of this ends up in those silos that Mark Fox referred to earlier today.

The morning finished with a keynote by John Tory, the mayor of Toronto. This week’s shuffling of City Council duties included designating Councillor Michelle Holland as Advocate for the Innovation Economy, since Tory feels that the city is not doing enough to enable innovation for the benefit of residents. Part of this is encouraging and supporting technology startups, but it’s also about bringing better technology to bear on digital constituent engagement. Just as I see with my private-sector clients, online customer experiences for many services are poor, internal processes are manual, and a lot of things only exist on paper. New digital services are starting to emerge at the city, but it’s a bit of a slow process and there’s a long road of innovation ahead. Toronto has made commitments to innovation in technology as well as arts and culture, and is actively seeking to bring in new players and new investments. Tory sees the Kitchener-Waterloo technology corridor as a partner with the Toronto technology ecosystem, not a competitor: building a 21st century city requires bringing the best tools and skills to bear on solving civic problems, and leveraging technology from Canadian companies brings benefits on both sides. We need to keep moving forward to turn Toronto into a genuinely smart city to better serve constituents and to save money at the same time, keeping us near or at the top of livable city rankings. He also promised that he will step down after a second term, if he gets it. 🙂

Breaking now for lunch, with afternoon sessions on open data and digital change agents.

By the way, I’m blogging using the WordPress Android app on a Nexus tablet today (aided by a Microsoft Universal Foldable Keyboard), which is great except it doesn’t have spell checking. I’ll review these posts later and fix typos.

Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my work is related to digital transformation, and as a citizen who lives in the downtown area and makes use of a lot of city services.

After brief introductions by Fawn Annan, President and CMO of IT World Canada (the event sponsor), Mike Williams, GM of Economic Development and Culture with City of Toronto, and Rob Meikle, CIO at City of Toronto, we had an opening keynote from Mark Fox, professor of Urban Systems Engineering at University of Toronto, on how to use open city data to fix civic problems.

Fox characterized the issues facing digital transformation as potholes and sinkholes: the former are a bit more cosmetic and can be easily paved over, while the latter indicate that some infrastructure change is required. Cities are, he pointed out, not rocket science: they’re much more complex than rocket science. As systems, cities are complicated as well as complex, with many different subsystems and components spanning people, information and technology. He showed a number of standard smart city architectures put forward by both vendors and governments, and emphasized that data is at the heart of everything.

He covered several points about data:

  • Sparseness: the data that we collect is only a small subset of what we need, it’s often stored in silos and not easily accessed by other areas, and it’s frequently lost (or inaccessible) after a period of time. In other words, some of the sparseness is due to poor design, and some is due to poor data management hygiene.
  • Premature aggregation, wherein raw data is aggregated spatially, temporally and categorically when you think you know what people want from the data, removing their ability to do their own analysis on the raw data (see the short sketch after this list).
  • Interoperability and the ability to compare information between municipalities, even for something as simple as date fields and other attributes. Standards for these data sets need to be established and used by municipalities in order to allow meaningful data comparisons.
  • Completeness of open data, that is, what data a government chooses to make available, and whether it is available as raw data or in aggregate. This needs to be driven by what problems the consumers of the open data are trying to solve.
  • Visualization, which is straightforward when you have a couple of data sets, but much more difficult when you are combining many data sets — his example was the City of Edmonton using 233 data sets to come up with crime and safety measures.
  • Governments often feel a sense of entitlement about their data, such that they choose to hold back more than they should, whereas they should be in the business of empowering citizens to use this data to solve civic problems.
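
To make the premature aggregation point concrete, here’s a tiny sketch with made-up numbers: with raw incident-level records, anyone can slice the data by neighbourhood, by hour, or by any combination that answers their question, but once only a pre-aggregated total is published, all of those other questions become unanswerable.

```python
from collections import Counter

# Made-up incident records, just to illustrate the point about premature aggregation.
raw_incidents = [
    {"neighbourhood": "Downtown",  "hour": 23, "type": "theft"},
    {"neighbourhood": "Downtown",  "hour": 14, "type": "theft"},
    {"neighbourhood": "Etobicoke", "hour": 23, "type": "mischief"},
    {"neighbourhood": "Downtown",  "hour": 2,  "type": "mischief"},
]

# With raw data, consumers can slice it however their question requires...
by_neighbourhood = Counter(r["neighbourhood"] for r in raw_incidents)
by_hour = Counter(r["hour"] for r in raw_incidents)
late_night_downtown = sum(
    1 for r in raw_incidents
    if r["neighbourhood"] == "Downtown" and (r["hour"] >= 22 or r["hour"] < 6)
)

# ...but if the publisher releases only a pre-aggregated total, every other
# question is unanswerable: the hourly and categorical detail is gone for good.
prematurely_aggregated = {"Downtown": 3, "Etobicoke": 1}

print(by_neighbourhood, by_hour, late_night_downtown)
```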

Smart cities can’t be managed in a strict sense, Fox believes, but rather it’s a matter of managing complexity and uncertainty. We need to understand the behaviours that we want the system (i.e., the smart city) to exhibit, and work towards achieving those. This is more than just sensing the environment: it also means understanding limits and constraints, plus knowing when deviations are significant and who needs to know about the deviations. These systems need to be responsive and goal-oriented, flexibly responding to events based on desired outcomes rather than a predefined process (or, as I would say, unstructured rather than structured processes): this requires situational understanding, flexibility, shared knowledge and empowerment of the participants. Systems also need to be introspective, that is, compare their performance to goals and find new ways to achieve goals more effectively and predict outcomes. Finally, cities (and their systems) need to be held accountable for actions, which requires that activities be auditable to determine responsibility, and that the underlying basis for decisions be known, so that a digital ombudsman can provide oversight.

Great talk, and very aligned with what I see in the private sector too: although the terminology is a bit different, the principles, technologies and challenges are the same.

Next, we heard from Hussam Ayyad, director of startup services at Ryerson University’s DMZ — a business incubator for tech startups — on Canadian FinTech startups. The DMZ has incubated more than 260 startups that have raised more than $206M in funding over their six years in existence, making them the #1 university business incubator in North America, and #3 in the world. They’re also ranked most supportive of FinTech startups, which makes sense considering their geographic proximity to Toronto’s financial district. Toronto is already a great place for startups, and this definitely provides a step up for the hot FinTech market by providing coaching, customer links, capital and community.

Unfortunately, I had to duck out partway through Ayyad’s presentation for a customer meeting, but plan to return for more of Technicity this afternoon.

What’s on your agenda for 2017? Some BPM conferences to consider

I just saw a call for papers for a conference for next October, and went through to do a quick update of my BPM Event Calendar. I certainly don’t attend all of these events, but like to keep track of who’s doing what, when and where. Here’s what I have in there so far; if you have others, send me a note or add them as a comment to this post and I’ll add to the calendar. I’m posting just the major conferences here, not every regional seminar.

Many vendors are eschewing a single large annual conference in favor of several regional conferences, easing the travel concerns of attendees; since these are usually just one day long, they aren’t announced this far in advance. It will be interesting to see if more vendors decide to go this way, or do more livestreaming to allow people to participate in more of the conference content remotely.

At this point, I don’t have confirmed attendance or speaking spots at any of these, although I will almost certainly be attending bpmNEXT and a few of the vendor conferences, either as a speaker or as an analyst/blogger. If you’re interested in having me attend your conference, let me know; I require that my travel expenses are covered (otherwise they come out of my own pocket in addition to the billable days that I’m giving up to attend), and a speaking fee if you want me to do a keynote or other presentation.

Intelligent Capture enables Digital Transformation at #ABBYYSummit16

I’ve been in beautiful San Diego for the past couple of days at the ABBYY Technology Summit, where I gave the keynote yesterday on why intelligent capture (including recognition technologies and content analytics) is a necessary onramp to digital transformation. I started my career in imaging and workflow over 25 years ago – what we would now call capture, ECM and BPM – and I’ve seen over and over again that if you don’t extract good data up front as quickly as possible, then you just can’t do a lot to transform those downstream processes. You can see my slides at Slideshare as usual:

I’m finishing up a white paper for ABBYY on the same topic, and will post a link here when it’s up on their site. Here’s the introduction (although it will probably change slightly before final publication):

Data capture from paper or electronic documents is an essential step for most business processes, and often is the initiator for customer-facing business processes. Capture has traditionally required human effort – data entry workers transcribing information from paper documents, or copying and pasting text from electronic documents – to expose information for downstream processing. These manual capture methods are inefficient and error-prone, but more importantly, they hinder customer engagement and self-service by placing an unnecessary barrier between customers and the processes that serve them.

Intelligent capture – including recognition, document classification, data extraction and text analytics – replaces manual capture with fully-automated conversion of documents to business-ready data. This streamlines the essential link between customers and your business, enhancing the customer journey and enabling digital transformation of customer-facing processes.

I chilled out a bit after my presentation, then decided to attend one presentation that looked really interesting. It was, but it was an advance preview of a product that’s embargoed until it comes out next year, so you’ll have to wait for my comments on it. 😉

This was a well-run event with a lot of interesting content, attended primarily by partners who build solutions based on ABBYY products, as well as many of ABBYY’s team from Russia (where a significant amount of their development is done) and other locations. It’s nice to attend a 200-person conference for a change, where – just like Cheers – everybody knows your name.

Keynoting at @ABBYY_USA Technology Summit

I’ve been in the BPM field since long before it was called BPM, starting with imaging and workflow projects back in the early 1990s. Although my main focus is on process now (hence the name of my blog), almost every project that I’m involved in has some element of content and capture, although not necessarily from paper documents. Basically, content capture is the onramp to many business processes: either the capture of a piece of content is what triggers a process (e.g., an application form) or it adds information to a process to move it along. Capture can mean traditional document scanning with intelligent recognition in the form of OCR, ICR, barcode and mark sense recognition, or can also be capture of information already in electronic form but not in a structured format (e.g., emails).
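
For a sense of what that onramp looks like in code, here’s a minimal sketch of the capture-classify-route pattern. It uses the open source Tesseract OCR engine via pytesseract; the keyword rules and the start_process() stub are placeholders I made up for illustration, since real intelligent capture products use trained classification and extraction models rather than hard-coded keywords:

```python
from PIL import Image
import pytesseract   # requires the Tesseract OCR engine to be installed

# Placeholder routing rules: map a keyword found in the OCR text to a downstream process.
KEYWORD_RULES = {
    "invoice": "accounts_payable_process",
    "application": "new_account_process",
    "claim": "claims_process",
}

def classify(text):
    """Very rough stand-in for automatic document classification."""
    lowered = text.lower()
    for keyword, process in KEYWORD_RULES.items():
        if keyword in lowered:
            return process
    return "manual_review_queue"

def start_process(process_name, payload):
    # Placeholder: in practice this would call a BPMS or case management API.
    print(f"Starting {process_name} with {len(payload)} characters of captured text")

def capture(image_path):
    """OCR a scanned page, classify it, and hand it to a downstream process."""
    text = pytesseract.image_to_string(Image.open(image_path))
    start_process(classify(text), text)

if __name__ == "__main__":
    capture("scanned_application_form.png")
```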

To get to the point, this is why I’m excited to be keynoting at the ABBYY Technology Summit coming up on November 16-18, in a presentation entitled How Digital Transformation is Increasing the Value of Capture and Text Analytics:

As the business world has been wrestling with the challenge of digital transformation, the last few years have seen the shift away from BPM and Case Management technology platforms towards the more solutions-oriented approach offered by Smart Process Applications and Case Management Frameworks. A critical component of these business solutions is the capability to capture the key business information at the point of origin.

This information is often buried inside forms and other business documents. Capturing this data through recognition technologies and automatic document classification transforms streams of documents of any structure and complexity into business-ready data.

This enables organizations of any size to streamline their existing business processes, increasing efficiency and reducing costs; it also enables real-time customer self-service processes triggered by mobile document capture.

I’ll be covering trends and benefits of intelligent capture, providing ABBYY’s customers and partners in attendance with solid advice on how to best start integrating these technologies to make their business processes run better. I’m also writing a paper covering these topics, sponsored by ABBYY, which will be available in time for the conference.

If you’re at the conference, please stop by and say hi; I’ll be hanging out there for the rest of the day after my keynote.