Crowdsourcing And Microwork With DST

In the afternoon, after my BPM COE presentation, I moved over to the AWD ADVANCE technical track to hear Roy Brackett and Mike Hudgins talk about crowdsourcing and microwork. DST and some of their partners/siblings/subsidiaries are business process outsourcers, and are always looking at ways to make that outsourcing more efficient. They use the term crowdsourcing to mean the use of a global, elastic workforce, such as Amazon Mechanical Turk; the concept of microwork is breaking work down into small tasks that can be done with little or no training.

There are some basic requirements for allocating microwork, especially in a financial services environment:

  • Quality, which is typically managed by dual entry (assigning the same piece of work to different workers, then comparing the results) or by validating against another data source (e.g., comparing the values entered independently for name and account number against a customer database).
  • Security, which is typically managed by breaking the work into such small tasks that there are few privacy issues: workers rarely see more than one or two pieces of data related to any given transaction, and have no way to link the data.
  • Priority, which is typically managed by serving up tasks to workers only at the point that they are ready to do the work, so that the highest-priority task is always executed next; and since the work is divided into small tasks, many of those tasks can be executed in parallel (see the sketch after this list).
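
They didn’t show any implementation details, but to make the slicing and pull-based priority ideas concrete, here’s a rough Python sketch of my own (MicroTask, TaskPool and the field names are made up for illustration, not DST’s design): a transaction is split into single-field image slices, and a worker only receives the most urgent slice at the moment they ask for work.

```python
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class MicroTask:
    priority: int                              # lower number = more urgent
    field_name: str = field(compare=False)     # e.g. "account_number"
    image_slice: bytes = field(compare=False)  # cropped snippet of the page

def slice_transaction(priority: int, slices: dict) -> list:
    """Split one transaction into single-field tasks, so no individual
    worker ever sees more than one or two values from the same record."""
    return [MicroTask(priority, name, img) for name, img in slices.items()]

class TaskPool:
    """Pull-based queue: a task is only handed out when a worker asks for
    one, so the most urgent outstanding slice is always served next."""
    def __init__(self):
        self._heap = []

    def submit(self, tasks):
        for task in tasks:
            heapq.heappush(self._heap, task)

    def next_task(self) -> Optional[MicroTask]:
        return heapq.heappop(self._heap) if self._heap else None
```

Since nothing is pre-assigned to a particular worker, independent slices from many transactions can be in flight at once, which is where both the parallelism and the elasticity come from.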

Looking at common work activities, they typically break down into transcription (e.g., data entry from a scanned form), remediation (e.g., data entry from unstructured content, where information may need to be looked up in other systems based on the content), business knowledge and system knowledge; only the first two are appropriate for microwork.

DST is doing some research into microwork, so what they talked about does not represent a product or even, necessarily, a product direction. They started with transcription tasks – not that they want to compete with OCR/ICR vendors, but those tools are not perfect, especially on images with subpar capture quality – using dual entry, with a remediation step if the two entries disagreed. This could be used for post-OCR repair, or for older scanned documents where the quality would not support adequate OCR rates. DST did a test using CrowdFlower for transcribing 1,000 dummy forms containing both machine-printed and handwritten content on a structured form: single entry gave 99% accuracy, while dual entry increased that to 99.6%.
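
The dual-entry-plus-remediation logic itself is simple enough to sketch (again, my own illustration rather than anything they showed): two independent transcriptions of the same image slice are compared, and only a mismatch gets escalated.

```python
def reconcile(entry_a: str, entry_b: str) -> dict:
    """Dual entry: the same image slice is keyed by two independent workers.
    Matching results are accepted; mismatches are routed to remediation."""
    normalize = lambda s: " ".join(s.split()).upper()
    if normalize(entry_a) == normalize(entry_b):
        return {"status": "accepted", "value": entry_a}
    # Disagreement: escalate the slice to a (more experienced) remediator
    return {"status": "remediate", "candidates": [entry_a, entry_b]}

print(reconcile("Acme Growth Fund", "ACME GROWTH FUND"))  # accepted
print(reconcile("Acme Growth Fund", "Acme Grouth Fund"))  # sent to remediation
```

The normalization step (whitespace and case) is an assumption on my part; how strictly the two entries must agree is obviously a tuning decision.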

They then did a pilot using one of their own business processes and real-world data from an outsourcing customer, transcribing fund and account information from inbound paper correspondence. Since only 25% of the documents were forms, they used fingerprinting and other recognition technologies to identify where these two fields might be on the page, then provided image slices for data entry. With the automated fingerprinting that they have developed, they were getting 98% classification with zero misclassifications (the remainder were rejected as unclassified rather than being misclassified). For the microwork participants, they used new offshore hires and US-based part-time employees, so these were still DST employees but with almost no training and relatively low skill levels; using single entry, they reduced data entry errors by 50% compared to their old-style “one-and-done” data entry technique (and presumably reduced the costs). They then rolled this out to handle all transaction types for that customer in a production environment.
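
Their fingerprinting technology is their own and wasn’t described in any detail, but the reject-rather-than-misclassify behaviour is worth illustrating: score each page against the known templates and refuse to classify anything below a confidence threshold. Here’s a rough sketch that uses token-set similarity as a stand-in for their image fingerprints:

```python
def jaccard(a: set, b: set) -> float:
    """Simple set similarity; a stand-in for a real image fingerprint score."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(page_tokens: set, templates: dict, threshold: float = 0.9):
    """templates maps a template id to its fingerprint (here, a token set).
    Returns the best-matching template id, or None so the page is handled
    manually: rejected as unclassified rather than risking a misclassification."""
    best_id, best_score = None, 0.0
    for template_id, fingerprint in templates.items():
        score = jaccard(page_tokens, fingerprint)
        if score > best_score:
            best_id, best_score = template_id, score
    return best_id if best_score >= threshold else None
```

Setting the threshold high trades a lower classification rate (some pages go to manual handling) for essentially zero misclassifications, which is consistent with the 98%-classified, none-misclassified result they reported.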

They’re piloting other data entry processes for other customers now based on that success, starting with one that is driven purely by standard forms and has highly variable volumes – a perfect match for crowdsourced microwork because of the ease of segmenting forms for data entry and the elasticity of the workforce. There are optimizations on the basic method, such as sending one person only (for example) tax ID fields to transcribe, since data entry on a single field type is faster because there is no context switching.
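
That optimization is really just batching the pending slices by field type before handing them out; something like this (again, an illustrative sketch, not their code):

```python
from collections import defaultdict

def batch_by_field_type(tasks: list) -> dict:
    """Group pending slices so one worker gets a run of identical field types
    (e.g. nothing but tax ID fields), avoiding context switching."""
    batches = defaultdict(list)
    for task in tasks:
        batches[task["field_name"]].append(task)
    return batches  # assign each batch to a single worker
```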

The result: quality is improved, with more errors caught earlier; and productivity (and hence cost) is better, using less-skilled workers and a workforce that can grow and shrink to match the volume. There was a great question from the audience about how employees feel about this; the responses included both “we’re using this on the training path for higher-skilled workers” and “this separates out transcription and remediation as services (presumably done by people whose careers are not of concern to the organization, either outsourced or offshore employees) and leaves the high-value knowledge work”. It’s fair to say that most companies don’t expect to be doing low-level data entry themselves in a few years, but will have their customers do it for them on the web.
