Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session after this to close the conference, but I have to catch a plane.
He started with a basic definition of machine learning: a method of data analysis that automates analytical model building, allowing computers to find insights in data and execute logic without being explicitly programmed for where to look or what to do. It’s based on pattern recognition and computational statistics, and it’s popping up in areas such as biology, search and recommendations (e.g., Netflix), and spam detection. Machine learning is an iterative process that uses sample data and one or more machine learning algorithms: the training data set is used by the algorithm to build an analytical model, which is then applied to attempt to analyze or classify new data. Feedback on the correctness of the model for the new data is fed back to refine the learning and therefore the model. In many cases, users don’t even know that they’re providing feedback to train machine learning: every time you click “Spam” on a message in Gmail (or “Not Spam” for something that was improperly classified), or thumbs up/down for a movie in Netflix, you’re providing feedback to their machine learning models.
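To make that train/predict/feedback loop concrete, here's a rough sketch in Python using scikit-learn and a Naive Bayes text classifier. The spam example, labels and messages are my own illustration of the general idea, not anything from ABBYY's implementation.

```python
# Minimal sketch of the iterative ML loop: train on a sample set, classify new
# data, then feed user corrections back to refine the model (illustrative only).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

classes = ["spam", "not_spam"]
vectorizer = HashingVectorizer(alternate_sign=False, n_features=2**16)
model = MultinomialNB()

# Initial training on a small labelled sample set.
train_texts = ["win a free prize now", "meeting agenda for monday"]
train_labels = ["spam", "not_spam"]
model.partial_fit(vectorizer.transform(train_texts), train_labels, classes=classes)

# The model classifies a new message...
msg = ["free prize waiting, click now"]
print(model.predict(vectorizer.transform(msg)))  # e.g. ['spam']

# ...and the user's correction (the "Spam" / "Not Spam" click) is fed back
# to incrementally refine the model.
user_label = ["spam"]
model.partial_fit(vectorizer.transform(msg), user_label)
```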
He walked us through several different algorithms and their specific applicability: Naive Bayes, Support Vector Machine (SVM), and deep learning; then a bit about machine learning scenarios including supervised, unsupervised and reinforcement learning. In FlexiCapture, machine learning can be used to sort documents into categories (classification), and for training on field-level recognition. The reason that this is important for ABBYY customers (partners and end customers) is that it radically compresses the time to develop the rules required for any capture project, which typically consumes most of the development time. For example, instead of training a capture application only for the most common document types because that's all you have time for, it can be trained for all document types, and the model will continue to self-improve as verification users correct errors made by the algorithm.
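As an illustration of the document classification he described, here is a minimal scikit-learn sketch that sorts document text into categories with an SVM, one of the algorithms mentioned above. The document types and sample text are invented for illustration; this is not FlexiCapture's API.

```python
# Hedged sketch: sort documents into categories (classification) with an SVM.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = [
    "invoice number 1234 total due 30 days",
    "purchase order quantity unit price ship to",
    "employment agreement between the parties hereto",
]
labels = ["invoice", "purchase_order", "contract"]

# TF-IDF features feeding a linear SVM classifier.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(docs, labels)

print(classifier.predict(["invoice total amount payable"]))  # e.g. ['invoice']
```

In practice, corrections made by verification users would be added to the labelled set and the model retrained, which is the self-improvement loop described above.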
Although VonBurg was unsure whether the machine learning capabilities are available yet in the SDK (he works on the FlexiCapture application team, which is built on the same technology stack but runs independently), the session on robotic information capture yesterday seems to indicate that it is in the SDK, or will be very soon.