The training process for artificial intelligence (AI) algorithms is, by design, largely automated. There are often thousands, millions or even billions of data points, and the algorithms must process all of them to search for patterns. In some cases, though, AI scientists are finding that the algorithms can be made more accurate and efficient if humans are consulted, at least occasionally, during training.
The result is a hybrid intelligence that marries the relentless, indefatigable power of machine learning (ML) with the insightful, context-sensitive abilities of human intelligence. The computer algorithm can plow through endless files of training data, while humans correct its course or guide its processing.
The ML supervision can take place at different times: before training, when humans label or score the data; while the model is being built, when they provide feedback; or after deployment, when they review and correct the model's output.
To a large extent, supervised ML is for domains where automated machine learning does not perform well enough. Scientists add supervision to bring the performance up to an acceptable level.
It is also an essential part of solving problems where there is no readily available training data that contains all the details that must be learned. Many supervised ML problems begin with gathering a team of people who will label or score the data elements with the desired answer. For example, some scientists built a collection of images of human faces and then asked other humans to classify each face with a word like “happy” or “sad”. These training labels made it possible for an ML algorithm to start to understand the emotions conveyed by human facial expressions.
In most cases, the same machine learning algorithms can work with both supervised and unsupervised datasets. The main difference is that unsupervised learning algorithms start with raw data, while supervised learning algorithms have additional columns or fields that were created by humans. These are often called labels, although they can hold numerical values too.
Supervision is often used to add fields that are not apparent in the dataset. For example, some experiments ask humans to look at landscape images and classify whether a scene is urban, suburban or rural. The ML algorithm is then used to try to match the classification from the humans.
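As a concrete illustration of both points, the sketch below (a hypothetical example using scikit-learn; the feature names, values and labels are invented) runs a clustering algorithm on raw, unlabeled features, then adds a human-supplied label column like the urban/rural classifications described above and fits a classifier to match it. The features stay the same in both cases; only the added label column makes the second run supervised.

```python
# Hypothetical illustration: the same features can be used unsupervised
# (clustering) or supervised (classification) once humans add a label column.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Raw, unlabeled data: two numeric features per image (values are invented).
raw = pd.DataFrame({
    "brightness": [0.2, 0.8, 0.3, 0.9, 0.7, 0.1],
    "edge_density": [0.9, 0.2, 0.8, 0.1, 0.3, 0.95],
})

# Unsupervised: the algorithm finds its own groupings in the raw data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(raw)

# Supervised: humans add a label column, and the classifier learns to match it.
labels = ["rural", "urban", "rural", "urban", "urban", "rural"]
model = LogisticRegression().fit(raw, labels)

print("cluster ids:", clusters)
print("predicted labels:", model.predict(raw))
```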
In some cases, the supervision is added while the ML algorithm is running or after it has finished. This feedback may come from end users or from the scientists themselves.
Also read: How to build a data science and machine learning roadmap in 2022
Human opinions and knowledge can be folded into the dataset before, during or after the algorithms begin. This can be done for all data elements or for only a subset. In some cases, the supervision comes from a large team of humans; in others, it comes only from subject-matter experts.
A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels.
Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles.
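One common way to turn those repeated answers into usable training labels is to accept a label only when enough independent users agree on it. The sketch below is a hypothetical majority-vote scheme; the vote counts, agreement threshold and data structures are assumptions, not any particular website's implementation.

```python
# Hypothetical majority-vote labeling: accept an image's label only when
# enough independent users agree, mirroring how bot-check answers are reused.
from collections import Counter

def consensus_label(votes, min_votes=3, min_agreement=0.8):
    """Return the agreed-upon label, or None when users disagree too much."""
    if len(votes) < min_votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None

votes_per_image = {
    "img_001": ["stop sign", "stop sign", "stop sign", "stop sign", "pedestrian"],
    "img_002": ["pedestrian", "stop sign", "pedestrian"],
}

for image_id, votes in votes_per_image.items():
    print(image_id, "->", consensus_label(votes))
# img_001 -> stop sign   (4 of 5 users agree, 0.8 >= 0.8)
# img_002 -> None        (only 2 of 3 agree, below the 0.8 threshold)
```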
Some approaches rely on subject-matter experts and ask them to review only the outlying data. Instead of classifying all the images, the algorithm works with the most extreme values and extrapolates rules from them. This can be more time-efficient but may be less accurate. It is most popular when human expert time is expensive.
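One way this can be sketched, assuming a simple statistical notion of "outlying" (here a z-score cutoff; the threshold and data are invented), is to route only the most extreme records to an expert queue:

```python
# Hypothetical sketch: route only statistically extreme records to experts,
# rather than asking them to review every data element.
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(loc=100.0, scale=15.0, size=1_000)  # invented measurements

# Flag records more than 2.5 standard deviations from the mean as outlying.
z_scores = np.abs((values - values.mean()) / values.std())
expert_queue = np.where(z_scores > 2.5)[0]

print(f"{len(expert_queue)} of {len(values)} records routed for expert review")
```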
The world of supervised ML is broken down into several approaches. Many have much in common with unsupervised ML because they use the same algorithms; the distinctions lie in how human intelligence is folded into the dataset and absorbed by the algorithms. The most commonly cited types range from fully supervised learning, where every data element is labeled in advance, to semi-supervised learning, where only a subset is labeled, to approaches that fold in human feedback while the model is running.
Also read: 3 big problems with datasets in AI and machine learning
All the major companies offer basic ML algorithms that can work with either labeled or unlabeled data. They are also beginning to offer particular tools that simplify and even automate the supervision.
Amazon’s SageMaker offers a full integrated development environment (IDE) for working with its ML algorithms. Some users may want to experiment with prebuilt models and adjust them according to their performance. AWS also offers Mechanical Turk, which is integrated with the environment so humans can examine the data and add annotations that will guide the ML. Workers are paid by the task at a price you set, which affects how many sign up for the work. This can be a cost-effective way to create good annotations for a training dataset.
IBM’s Watson Studio is designed for both unsupervised and supervised ML. Their Cloud Pak for Data can help organize and label datasets gathered from a wide variety of data warehouses, lakes and other sources. It can help teams create structured embeddings guided by human resources and then feed these values into the collection of ML algorithms supported by the Studio.
Google’s collection of AI tools includes Vertex AI, a more general product, as well as automated systems tuned for particular types of datasets, such as AutoML Video and AutoML Tabular. Pre-analytic data labeling is easy to do with the various data collection tools. After the model is created, Google also offers a tool called Vertex AI Model Monitoring that watches the performance of the model over time and generates automated alerts if the model seems to be drifting.
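Drift monitoring of this kind often boils down to a statistical comparison between the data a model was trained on and the data it now sees. The sketch below is a generic illustration using a two-sample Kolmogorov-Smirnov test from SciPy; it is not Vertex AI Model Monitoring's actual mechanism, and the feature values and alert threshold are invented.

```python
# Generic drift check (not Vertex AI's actual mechanism): compare a feature's
# training-time distribution with recent production data and alert on divergence.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_values = rng.normal(loc=60_000, scale=12_000, size=5_000)  # invented training data
live_values = rng.normal(loc=72_000, scale=12_000, size=1_000)      # invented, drifted live data

result = ks_2samp(training_values, live_values)
if result.pvalue < 0.01:
    print(f"ALERT: feature distribution has drifted "
          f"(KS={result.statistic:.3f}, p={result.pvalue:.2e})")
else:
    print("No significant drift detected")
```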
Microsoft has an extensive collection of AI tools, including Azure Machine Learning Studio, a browser-based user interface that organizes data collection and analysis. Data can be augmented with labels and other classifications using various Azure tools for organizing data lakes and warehouses. The studio offers a drag-and-drop interface for choosing the right algorithms through experimentation with data classification and analysis.
Oracle’s data infrastructure is built around big databases that act as the foundation for data warehousing. The databases are also well-integrated with ML algorithms to optimize creating and testing models with these datasets. Oracle also offers a number of focused versions of their products designed for particular industries, such as retail or financial services. Their tools for data management can organize the creation of labels for each data point and then apply the right algorithms for supervised or semi-supervised ML.
Startups are tackling a wide range of problems that are important to creating well-trained models. Some are working on the more general problem of handling generic datasets, while others focus on particular niches or industries.
CrowdFlower, which started as Dolores Labs, sells pre-trained models built on pre-labeled data and also organizes teams to add labels to data to help supervise ML. Its data annotation tools can support in-house teams or be shared with the large pool of temporary workers that CrowdFlower routinely hires. The company also runs programs for evaluating the success of models before, during and after deployment.
Swivl has created a basic data labeling interface so that teams can quickly start guiding data science and ML algorithms. The company has focused on this interaction to make it as simple and efficient as possible.
The AI and data handling routines in DataRobot’s cloud are designed to make it easier for teams to create pipelines that gather and evaluate data with low-code and no-code routines for processing. They call some of their tools “augmented intelligence” because they can rely upon both ML algorithms and human coding in both training and deployment. They say they want to “move beyond simply making more intelligent decisions or faster decisions, to making the right decision.”
Zest AI is focusing on the credit approval process, so lending institutions can speed up and simplify their workflow for granting loans. Their tools help banks build their own custom models that merge their human experience with the ability to gather credit risk information. They also deploy “de-biasing tools” that can reduce or eliminate some unintended consequences of the model construction.
Luminance helps legal teams with tasks like discovery and contract drafting. Its ML tools create custom models by watching the lawyers work and learning from their decisions. This casual supervision helps the models adapt faster, so the team can make better decisions.
In many senses, supervised ML produces the best combination of human and machine intelligence when it creates a model that learns how a human might categorize or analyze data.
Humans, though, are not always accurate, and they often don't understand the data well enough to label it reliably. They may grow bored after working through many data items. In many cases, they make mistakes or categorize data inconsistently because they don't know the answer themselves.
Indeed, in cases where the problem is not well understood by humans, supervised algorithms can fold in too much information from inconsistent and uncertain human judgments. If human opinion is given too much precedence, the algorithm can be led astray.
A common problem with supervised algorithms is the sheer size of the datasets. Much of ML depends upon big data collections that are gathered automatically, and paying humans to classify or label every data element is often far too expensive. Some scientists choose random or structured subsets of the data and seek human opinions on just those, as sketched below. This can work in some cases, but only when the signal is strong enough; the approach cannot rely on the ML algorithm's ability to find nuance and distinction in very large datasets.
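A minimal sketch of that subset strategy, assuming the data sits in a pandas DataFrame (the column names, grouping and sampling fraction are invented for illustration):

```python
# Hypothetical sketch: label only a small subset of a large dataset,
# since paying humans to label every record is often too expensive.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
big_dataset = pd.DataFrame({
    "feature_a": rng.normal(size=100_000),
    "feature_b": rng.normal(size=100_000),
    "region": rng.choice(["north", "south", "east", "west"], size=100_000),
})

# Random subset: send 1% of the records for human labeling.
random_subset = big_dataset.sample(frac=0.01, random_state=42)

# Structured subset: sample 1% within each region so every group is represented.
structured_subset = big_dataset.groupby("region", group_keys=False).sample(
    frac=0.01, random_state=42
)

print(f"Random subset: {len(random_subset)} records")
print(f"Structured subset: {len(structured_subset)} records")
```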
Read next: Driving smarter customer experiences with AI and machine learning