Most machine learning project failures can be traced back to a lack of business alignment, poorly trained models, data quality issues and deployment complexity. You can avoid these pitfalls by following best practices for risk mitigation.
Machine learning is one of the hottest topics in technology today—and for good reason.
It has tremendous potential to automate or semi-automate some of the most tedious tasks knowledge workers face—and leading tech companies are already starting to realize much of that potential.
For example, machine learning can help reduce the manual toil of many of these tasks by 50% or more.
We are on the cusp of unlocking this value as machine learning applications become more widespread. A study by Algorithmia found that 76% of enterprises prioritized artificial intelligence (AI) and machine learning (ML) over other IT initiatives in 2021.
Yet, most machine learning initiatives fail. (Also read: The Promises and Pitfalls of Machine Learning.)
While there are myriad reasons ML pilots never take off, the most pressing issues can be traced back to four main pitfalls:

- Lack of business alignment.
- Poorly trained models.
- Data quality issues.
- Deployment and maintenance complexity.
Let's explore each of these and suggest some solutions for data teams and organizations to avoid them.
The original sin of machine learning is in how most of these projects are born.
Too often, a group of data scientists conceive machine learning projects thinking, “This data is interesting; wouldn’t it be cool if…”
And it's that line of thinking that turns ML projects into science experiments.
It may still be possible for the model in this type of project to produce something valuable—but if the project doesn’t address an urgent and painful need, it will not get the time or attention it needs from business stakeholders. Or worse, it could become something closer to blockchain: a cool technology in search of a problem. (Also read: An Introduction to Blockchain Technology.)
Machine learning projects should start by looking at the most urgent business priorities and then assess what resources are required to solve them—rather than starting with the clean data at hand and then trying to find a problem they can solve.
Good questions to ask before launching a machine learning project include:

- Which urgent business priority does this project address?
- How painful is the problem today, and for whom?
- What data, people and resources will solving it require?
Let’s say your project has a really hard and valuable business problem in its sights. The next step is collecting enough clean data to train the model.
Therein lies the data scientist’s paradox: To eliminate toil for others, they must wallow in it.
According to Anaconda, data scientists spend about 45% of their time on data preparation tasks, including loading and cleaning data.
There is a sizeable chance that, after all this work, there still may not be enough suitable or representative training data. And, like any other manual task, data preparation introduces the risk of human error. (Also read: Automation: The Future of Data Science and Machine Learning?)
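To make that preparation work concrete, here is a minimal pandas sketch of a typical cleaning pass. The file names, column names and thresholds are hypothetical, not drawn from any particular project:

```python
import pandas as pd

# Load raw training data (hypothetical file and schema).
df = pd.read_csv("raw_training_data.csv")

# Drop exact duplicates, which silently inflate some classes.
df = df.drop_duplicates()

# Remove rows missing the label: they can't be used for supervised training.
df = df.dropna(subset=["label"])

# Impute a numeric feature with the median rather than dropping the row.
df["account_age_days"] = df["account_age_days"].fillna(
    df["account_age_days"].median()
)

# Flag obviously invalid values for manual review instead of trusting them.
invalid = df[(df["account_age_days"] < 0) | (df["purchase_amount"] < 0)]
print(f"{len(invalid)} rows need manual review")

df = df.drop(invalid.index)
df.to_csv("clean_training_data.csv", index=False)
```

Even a short script like this encodes judgment calls (impute or drop? review or discard?) that are easy to get wrong at scale, which is where the human-error risk creeps in.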
Fine-tuning your ML model can be challenging as well. It can be overfitted, where it learns the noise and quirks of its training data too closely, or underfitted, where it fails to capture the underlying patterns at all.
How can a machine learning model learn too well, you ask?
There is a famous example of a model that was trained to differentiate between huskies and wolves. It was very accurate during training, but started failing in production. The problem? All the pictures of wolves had snow in the background and the huskies did not. It was a snow-detection model—not a wolf-detection model.
Unfortunately, machine learning training is perhaps the one test on which you don’t want to score 100%.
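A standard way to catch both failure modes is to hold out a validation set and compare scores: a wide gap between training and validation accuracy suggests overfitting, while low scores on both suggest underfitting. Here is a minimal scikit-learn sketch on synthetic data; the 0.10 gap and 0.70 floor are illustrative thresholds, not industry standards:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
print(f"train={train_acc:.2f} val={val_acc:.2f}")

# Heuristic checks: the 0.10 gap and 0.70 floor are illustrative only.
if train_acc - val_acc > 0.10:
    print("Large train/validation gap: likely overfitting.")
elif val_acc < 0.70:
    print("Low scores on both sets: likely underfitting.")
```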
Whether in training or in deployment, it’s impossible to have an effective machine learning model with bad data. Garbage in, garbage out, as they say.
The challenge is that machine learning models are data-hungry. They always want more data—as long as it’s reliable.
However, bad data can be introduced into good data pipelines in a nearly infinite number of ways. Sometimes it might be a noisy anomaly where the error is caught quickly; other times it might be a gradual case of data drift that decreases your model’s accuracy over time. Either way, it’s bad.
That’s because you built this model to automate or inform decisions around a painful business problem. When accuracy drops, trust drops with it, and the consequences can be severe. For example, one of my colleagues spoke to a financial company that used a machine learning model to purchase bonds meeting certain criteria. Bad data took the model offline, and it was weeks before it was trusted back in production. (Also read: Fintech’s Future: AI and Digital Assets in Financial Institutions.)
The data infrastructure supporting machine learning models needs to be constantly tested and observed—ideally in a scaled, automated way.
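A full observability platform does far more, but even a simple scheduled check can catch null spikes and distribution drift before they erode your model. Here is a sketch using scipy's two-sample Kolmogorov-Smirnov test; the snapshot files, feature names and alert thresholds are assumptions for illustration:

```python
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical snapshots: the data the model was trained on vs. what
# the pipeline delivered this week.
reference = pd.read_csv("training_snapshot.csv")
current = pd.read_csv("latest_batch.csv")

ALERT_P_VALUE = 0.01  # illustrative threshold; tune per feature

for column in ["purchase_amount", "account_age_days"]:
    # Null-rate checks are the cheapest signal of a broken pipeline.
    if current[column].isna().mean() > 0.05:
        print(f"ALERT: >5% nulls in {column}")

    # KS test: has this feature's distribution shifted since training?
    stat, p_value = ks_2samp(
        reference[column].dropna(), current[column].dropna()
    )
    if p_value < ALERT_P_VALUE:
        print(f"ALERT: possible drift in {column} (KS stat={stat:.3f})")
```

In practice you would run a check like this on every model input and route alerts to the team that owns the pipeline, rather than waiting for a downstream accuracy drop to reveal the problem.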
It turns out it takes a lot of resources to deploy and maintain machine learning in production. Who knew?
Well, Gartner did. It projects that, by 2025, AI will be the top category driving infrastructure decisions, as the maturing AI market drives a tenfold growth in compute requirements.
This requires significant support from business stakeholders, which is why business alignment is so important. For example, former Uber data product manager Atul Gupte led a project to improve the data science workbench Uber's data scientists used to collaborate.
At the time, data scientists were automating the validation and verification of the worker documents required to join the Uber platform. It was a great fit for machine learning and deep learning, but the data scientists routinely hit the limits of the available compute.
Gupte researched multiple solutions and identified virtual GPUs (then an emerging technology) as a possible fix. Despite the high price tag, Gupte justified the expenditure to leadership: the project would not only save the company millions but also support a key competitive differentiator.
Another example is how Netflix never moved its award-winning recommendation algorithm into production, choosing instead to go with a simpler solution that was easier to integrate. (Also read: How AI Is Personalizing Entertainment.)
Don’t let these challenges dissuade you from launching your machine learning initiative.
Mitigate these risk factors by:

- Starting with an urgent, painful business problem rather than with interesting data.
- Budgeting real time for data collection and preparation, and validating models against held-out data.
- Continuously testing and observing the data pipelines that feed your models.
- Securing stakeholder support early for the compute and infrastructure that production deployment will require.
As Tom Hanks says in "A League of Their Own," “If it wasn't hard, everyone would do it. It's the hard that makes it great.”
Written by Lior Gavish | Chief Technology Officer
Lior Gavish is the CTO and co-founder of data reliability company Monte Carlo and the creator of the industry’s first end-to-end data observability platform.