In this Q&A interview, Aible and Intel provide insights not only on how to avoid AI project failures but also on how they deliver guaranteed results from AI within 30 days.
AI projects are often highly complex and take too long to complete. Common problems include models being trained on the wrong data or results that come too late or provide insights into the wrong business problem.
RTInsights recently sat down with Arijit Sengupta, Founder and CEO of Aible, and Arijit Bandyopadhyay, CTO – Enterprise Analytics & AI and Head of Strategy – Cloud and Enterprise, Data Center and AI Group at Intel Corporation, to talk about why AI projects fail and take so long, what can be done about these issues, and how Aible and Intel working together can help. Here is a summary of our conversation.
RTInsights: AI projects have a high failure rate, do not scale well, and consume great amounts of resources. Why is that the case?
Arijit Sengupta: You might even be understating the problem. Ninety percent of AI projects are failing, according to research from BCG and MIT. In any enterprise situation, imagine if I told you that a project has a 90% chance of not generating economic value. Would you want to do that?
The reason AI projects are failing is they have been turned into these really complex things where you have to get everything perfect. And you are not starting from a point of understanding what the business is trying to achieve. We looked at some of the underlying causes for why these projects fail, and they often stem from really bad data. A big issue is: how do I very quickly figure out whether my data is good enough?
The high failure rate also stems from creating a wonderful predictive model that makes sense in a lab, but then you show it to a business user, and they say, “that’s not what I was trying to do.” For example, a predictive model can tell you the likelihood of a customer churning, but what a business user also needs to consider is what is the value of retaining the customer? What’s the cost of retaining the customer? How much capacity do I have to try to retain a customer? If I don’t understand these things, I can’t act upon that predictive model.
So, that disconnect between AI and what the business is trying to achieve, combined with the length of time and complexity of AI projects, is why so many AI projects are failing.
Arijit Bandyopadhyay: We have been seeing this for quite some time in terms of why AI projects are failing. There are a couple of things that we find consistently. One is the time delay between when a particular project or effort gets defined and when it is executed.
Because it takes many months simply to launch an AI project, scenarios change, and in many cases even the goals change. By the time you move from when the AI project was envisioned, to when it was executed, to when you are using the results, much of the work needs to be done again to make sure the results and insights still meet the highest standards.
The second one, which I think Arijit Sengupta from Aible touched on, is that data keeps changing; either it is not “ready,” or it has data quality issues. Whatever AI can do is very dependent on the data you feed the AI engine. That affects the accuracy and refinement of the model and the overall operating expense (OpEx).
In larger companies, there is also the question of the quality and consistency of the people involved with an AI project. The person who started the project is often not the one who completes the implementation. That reduces consistency in the technical execution of the project, and even in the original vision of what the project was defined to achieve.
So, the issues causing the high failure rate of AI projects are related to the changing goals and scenarios, the fact that data changes, and the people and associated talent issues. These are some of the reasons why we see companies ending up with lower ROI, some of the projects failing, or the projects not meeting the expectations of the CIOs.
Arijit Sengupta: We have faced this in enterprises before. If you think about the shift from the waterfall approach to programming to a much more agile approach, it was the same problem. Any time you make these things into really complex systems that depend on expertise in the heads of people, and it takes a long time to execute, you end up increasing risk. You end up with the likelihood that the project will fail.
You need to turn this into something collaborative, quick, and iterative, where you are building institutional knowledge instead of this individual knowledge in the heads of people. That’s how you take it from an arts and crafts project to something that can actually create value at scale.
RTInsights: How is the work Aible is doing addressing these issues?
Arijit Sengupta: Very early, we decided that we would give customers a 100% satisfaction guarantee, where if they didn’t see value in 30 days, we wouldn’t charge them. And the reason we set that goal up is it forces the entire organization to focus on building software that works to solve the problem. It is kind of like you’re burning the bridge behind you to make sure you have to win the war.
Once we did that, we figured out we had to solve three fundamental problems very quickly. The first is to very quickly figure out how to deal with data as it exists. Because everybody’s got dirty data; nobody’s got clean data. So, how can I rapidly determine which is the 20% of data that is gold, which is the 80% that may not be gold today, and how do we get to value very quickly?
Aible Sense enables us to completely and automatically, in a matter of minutes, do what expert data scientists would spend weeks doing to figure out whether there is value in the data. That was step one: automating the value evaluation of the data.
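To make the idea of automated data evaluation concrete, here is a minimal, hypothetical Python sketch of the kind of quick screening described above. It is not Aible Sense itself; it assumes a pandas DataFrame with a binary target column and uses standard open-source tools.

```python
# Illustrative sketch only (not Aible Sense): quick, automated checks that
# approximate what a data scientist might otherwise spend time doing by hand.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def quick_data_screen(df: pd.DataFrame, target: str) -> dict:
    """Run cheap checks: is the data dirty, and is there usable signal in it?"""
    report = {
        "rows": len(df),
        "missing_rate": df.isna().mean().to_dict(),              # dirty-data check
        "constant_cols": [c for c in df.columns if df[c].nunique() <= 1],
    }
    # Baseline model on numeric features as a rough proxy for "is there value here?"
    X = df.drop(columns=[target]).select_dtypes("number").fillna(0)
    y = df[target]
    if len(X.columns) > 0 and y.nunique() == 2:                  # assumes binary target
        auc = cross_val_score(HistGradientBoostingClassifier(), X, y,
                              cv=3, scoring="roc_auc").mean()
        report["baseline_auc"] = round(float(auc), 3)            # ~0.5 means little signal
    return report

# Example (hypothetical file and column names):
# print(quick_data_screen(pd.read_csv("churn.csv"), target="churned"))
```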
Step two was how do we involve business stakeholders early in the process? How do we get them involved so that they can start providing feedback on what we are trying to do and if it will actually create value for them? Aible Explore is a collaborative way to look at that data and the insights in the data. We are giving them something of value and insights that make sense, and they get involved early in the process as a result. And if you look at the case studies that we have done jointly with Intel, you’ll find in almost every project, the data had issues, and we had to iterate and improve the data sets.
In many of these cases, you’ll see business users get involved halfway through the project and change their minds about what the use case is supposed to be and what they want to solve. In one case, we thought we were going to predict the quantity of goods to be sold. But it turned out that the business stakeholder didn’t want to predict the quantity of goods sold; they were more concerned about food being wasted due to overstocking of perishables. The problems are related, but the solution is different.
Aible Explore brings customers in early and gives them valuable insights. And then, Aible Optimize starts by asking that business user, what are your business goals? Are you trying to maximize profit or maximize revenue? What are your constraints? Do you have a capacity problem? Do you have too few salespeople? And what are your expectations about the future?
All that gets considered in Aible Optimize to create a predictive model that delivers to those business realities and expectations. This whole thing gets done in a matter of days, quickly iterating and quickly involving multiple stakeholders. That’s how we solve the problem. We do it by turning this into a software-oriented solution that enables rapid iteration and collaboration at scale.
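As a concrete illustration of how business goals and constraints can shape what a model recommends, here is a minimal, hypothetical Python sketch. It is not Aible Optimize; the function and the numbers are invented, and it simply shows a churn model’s predictions being combined with the business inputs mentioned above: value of retention, cost of outreach, and limited team capacity.

```python
# Illustrative sketch only (not Aible Optimize): turn churn predictions into
# actions using business goals and constraints, not model accuracy alone.
import numpy as np

def pick_customers_to_retain(churn_prob: np.ndarray,
                             customer_value: np.ndarray,
                             retention_cost: float,
                             capacity: int) -> np.ndarray:
    """Return indices of customers to target, maximizing expected net value."""
    # Expected net value of contacting each customer (simplifying assumption:
    # the retention offer works whenever the customer would otherwise churn).
    expected_gain = churn_prob * customer_value - retention_cost
    # Contact only customers with positive expected value, best first,
    # and stop when the retention team runs out of capacity.
    order = np.argsort(-expected_gain)
    chosen = [i for i in order if expected_gain[i] > 0][:capacity]
    return np.array(chosen, dtype=int)

# Example with hypothetical numbers:
probs = np.array([0.9, 0.2, 0.6, 0.05])
values = np.array([1000.0, 5000.0, 300.0, 8000.0])
print(pick_customers_to_retain(probs, values, retention_cost=150.0, capacity=2))
```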
RTInsights: How does Intel come into play?
Arijit Bandyopadhyay: As part of the work that we’ve been doing with Aible and their value proposition of guaranteeing ROI from AI to the customer in 30 days, Intel has contributed to the joint effort on several fronts, from infrastructure to a holistic software perspective.
Intel is collaborating with Aible by bringing innovations in data management and DataOps, augmented data analytics, AI model training and inference, and the entire augmented analytics workflow and lifecycle management, leveraging current and upcoming Intel platforms, products, and capabilities such as Intel® AVX-512, Intel DL Boost, Intel SGX, Project Amber, the Intel® oneAPI Deep Neural Network Library, and many other in-silicon accelerators and adjacent software and infrastructure. Running analytics and AI on our latest-generation processors accelerates results and improves performance, overall time to value, and TCO for our partners and customers.
In addition to Intel hardware infrastructure, there are also various AI and analytics software assets from Intel, such as the Intel Distribution for Python with its optimized libraries (SciPy, NumPy, Pandas), the Intel Distribution of Modin, machine learning frameworks like scikit-learn and XGBoost, Intel-optimized versions of TensorFlow and PyTorch, and many other augmented open-source software products that are greatly helpful for data, augmented analytics, and AI projects.
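As an illustration of the drop-in style of these optimizations, here is a minimal sketch, assuming the Intel Extension for Scikit-learn (sklearnex) and Modin packages are installed. It shows the general pattern rather than Aible’s actual integration.

```python
# Minimal sketch: Intel's scikit-learn extension re-routes supported estimators
# to oneDAL-accelerated implementations; existing code stays the same.
from sklearnex import patch_sklearn
patch_sklearn()  # call before importing scikit-learn estimators

# After patching, ordinary scikit-learn code runs unchanged:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)
print(model.score(X, y))

# Likewise, pandas workloads can be scaled out by swapping the import
# (Modin needs an execution engine such as Ray or Dask installed):
# import modin.pandas as pd
```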
There is also a heterogeneous computing initiative at Intel known as oneAPI, a software layer that provides a unified application programming interface across different compute processors and accelerators in the Intel product line (CPUs, GPUs, FPGAs, etc.). It is one programming paradigm that can work everywhere, and it is becoming very popular among developers, engineers, and enterprises at large.
These optimized software libraries and frameworks are integrated in the Intel oneAPI AI Analytics Toolkit (which includes the Intel oneAPI Data Analytics Library and the Deep Neural Network Library). Intel Neural Compressor accelerates AI inference without sacrificing accuracy, and Intel is investing heavily in optimizing the latest transformer models for NLP and computer vision on Intel Architecture. As part of the joint work between Intel and Aible, we also shared some of our upcoming roadmap elements in Sapphire Rapids, such as next-generation advanced matrix extensions, which significantly improve classical machine learning performance as well as deep learning training and inference on Intel Architecture.
We package all these innovations so they can be deployed broadly, facilitating various deployment models across cloud hyperscalers, on-premises, and hybrid offerings, and extending some even to the latest-generation serverless architectures, which, as an example, was very useful for Aible.
There are also other adjacencies like our Optane memory, a new breakthrough in non-volatile memory technology, providing memory-like performance at storage-like capacity and cost. If you have very large models or a significantly large amount of data, Optane memory can be of great benefit.
Arijit Sengupta: What is fascinating about working with Intel is the innovation that Intel has across a wide variety of capabilities. It was extremely easy for us to adopt it. Take SciPy and NumPy, for example. We just put in the Intel optimized version of SciPy and NumPy instead of the version we were using, and the code sped up. And the beautiful part of this has been the collaboration with Intel, where they helped us get these technologies working perfectly with our platform.
You mentioned, for example, that cnvrg.io would be one way to get Aible working on companies’ on-prem servers as well. What was beautiful was that once we got this technology up and working, our customers started getting the benefits of all that hard work Intel had done, all that innovation Intel had built, without having to do any extra work.
So, Aible and Intel working together help solve the problems for the end customer without them having to deal with any complexity. And that is one of the crucial attributes of any really successful partnership. Are you making the customer’s life better? Are you affecting the customer’s experience? And that’s what Intel is doing, and that’s what Aible is trying to do. And together, I think we do an even better job.
Arijit Bandyopadhyay: We have this great collaboration between the two companies that multiplies the benefits and results for our mutual customers. From a process standpoint, customers bring their use cases, level-set expectations, and make sure that the business and the technical stakeholders are aligned. Then, they validate the data with Aible to ensure data quality meets a sufficient threshold. After that, the enterprise AI solution, with Aible running on Intel, delivers value and actionable insights to the customer by quickly mapping to and delivering on the target outcome.
RTInsights: What are the benefits of the technological differences in the Aible/Intel approach (such as being based on a serverless architecture) compared to traditional methods?
Arijit Sengupta: There are technical benefits to the serverless approach. For example, we were seeing two to three times, sometimes four times improvement in speed and cost of model training and in the total cost of ownership for serverless compared to servers.
That’s a technical benefit. The fundamental benefit is changing the art of the possible. People are concerned about 90% failure rates and projects taking nine months. Now, projects can be done in less than 30 days. In one project we did jointly, a multinational CPG company found ways to impact their revenue by $10 million in 17 days. In another case, the food waste project that I mentioned, there was a 10% food waste reduction, and we were able to accomplish this in 27 days. There was also a Fortune 500 company, where we helped them find material sales insights, which they did not have before, in 20 days. And there was a logistics use case where we got $4 million of benefits from expediting shipments in 17 days.
In all these cases, people had been working on these problems for a while. Although the problems were front of mind, these projects were going to take nine months and have a 90% failure rate. That’s where the art of the possible comes in. We are jointly changing that art of the possible.
We’ve been working on this for a long time. With the partnership, we have a major company like Intel stepping up, helping us take these stories to market, and helping people understand that you don’t have to think of AI as this really complex thing you spend years doing. You can take your biggest problems today and have a very focused approach that’ll get you results in 30 days.
With our joint approach, all the machine learning happens in your own environment. Aible doesn’t get to see the data; Intel doesn’t get to. It runs on Intel processors, but Intel as a company doesn’t get to see the data. Your data is your own.
Model training is done using very well-understood frameworks and techniques, like TensorFlow, GBM, and XGBoost, and you are getting results in 30 days.
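For readers who want a sense of what such a well-understood training step looks like, here is a minimal, generic XGBoost sketch. The file and column names are hypothetical, and it is not Aible’s pipeline; it simply shows standard training on data that stays in the customer’s own environment.

```python
# Generic illustration (not Aible's pipeline): train a churn classifier with
# XGBoost on data that never leaves the customer's environment.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_data.csv")                 # hypothetical file, stays on-prem
X = df.drop(columns=["churned"]).select_dtypes("number")  # assumes numeric features
y = df["churned"]                                      # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```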
This happens over and over again if you look at the history of technology. Big shifts in the market happen when the art of the possible changes. When you go from having to manually assemble a PC to having a PC that you can plug into a wall, and it just works, those are the big shifts. Or when you go from having to have experts do word processing to everybody being able to use the word processor themselves. That’s what is happening here. The art of the possible in AI is changing because of the Intel/Aible immediate impact program that we’re working on.
Arijit Bandyopadhyay: I’ll take this opportunity to also highlight the value of the Intel Disruptor Initiative. It’s an innovation program from Intel that works with various important, cutting-edge, and high-growth ISVs and partners. The collaboration between Aible and Intel started in that realm, under the Intel Disruptor Program.
Through this umbrella initiative, we help various partners with a 360-degree approach, from technical optimization and acceleration of Data, Analytics, and AI assets on Intel Architecture to the reduction of the total cost of ownership for the ISV and for the customer across various deployment models. The program also provides some business and go-to-market benefits. You can learn more about the program here.
Through this program, we had the opportunity of engaging with Arijit Sengupta and Aible. The joint effort brought in various learnings. And to sum it up, packaging our solutions together for customers let us provide outstanding value to them.
RTInsights: Can you give us some examples of customer successes?
Arijit Sengupta: One of the best parts of this project is it’s very fact-driven. We have been collecting case studies, capturing exactly what the customer’s experience was. Let me talk you through a couple of them.
One was a manufacturer where we were trying to look at the impact of late shipments. Remember earlier when I said the data set is never perfect? We started the project and got the initial analysis done by the second day. Then they reviewed the results with the business team and came back and said, “No, our assumptions were wrong.” We had to go back to the business and change our assumptions. With business feedback, we had to change the data set itself. The business case changed, so the data set changed. They then got us a new version of the data set. Despite all of that, the entire project finished in 17 days. So, three changes to the data set and two changes in use cases, but in 17 days, you get to an AI that can help them figure out where to expedite because a shipment will be late. That’s one example.
Another one is the food overstock case I mentioned earlier. Again, it’s one of my favorite case studies because they ended up changing their use case twice, and they ended up changing their data sets a few times.
In another case, we helped a CPG company find $10 million of additional value by helping them identify which people they should focus on for training if they want a salesperson to get their first order as quickly as possible.
But what was funny was that the Chief Technology Officer and the Chief Data Officer, in that case, just couldn’t believe that we were doing the work that fast.
One of them stopped the meeting and said, “No, no, no. I want you to do this in front of me. Start from raw data, run the training in front of me.” And I’m now reading his quote. It says, “The first set of models completed training in less than a minute. And several hundred models were completed in less than three minutes. This was for a significant-sized business data set that had taken us much longer to analyze manually.”
What you’re seeing in every one of these cases is that the reality of dirty data, business users changing their minds, and speed and collaboration being crucial doesn’t go away. Technology doesn’t change reality, but by speeding it up, by turning something that would’ve been a matter of months into a matter of minutes, you fundamentally change the art of the possible and get customers to value in less than 30 days.
What was fascinating here was that even well-meaning executives were trying to give us more time to complete projects. But it is possible to get to value in 30 days if you use the right technology, if you collaborate across the stakeholders, and if you fail fast and iterate. If the data set is the wrong data set, fail fast. Move to a second data set. Iterate, iterate, iterate, and get value in 30 days.
Think of the world that will open up as businesses become convinced that they can transform themselves by using AI in 30 days. That’s the shift. That’s like the browser being on everyone’s desktops. That’s like everyone having access to their own PC at work. That shift is what transforms productivity. The art of the possible is not changed by theoretical things that work in labs but take nine months to do with 90% failure rates.
Once a technology is predictable, accessible for everyone, and results-oriented, the world changes. Intel has been at the technological forefront of innovating for AI. And now they’re helping the whole market shift towards that kind of productivity.
Arijit Bandyopadhyay: In summary, I want to reiterate this to anyone investing in an AI project. Yes, it is a challenging space, but set high goals. AI has come a long way. Get things arranged properly in your organization to take advantage of it. Data is one, talent is another, and expectations are another. Stay current with the latest tools and technologies for AI on Intel Architecture. Engage, and you will see this magic happen. Once the results come out, you’ll only want to scale further. We’ve seen very happy customers. The gap between what they initially expected and what they got is mind-boggling. So, I really want our joint customers to know about the partnership and the value that Aible on Intel brings to the table.
Arijit Sengupta: Arijit, to the point you just made, this is a case study that hasn’t been written up yet, but it was really funny because the client thought it was going to be really hard. They got started on the first day. Thankfully the data was really good. So, they went from data to a model being deployed in the first working session. They stopped, and they said, “Did we just do the project already? Did we just do it, is it done?”
We told them they still might want to get feedback from their business stakeholders because their assumptions may not be right. “You’re going to improve this. But yes, if you’re asking me, did I just go from raw data to a model created to a model being deployed in my own environment in one hour? Yes. We just did that.”
That’s a magic show, right? Because until these business stakeholders come in and change the assumptions and make the AI more realistic, that’s just a magic show. But it is possible today to go from raw data to a deployed model in less than an hour if you really want.
To see case studies, visit: https://www.aible.com/casestudies.
Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.