Ray, the open-source machine learning framework, has reached version 2.0. Used by companies such as OpenAI, Shopify, and Instacart, it has become one of the most popular ways for organizations to scale and manage increasingly large and complex artificial intelligence (AI) workloads over the past two years.
Ray enables machine learning (ML) models to scale across hardware resources, and it can also support MLOps workflows that span multiple ML tools. Ray 1.0 was released in September 2020 and has gone through several revisions in the two years since.
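At its core, Ray lets ordinary Python functions fan out across the cores and machines of a cluster. The snippet below is a minimal sketch of that core task API; the function and values are placeholders chosen purely for illustration.

```python
# Minimal sketch of Ray's core task API: a plain Python function becomes
# a distributed task that runs in parallel across available resources.
import ray

ray.init()  # starts a local cluster, or connects to an existing one

@ray.remote
def square(x: int) -> int:
    return x * x

# Launch eight tasks in parallel and gather their results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```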
The next major milestone came today at the Ray Summit in San Francisco, where Ray 2.0 became generally available. The release extends the technology with the new Ray AI Runtime (AIR), a runtime layer for executing ML services, along with features that make it easier to build and manage AI workloads.
Alongside the release, Anyscale unveiled a new enterprise platform for running Ray. The company also disclosed a new $99 million funding round led by Addition and Intel Capital, with participation from Foundation Capital.
“Ray started as a small project at UC Berkeley and it has grown far beyond what we imagined at the outset,” said Robert Nishihara, cofounder and CEO at Anyscale, during his keynote at the Ray Summit.
It is difficult to overstate Ray's significance and reach in today's AI landscape.
During his keynote, Nishihara ran through a long list of well-known technology companies that use Ray. Among them is e-commerce platform provider Shopify, which uses Ray to scale its machine learning platform, built on TensorFlow and PyTorch.
Another Ray user, grocery delivery firm Instacart, relies on the technology to help train large numbers of ML models. Nishihara also noted that Amazon uses Ray for a variety of workloads.
Ray is also a foundational tool at OpenAI, the AI research pioneer behind the DALL-E image generation system and the GPT-3 large language model.
“We’re using Ray to train our largest models. So, it has been very helpful for us in terms of just being able to scale up to a pretty unprecedented scale,” stated Greg Brockman, CTO and cofounder of OpenAI, at the Ray Summit.
Brockman also sees Ray as developer-friendly, and the fact that it is a third-party tool that OpenAI does not have to maintain itself is an added advantage.
“When something goes wrong, we can complain on GitHub and get an engineer to go work on it, so it reduces some of the burden of building and maintaining infrastructure,” explained Brockman.
One of Nishihara’s main objectives with Ray 2.0 was to make it easier for more users to take advantage of the technology while delivering performance improvements that help both large and small users.
According to Nishihara, organizations often start out tied to a specific framework for a given workload, only to discover later that they want to use other frameworks as well. A company might begin with TensorFlow alone, for example, then realize it also wants to use PyTorch and Hugging Face in the same ML job. The Ray AI Runtime (AIR) in Ray 2.0 makes it simpler to combine ML workloads across these tools, as the sketch below illustrates.
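As a rough illustration of what AIR looks like in practice, the following sketch uses the Ray 2.x TorchTrainer from Ray AIR; the model, data, and hyperparameters are stand-ins, not anything Anyscale demonstrated.

```python
# Minimal sketch of distributed training with Ray AIR (Ray 2.x),
# assuming Ray and PyTorch are installed; the model and data are toys.
import torch
import torch.nn as nn
from ray.air import session
from ray.air.config import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # Each worker runs this loop; AIR handles process setup and
    # coordination across the cluster.
    model = nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(3):
        x = torch.randn(8, 4)          # placeholder batch
        loss = model(x).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        session.report({"epoch": epoch, "loss": float(loss)})

trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=2),  # scale out by changing a number
)
result = trainer.fit()
print(result.metrics)
```

The same trainer pattern applies to AIR's other framework integrations, which is what lets a single pipeline mix tools rather than being locked to one.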
With the Ray Serve deployment graph functionality, Ray 2.0 seeks to address another prevalent problem: model deployment.
“It’s one thing to deploy a handful of machine learning models. It’s another thing entirely to deploy several hundred machine learning models, especially when those models may depend on each other and have different dependencies. As part of Ray 2.0, we’re announcing Ray Serve deployment graphs, which solve this problem and provide a simple Python interface for scalable model composition,” said Nishihara.
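A deployment graph composes individual Serve deployments into a directed acyclic graph using `.bind()` calls. The sketch below follows the Ray 2.0 deployment graph API; the preprocessing step, model, and weight are hypothetical examples, not a real pipeline.

```python
# Minimal sketch of a Ray Serve deployment graph (Ray 2.0): two
# deployments composed so one feeds the other, served as a single app.
from ray import serve
from ray.serve.dag import InputNode
from ray.serve.drivers import DAGDriver

@serve.deployment
def preprocess(value: float) -> float:
    # Hypothetical preprocessing step.
    return value / 2

@serve.deployment
class Model:
    def __init__(self, weight: float):
        self.weight = weight

    def predict(self, value: float) -> float:
        return value * self.weight

# Wire the deployments into a graph: input -> preprocess -> model.
with InputNode() as user_input:
    cleaned = preprocess.bind(user_input)
    model = Model.bind(weight=3.0)
    output = model.predict.bind(cleaned)

# DAGDriver exposes the graph over HTTP; each node scales independently.
serve.run(DAGDriver.bind(output))
```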
By making ML workloads simpler to build and manage, Nishihara hopes Ray will enable broader use of AI in the future.
“We’d like to get to the point where any developer or any organization can succeed with AI and get value from AI,” Nishihara added.