Red Hat Inc.

05/07/2024 | Press release | Distributed by Public on 05/07/2024 08:43

Red Hat and Run:ai Optimize AI Workloads for the Hybrid Cloud

DENVER - RED HAT SUMMIT 2024 - May 7, 2024 -

Red Hat, Inc., the world's leading provider of open source solutions, and Run:ai, the leader in AI optimization and orchestration, today announced a collaboration to bring Run:ai's resource allocation capabilities to Red Hat OpenShift AI. By streamlining AI operations and optimizing the underlying infrastructure, this collaboration enables enterprises to get the most out of AI resources, maximizing both human- and hardware-driven workflows on a trusted MLOps platform for building, tuning, deploying and monitoring AI-enabled applications and models at scale.


Through our collaboration with Run:ai, we're enabling organizations to maximize AI workloads at scale without sacrificing the reliability of an AI/ML platform or valuable GPU resources, wherever needed.

Steven Huels

Vice President and General Manager, AI Business Unit, Red Hat

GPUs are the compute engines driving AI workflows, enabling model training, inference, experimentation and more. These specialized processors, however, can be costly, especially when being used across distributed training jobs and inferencing. Red Hat and Run:ai are working to meet this critical need for GPU resource optimization with Run:ai's certified OpenShift Operator on Red Hat OpenShift AI, which helps users scale and optimize wherever their AI workloads are located. Run:ai's cloud-native compute orchestration platform on Red Hat OpenShift AI helps:

  • Address GPU scheduling issues for AI workloads with a dedicated workload scheduler to more easily prioritize mission-critical workloads and confirm that sufficient resources are allocated to support those workloads.
  • Utilize fractional GPU and monitoring capabilities to dynamically allocate resources according to pre-set priorities and policies and increase infrastructure efficiency.
  • Gain improved control and visibility over shared GPU infrastructure to provide easier access and resource allocation across IT, data science and application development teams.
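On Kubernetes-based platforms such as OpenShift, capabilities like these are typically consumed through standard workload objects. The sketch below is a hypothetical pod spec illustrating the general pattern: the `runai-scheduler` scheduler name and the `gpu-fraction` annotation follow Run:ai's publicly documented conventions, but the exact keys, values and the container image here are assumptions that should be verified against the Run:ai documentation for your OpenShift AI release.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-inference
  annotations:
    # Request half of one physical GPU; the Run:ai scheduler can pack
    # fractional workloads onto shared devices. (Assumed annotation key;
    # verify against your Run:ai release.)
    gpu-fraction: "0.5"
  labels:
    # Run:ai projects map teams to quotas, priorities and policies
    # (hypothetical project name).
    project: data-science-team
spec:
  # Hand scheduling to Run:ai instead of the default kube-scheduler.
  schedulerName: runai-scheduler
  containers:
  - name: inference
    image: quay.io/example/model-server:latest  # placeholder image
    resources:
      limits:
        memory: "8Gi"
```

Because scheduling decisions are driven by the project's configured quota and priority rather than static GPU limits in each pod, administrators can adjust allocation policy centrally while individual workloads stay unchanged.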

Run:ai's certified OpenShift Operator is available now. In the future, Red Hat and Run:ai plan to continue building on this collaboration with additional integration capabilities for Run:ai on Red Hat OpenShift AI. This aims to support more seamless customer experiences and further expedite moving AI models into production workflows with even greater consistency.

Red Hat Summit

Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners.

Supporting Quotes

Steven Huels, vice president and general manager, AI Business Unit, Red Hat

"Increased adoption of AI and demand for GPUs requires enterprises to optimize their AI platform to get the most out of their operations and infrastructure, no matter where they live on the hybrid cloud. Through our collaboration with Run:ai, we're enabling organizations to maximize AI workloads at scale without sacrificing the reliability of an AI/ML platform or valuable GPU resources, wherever needed."

Omri Geller, CEO and founder, Run:ai

"We are excited to partner with Red Hat OpenShift AI to enhance the power and potential of AI operations. By leveraging Red Hat OpenShift's MLOps strengths with Run:ai's expertise in AI infrastructure management, we are setting a new standard for enterprise AI, delivering seamless scalability and optimized resource management."

In short

Red Hat is bringing Run:ai's resource allocation capabilities to Red Hat OpenShift AI to help streamline AI operations and optimize the underlying infrastructure.
