10/21/2019 | Press release | Distributed by Public on 10/21/2019 18:05
AI is no longer just a research project. It's solving real-world problems for organizations, which now need to figure out where to deploy their AI models to make faster decisions.
With the convergence of AI, the Internet of Things and the approaching 5G infrastructure, the opportunity is ripe for companies to push their models beyond the data center to the edge, where billions of sensors are streaming data and making real-time decisions is a reality.
Enterprises deploying AI workloads at scale are using a combination of on-premises data centers and the cloud, bringing the AI models to where the data is being collected. Deploying these workloads at the edge, say in a retail store or parking garage, can be very challenging when on-site IT expertise isn't available the way it is in a data center.
Kubernetes eliminates many of the manual processes involved in deploying, managing and scaling applications. It provides a consistent, cloud-native deployment approach across on-prem, the edge and the cloud.
However, setting up Kubernetes clusters to manage hundreds or even thousands of applications across remote locations can be cumbersome, especially when human expertise isn't readily available at every edge location. We're addressing these challenges through the NVIDIA EGX Edge Supercomputing Platform.
NVIDIA EGX is a cloud-native, software-defined platform designed to make large-scale hybrid-cloud and edge operations possible and efficient.
Within the platform is the EGX stack, which includes an NVIDIA driver, Kubernetes plug-in, NVIDIA container runtime and GPU monitoring tools, delivered through the NVIDIA GPU Operator. Operators codify operational knowledge and workflows to automate lifecycle management of containerized applications with Kubernetes.
Deployed as a Helm chart, the GPU Operator is a cloud-native method to standardize and automate the deployment of all the components needed to provision GPU-enabled Kubernetes systems. NVIDIA, Red Hat and others in the cloud-native community have collaborated on creating the GPU Operator.
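As a rough sketch, installing the GPU Operator on an existing Kubernetes cluster via Helm might look like the following. The repository URL and chart name reflect NVIDIA's public NGC Helm repository, but check the current NGC documentation for exact values and supported Helm versions:

```shell
# Add NVIDIA's public Helm repository (URL assumed; verify against NGC docs)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator; it rolls out the NVIDIA driver, container
# runtime, Kubernetes device plug-in and GPU monitoring as operands
helm install --wait --generate-name nvidia/gpu-operator
```

Once installed, the operator watches the cluster and provisions GPU components on nodes automatically, so no per-node manual setup is required.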
The GPU Operator also allows IT teams to manage remote GPU-powered servers the same way they manage CPU-based systems. This makes it easy to bring up a fleet of remote systems with a single image and run edge AI applications without additional technical expertise on the ground.
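Once the GPU Operator has prepared a node, scheduling a GPU-accelerated edge workload reduces to a standard Kubernetes manifest. A minimal sketch, in which the pod name, container image and GPU count are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-inference                 # placeholder name
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/tensorrt:19.10-py3   # example NGC image; tag illustrative
    resources:
      limits:
        nvidia.com/gpu: 1              # request one GPU via the NVIDIA device plug-in
```

Because GPU resources are requested through the same `resources` stanza used for CPU and memory, IT teams can apply their existing Kubernetes tooling and policies unchanged.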
The EGX stack architecture is supported by hybrid-cloud management partners, such as Canonical, Cisco, Microsoft, Nutanix, Red Hat and VMware, to further simplify deployments and provide a consistent experience from cloud and data center to the edge.
Today at Mobile World Congress Los Angeles, we announced the expansion of the NGC-Ready program with NGC-Ready for Edge systems to support edge deployments. These systems undergo additional security and remote system management tests, which are fundamental requirements for edge deployments. Qualified systems like these are ideal for running the EGX stack, providing an easy onramp to hybrid deployments.
Validated NGC-Ready for Edge systems are available from the world's leading manufacturers, including Advantech, Altos Computing, ASRock RACK, Atos, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, MiTAC, QCT, Supermicro and TYAN.
NGC now offers a Helm chart registry for deploying and managing AI software. Helm charts are powerful cloud-native tools to customize and automate how and where applications are deployed across Kubernetes clusters.
NGC's Helm chart registry contains AI frameworks, NVIDIA software including the GPU Operator, NVIDIA Clara for medical imaging and NVIDIA Metropolis for smart cities, smart retail and industrial inspection. NGC also hosts Helm charts for third-party AI applications, including DeepVision for vehicle analytics, IronYun for video search and Kinetica for streaming analytics.
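Deploying one of these charts follows the standard Helm workflow; a hedged sketch, where the chart and release names are illustrative rather than actual NGC listings:

```shell
# Point Helm at the NGC chart registry (URL assumed; see ngc.nvidia.com)
helm repo add ngc https://helm.ngc.nvidia.com/nvidia
helm repo update

# Inspect a chart's configurable values, then install it with overrides
helm inspect values ngc/video-analytics-demo > values.yaml   # chart name illustrative
helm install ngc/video-analytics-demo -f values.yaml --generate-name
```

The `values.yaml` override file is how a single chart is customized per site, for example pointing each edge location's deployment at its local camera streams.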
With NGC-Ready Support Services, developer and operations teams get access to a private Helm registry for their NGC-Ready for Edge systems to push and share their Helm charts. This lets the teams take advantage of consistent, secure and reliable environments to speed up continuous cycles of integration and deployment.
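Publishing a team's own chart to that private registry might look like the sketch below, assuming the NGC CLI's chart push interface; the org and chart names are placeholders:

```shell
# Package the chart directory into a versioned archive
helm package ./edge-app        # produces edge-app-0.1.0.tgz (version from Chart.yaml)

# Push to the team's private NGC registry (org/chart names are placeholders;
# command assumes the NGC CLI's registry chart interface)
ngc registry chart push my-org/edge-app:0.1.0
```

Teammates can then pull the same versioned chart from the private registry, keeping every edge site on an identical, auditable release.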
To easily provision GPU-powered Kubernetes clusters across different platforms and quickly deploy AI applications with Helm charts and containers, go to ngc.nvidia.com.