10/22/2019 | News release
Edge computing and doughnuts share something in common: the closer they are to the consumer, the better.
A trip to the corner doughnut shop may be deliciously quick. But a box of doughnuts within reach of your desk is instant gratification.
The same holds true for edge computing. Send data to an AI application running in the cloud, and the round trip delays the answer. Send it to a nearby edge server, and it's like grabbing directly from that pink box of glazed raised and rainbow sprinkles.
Chances are, you can get a small taste of edge computing right from your pocket: The latest smartphones live at the 'edge' of telecom networks, processing smarter voice responses and snazzier photos.
Edge computing - a decades-old term - is the concept of capturing and processing data as close to the source as possible.
Edge computing requires placing processors at the points where gigabytes to terabytes of streaming data are being collected - autonomous vehicles, robots on factory floors, medical imaging machines in hospitals, cameras and checkout stations in retail stores - so that autonomous applications can operate quickly.
The unveiling of 5G networks - expected to clock in at speeds 10x faster than 4G - only increases the possibilities for AI-enabled services, requiring further acceleration of edge computing.
New Google, Apple and Samsung smartphones pack more AI processing to better interpret users' questions and polish images in milliseconds using computational photography.
Yet the vast volumes of data streamed from Internet of Things devices are orders of magnitude greater than what people produce using smartphones.
A bonanza of connected cars, robots, drones, mobile devices, cameras and sensors for IoT, as well as medical imaging devices, has tipped the scales toward edge computing. The surge of data used in these compute-intensive workloads is demanding high-performance edge computing to deploy AI.
Split-second AI computations today require edge computing to reduce the latency and bandwidth problems caused by shuttling data back and forth to remote servers for processing.
Data centers are centralized servers often situated where real estate and power are less expensive. Even on the zippiest fiber optic networks, data can't travel faster than the speed of light. This physical distance between data and data centers causes latency.
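The distance penalty is easy to quantify. A minimal back-of-the-envelope sketch, using the common approximation that light in optical fiber travels at roughly two-thirds its vacuum speed (the distances below are illustrative, not from the article):

```python
# Light in fiber covers roughly 200,000 km/s (~2/3 of c), so distance
# alone puts a hard floor on round-trip latency, before any routing or
# processing overhead is added.
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_latency_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds, from distance alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

far = round_trip_latency_ms(2000)   # a distant data center: >= 20 ms
near = round_trip_latency_ms(10)    # a nearby edge server: ~0.1 ms
```

No amount of network engineering removes that floor; only shortening the distance does.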
Edge computing closes the gap.
Edge computing can be run at multiple network nodes to literally close the distance between data and processing to reduce bottlenecks and accelerate applications.
At the periphery of networks, billions of IoT and mobile devices operate on small, embedded processors, which are suited to basic applications like capturing video.
That would be just fine if industries and municipalities across the world weren't applying AI to data from IoT devices. But they are - developing and running compute-intensive models that demand new approaches to traditional edge computing.
Fortune 500 companies and startups alike are adopting AI at the edge. Cities, for example, are developing AI applications to relieve traffic jams and increase safety.
Verizon uses NVIDIA Metropolis, the IoT application framework that, combined with Jetson's deep learning capabilities, can analyze multiple streams of video data to look for ways to improve traffic flow, enhance pedestrian safety, optimize parking in urban areas, and more.
Miovision and others' work in this space can be accelerated by edge computing from the NVIDIA Jetson compact supercomputing module and insights from NVIDIA Metropolis. The energy-efficient Jetson can handle multiple video feeds simultaneously for AI processing. The combination helps relieve both network bottlenecks and traffic jams.
Edge computing scales up, too. Industry application frameworks like NVIDIA Metropolis and AI applications from third parties run atop the NVIDIA EGX platform for optimal performance.
Edge computing for AI has many benefits, like bringing AI computing to the environments where data is being generated across industries, including smart retail, healthcare, manufacturing, transportation and smart cities.
This shift in the computing landscape offers businesses new service opportunities and can unlock business efficiencies and cost-savings.
In place of traditional edge servers, the NVIDIA EGX platform offers compatibility across NVIDIA AI, from the Jetson line of supercomputing modules to full racks of NVIDIA T4 servers.
Businesses running edge computing for AI gain the flexibility of deploying low-latency AI applications on the tiny NVIDIA Jetson Nano, which takes just a few watts to deliver a half-trillion operations per second for such tasks as image recognition.
A rack of NVIDIA T4 servers delivers more than 10,000 trillion operations per second for the most demanding real-time speech recognition and other compute-heavy AI tasks.
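Those two quoted figures span an enormous range. A quick sketch of the arithmetic, using only the numbers above:

```python
# Comparing the article's figures: half a trillion ops/s for a Jetson
# Nano versus more than 10,000 trillion ops/s for a rack of T4 servers.
JETSON_NANO_TOPS = 0.5   # trillions of operations per second
T4_RACK_TOPS = 10_000    # trillions of operations per second

# The same platform spans roughly a 20,000x range in throughput.
ratio = T4_RACK_TOPS / JETSON_NANO_TOPS
```

That four-orders-of-magnitude spread is the point: the same software stack scales from a few-watt module to a full rack.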
Also, updates at the periphery of the AI-driven edge network are a snap. The EGX software stack runs on Linux and Kubernetes, allowing remote updates from the cloud or edge servers to continuously improve applications.
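As a hypothetical sketch of what such a remote update can look like under Kubernetes (the deployment and image names below are illustrative, not NVIDIA's): swapping a container image in a deployment spec triggers a rolling update across the fleet.

```python
# Illustrative only: build a Kubernetes strategic-merge patch that swaps
# a container image. Applied via `kubectl patch` or the Kubernetes API,
# it causes each node to pull the new image - no on-site visit required.
def rolling_update_patch(container: str, image: str) -> dict:
    """Return a deployment patch that replaces one container's image."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }

patch = rolling_update_patch(
    "video-analytics", "registry.example.com/video-analytics:v2"
)
```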
And NVIDIA EGX servers are tuned for CUDA-accelerated containers.
The world's largest retailers are enlisting edge AI to become smart retailers. Intelligent video analytics, AI-powered inventory management, and customer and store analytics together offer improved margins and the opportunity to deliver better customer experiences.
Using the NVIDIA EGX platform, Walmart is able to process in real time the more than 1.6 terabytes of data its stores generate each second. It can use AI for a wide variety of tasks, such as automatically alerting associates to restock shelves, retrieve shopping carts or open new checkout lanes.
Connected cameras numbering in the hundreds or more can feed AI image recognition models processed on site by NVIDIA EGX. Meanwhile, smaller networks of video feeds in remote locations can be handled by Jetson Nano, linking with EGX and NVIDIA AI in the cloud.
Store aisles can be monitored by fully autonomous robots with conversational AI, powered by Jetson AGX Xavier and running Isaac for SLAM navigation. All of this is compatible with EGX or NVIDIA AI in the cloud.
Whatever the application, NVIDIA T4 and Jetson GPUs at the edge provide a powerful combination for intelligent video analytics and machine learning applications.
Factories, retailers, manufacturers and automakers are generating sensor data that can be cross-referenced to improve services.
This sensor fusion will enable retailers to deliver new services. Robots can use more than just voice and natural language processing models for conversational interactions. Those same bots can run pose estimation models on video feeds. Linking the voice and gesture information helps robots better understand what products or directions customers are seeking.
Sensor fusion could also create new user experiences that automakers adopt for competitive advantage. Automakers could use pose estimation models to understand where a driver is looking, along with natural language models that understand a request and correlate it to 7-Eleven locations on the car's GPS map.
Snap your fingers, point to a 7-Eleven and say 'pull over for doughnuts,' and you're in for a ride to the future, with your autonomous vehicle inferring your destination, aided by sensor fusion and AI at the edge.
Gamers are notorious for demanding high-performance, low-latency computing power. High-quality cloud gaming at the edge ups the ante. Next-generation gaming applications involving virtual reality, augmented reality and AI are an even bigger challenge.
Telecommunications providers are using NVIDIA RTX Servers - which deliver cinematic-quality graphics enhanced by ray tracing and AI - to bring high-performance gaming to players around the world. These servers power GeForce NOW, NVIDIA's cloud gaming service, which transforms underpowered or incompatible hardware into powerful GeForce gaming PCs at the edge.
Taiwan Mobile, Korea's LG U+, Japan's SoftBank and Russia's Rostelecom have all announced plans to roll out the service to their cloud gaming customers.
With edge AI, telecommunications companies can develop next-generation services to offer their customers, providing new revenue streams.
Using NVIDIA EGX, telecom providers can analyze video camera feeds with image recognition models to help with everything from counting foot traffic to monitoring store shelves and deliveries.
For example, if a 7-Eleven ran out of doughnuts early in the morning on a Saturday in its store display, the convenience store manager could receive an alert that it needs restocking.
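A minimal sketch of that alerting logic, with the detection step mocked (a real pipeline would count items from camera frames via an image recognition model; the threshold and item names are illustrative):

```python
# Given per-shelf item counts from a vision model, flag shelves that
# have fallen below a restock threshold so the manager gets an alert.
RESTOCK_THRESHOLD = 5  # illustrative value

def restock_alerts(shelf_counts: dict[str, int]) -> list[str]:
    """Return alert messages for shelves running low."""
    return [
        f"Restock needed: {item} ({count} left)"
        for item, count in shelf_counts.items()
        if count < RESTOCK_THRESHOLD
    ]

# Saturday morning at the hypothetical 7-Eleven display
alerts = restock_alerts({"doughnuts": 2, "coffee": 40})
```

The inference runs on site, so the alert fires in seconds rather than waiting on a round trip to a distant data center.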
So in the future, when you bite into a fresh doughnut, you might have edge computing to thank.
High-performance deep learning inferencing happens at the edge with the NVIDIA Jetson embedded computing platform, and through NVIDIA EGX platform servers and data centers with NVIDIA Tesla GPU accelerators.