Kicking off the third and final day of VentureBeat’s Transform 2020 digital conference, Nvidia VP and GM of embedded and edge computing Deepu Talla offered a fireside chat on the increasing role of edge AI in business computing — a topic that has been widely discussed over the past year but has remained somewhat amorphous. Talla presented a clear thesis: Edge AI exists to solve specific business problems that demand some combination of in-house computing, high speed, and low latency that cloud AI can’t deliver.
As of today, most state-of-the-art AI runs in the cloud, or at least generates AI-powered answers in the cloud, based on spatially and temporally aggregated data from devices with some edge processing capabilities. But as Talla and Lopez Research founder Maribel Lopez explained, some AI answer processing is already moving to the edge, in part because sensors are now generating an increasing volume of data that can’t all be sent to the cloud for processing.
It’s not just about handling all that data, Talla explained; edge AI located within or close to the point of data gathering can in some cases be a more practical or socially beneficial approach. For a hospital, which may use sensors to monitor patients and gather requests for medicine or assistance, edge processing means keeping private medical data in house rather than sending it off to cloud servers. Similarly, a retail store could use numerous cameras for self-checkout and inventory management and to monitor foot traffic. Streaming such granular data to the cloud could bog down a network, whereas an on-site edge server can process it with lower latency and a lower total cost.
Over the past year, Talla said, AI has benefited from the availability of great hardware and architectures, including GPUs with tensor cores for dedicated AI processing, plus secure, high-performance networking gear. Unlike smartphones, which get replaced every 2-3 years, edge servers will remain in the field for 5, 10, or more years, making software-focused updates critical. To that end, Nvidia’s EGX edge computing software brings traditional cloud capabilities to edge servers and will be updated to improve over time. The company has also launched industry-specific edge frameworks, such as Metropolis (smart cities), Clara (health care), Jarvis (conversational AI), Isaac (robotics), and Aerial (5G), each supporting forms of AI on Nvidia GPUs.
It’s possible to combine features from multiple frameworks, Talla explained, like using Clara Guardian to help hospitals go touchless, with Jarvis monitoring cameras in patient rooms and then automatically handling spoken requests such as “I want water.” Using Metropolis smart city tools, the same system could handle AI processing for the hospital’s entire fleet of cameras, dynamically counting the number of people in the building or in rooms. Some of these tasks can happen today with cloud AI, but moving much or all of it to the edge for faster responsiveness makes sense — for certain businesses.
Talla didn’t suggest that cloud AI is either on the way out or antiquated, however. In fact, he noted that answers generated by cloud AI are currently fantastic and said edge AI’s appeal will depend on its ability to solve a business’ specific problem better than a cloud alternative. It remains to be seen whether an in-house edge AI system will have an equal, lower, or higher total cost of ownership for businesses compared with cloud platforms, as well as which approach ultimately delivers the best overall experience for the company and its customers.
Even so, Talla said during a Q&A session that a significant amount of processing will shift from the cloud to the edge over the next five years, though an answer generated by edge AI may also be just one component of a larger system fusing edge and cloud AI processing. He also noted that edge servers will increasingly serve multiple functions simultaneously, such that a single edge computer may handle 5G communications, video analytics, and conversational AI for a company, rather than being dedicated to one purpose.