11/08/2019 | News release | Distributed by Public on 11/08/2019 15:46
Recently, Lattice was a sponsor at The Linley Fall Processor Conference. This year's conference focused on the latest processor and system technologies that enable AI experiences, from the datacenters providing cloud-based analytics to the billions of AI-enabled client devices operating at the Edge.
For Lattice, it was a perfect opportunity to share the latest news about our award-winning solutions stack for enabling next-generation smart devices with extremely low power consumption, Lattice sensAI™. Hoon Choi, a Lattice Fellow who also leads the development of machine learning technology and solutions on our FPGAs, delivered a presentation titled 'Machine Learning on Tiny, Low Power FPGAs', which included information about our recently announced sensAI performance enhancements and the new and improved reference designs that make it easier than ever to support low power Edge AI applications.
Hoon Choi discusses low power Edge AI and Lattice sensAI at The Linley Fall Processor Conference
The latest performance enhancements to the Lattice sensAI solution stack include support for deeper quantization, which takes fuller advantage of the internal memory of Lattice's iCE40 UltraPlus™ FPGA. This allows customers to double the size of their neural network models for more accurate AI performance.
Support for 8-bit quantization during training yields better NN model accuracy
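The core idea behind 8-bit quantization can be sketched in plain Python. This is an illustrative symmetric-scaling scheme, not Lattice's actual implementation; the function names and the choice of a [-127, 127] range are assumptions for the example:

```python
def quantize_int8(weights, scale=None):
    """Map float weights onto 8-bit integers in [-127, 127] using a shared scale."""
    if scale is None:
        # Pick the scale so the largest-magnitude weight maps to +/-127.
        scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.03, 0.5]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Because each weight now occupies one byte instead of four, roughly four times as many parameters fit in the same on-chip memory; applying the quantization during training (rather than only after it) lets the model learn to compensate for the rounding error.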
Furthermore, sensAI running on Lattice ECP5™ FPGAs now supports layers used in MobileNet and ResNet neural network models, which can process higher resolution images to deliver more accurate AI performance with no increase in the FPGA's power consumption.
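MobileNet's suitability for low-power hardware comes largely from its use of depthwise separable convolutions in place of standard convolutions. A rough multiply-accumulate (MAC) count comparison, sketched below with illustrative layer dimensions (this is a back-of-the-envelope model, not the sensAI implementation), shows why:

```python
def conv_macs(h, w, cin, cout, k):
    """MACs for a standard k x k convolution (stride 1, same padding)."""
    return h * w * cin * cout * k * k

def depthwise_separable_macs(h, w, cin, cout, k):
    """MACs for a depthwise k x k conv followed by a 1x1 pointwise conv."""
    depthwise = h * w * cin * k * k      # one k x k filter per input channel
    pointwise = h * w * cin * cout       # 1x1 conv mixes channels
    return depthwise + pointwise

# Example layer: 32x32 feature map, 64 input channels, 128 output channels, 3x3 kernel.
standard = conv_macs(32, 32, 64, 128, 3)
separable = depthwise_separable_macs(32, 32, 64, 128, 3)
```

For this example layer, the separable form needs roughly an eighth of the MACs of the standard convolution, which is why such layers can process higher-resolution images without a corresponding increase in power draw.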
But perhaps the most exciting news about the latest updates to the sensAI solutions stack is the set of new application reference designs for fast implementation of popular Edge AI applications. These let sensAI developers easily integrate key phrase detection or human face recognition, with some truly compelling features: