07/19/2021 | Press release | Distributed by Public on 07/19/2021 06:18
With the exponential increase of connected devices and the migration of data to the cloud, the data deluge has become a real impediment, consuming considerable time and power. For Edge-IoT, the current challenge is to drive decision making as close as possible to the sensors, reducing the amount of data exchanged with the cloud and thereby improving overall power efficiency, data privacy and response time.
To do this, chip designers must embed AI computing in their chips to make these decisions, and they must find inventive MCU architectures that minimize power consumption and maximize the device's battery lifetime.
With this perspective, Dolphin Design offers two processing platforms, Chameleon and Raptor, to help address the data deluge challenge.
Both platforms are delivered with software tools: drivers, a tool suite and a virtual platform.
To meet its customers' stringent time-to-market requirements, Dolphin Design has strengthened its development efforts by joining forces with the expert teams of CEA-List in a joint lab.
The joint lab will draw on both partners' solutions and know-how to bring a flexible new computing platform, with AI capabilities, to the embedded electronics markets. Dolphin Design has integrated several hardware IP blocks, developed by CEA-List, into its Chameleon and Raptor product portfolio. CEA-List researchers continue to deploy and expand their N2D2 deep learning platform to improve processing efficiency and reduce the power consumption of systems that integrate the PNeuro® hardware accelerator.
PNeuro® is a single instruction, multiple data (SIMD), low-footprint, programmable accelerator from CEA-List that enables Dolphin Design to add AI capabilities to the low-power Chameleon and Raptor products. Chameleon is an event-based MCU subsystem platform embedding several standard peripherals, an autonomous DMA, a fine-grained power management unit and a PNeuro® with 32 processing elements. Raptor is a programmable hardware accelerator specialized in neural network (NN) inference and vision processing, which includes a host core, a DMA, and a PNeuro® with 128 processing elements. With Chameleon and Raptor, Dolphin Design will cover a wide range of low-power AI applications.
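The SIMD principle behind PNeuro® — one instruction applied simultaneously across many processing elements — can be illustrated with a minimal sketch. This is a generic model of SIMD execution, not PNeuro®'s actual instruction set; the 32-lane width mirrors Chameleon's PNeuro® configuration, and the multiply-accumulate operation is an illustrative choice (it dominates NN inference workloads):

```python
# Pure-Python sketch of SIMD execution: a single instruction drives all
# data lanes at once. 32 lanes mirror the 32 processing elements of the
# PNeuro® instance in Chameleon; the operation itself is illustrative.
LANES = 32

def simd_mac(acc, weights, inputs):
    """One SIMD multiply-accumulate step: every lane computes acc + w * x."""
    assert len(acc) == len(weights) == len(inputs) == LANES
    return [a + w * x for a, w, x in zip(acc, weights, inputs)]

# Each lane processes its own data element under the same instruction.
acc = [0.0] * LANES
weights = [0.5] * LANES
inputs = [float(i) for i in range(LANES)]
acc = simd_mac(acc, weights, inputs)  # lane i now holds 0.5 * i
```

Raptor's 128-element PNeuro® follows the same model with four times the lane count, trading area for throughput.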
Moreover, by expanding the N2D2 tool's features to support the PNeuro® architecture, CEA provides a powerful and independent deep neural network platform, dedicated to embedded application design, for Chameleon and Raptor product users. N2D2 can generate the optimized PNeuro® program directly from the user's high-level neural network application and is compliant with the ONNX exchange format readily available in all major deep learning frameworks. It also integrates advanced quantization techniques that fully leverage the lower-precision PNeuro® core to preserve application performance with reduced circuit area and power footprint. For complete interoperability of deep learning models, further developments are planned to improve standard ONNX support in N2D2, adding it to the list of ONNX-compliant machine learning frameworks and increasing the market reach of Raptor and Chameleon with mini-Raptor.
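The lower-precision quantization mentioned above can be sketched in its simplest form. This shows generic symmetric int8 post-training quantization, the standard technique for running floating-point NN weights on a reduced-precision core; it is not the actual N2D2 quantization flow, and all names are illustrative:

```python
# Minimal symmetric int8 post-training quantization sketch. This is the
# generic technique for mapping float weights onto a lower-precision
# core, not the actual N2D2 algorithm; all names are illustrative.

def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with a shared scale."""
    peak = max(abs(w) for w in weights)
    if peak == 0.0:
        return [0] * len(weights), 1.0
    scale = peak / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.02, -0.7, 0.35, 1.4, -1.1]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# int8 storage needs 4x less memory than float32; the rounding error per
# weight is bounded by half the scale factor.
```

In practice, tools such as N2D2 combine this kind of weight quantization with calibration and retraining passes to keep accuracy loss minimal.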
Finally, by combining CEA's PNeuro® virtual model with the SESAM/VPSim virtual prototyping tool, Dolphin Design can improve design quality, performance and reliability, explore designs faster, and start software development earlier in the design process.