10/28/2021 | News release | Distributed by Public on 10/28/2021 09:10
One size doesn't fit all. The need for intelligent, personalized experiences powered by AI is ever-growing. Our devices are producing more and more data that could help improve our AI-powered experiences. How can we learn and efficiently process all this data from edge devices? On-device learning rather than cloud training can address these challenges. In this blog post, I'll describe how our latest research is making on-device learning feasible at scale.
In the past, AI was primarily associated with the cloud. We have moved from a cloud-centric AI where all the training and inference occurs in the cloud, to today where we have partially distributed AI with inference happening on both the device and cloud. In the future, we expect not only inference but also the training or adaptation of models to happen on the device. We call this fully-distributed AI, where devices will see continuous enhancements from on-device learning, complementing cloud training. Processing data closest to the source helps AI to scale and provides important benefits such as privacy, low latency, reliability, and efficient use of network bandwidth.
On-device learning modifies the AI model, which was originally trained in the cloud with a global dataset, based on the new local data that a device (or set of devices) encounters. This can improve performance by personalizing to an individual user or adapting a shared model among a group of users and their environment. The AI can also continuously learn over time, adjusting to a world and personal preferences that are dynamic rather than static.
The benefits of on-device learning are clear. So why aren't we doing it? The reality is that there are a number of challenges to overcome to make on-device learning feasible at scale. For example:
To overcome some of the feasibility and deployment issues for on-device learning, Qualcomm AI Research is actively investigating several topics and seeing very encouraging results.
Federated learning provides the best of both worlds from on-device training and cloud training. It maintains the privacy benefit and scale of on-device learning by keeping the data local, while also getting the benefit of learning from diverse data across many users by having the cloud aggregate many different locally-trained models. Raw data is never sent to the cloud, and model updates can be shared in ways that still preserve privacy.
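To make the aggregation step concrete, here is a minimal sketch of federated averaging in the style of FedAvg, using plain numpy. This is an illustrative simulation, not Qualcomm's framework: the logistic-regression task, client counts, and hyperparameters are all assumptions for demonstration. Each simulated device trains on private data it never shares; only the resulting weights are sent to the "cloud," which averages them weighted by local dataset size.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """Run a few steps of logistic-regression SGD on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid forward pass
        grad = X.T @ (pred - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Cloud-side aggregation: average models weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Simulate three devices whose raw data never leaves the device.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
for _ in range(10):                            # communication rounds
    updates, sizes = [], []
    for _ in range(3):
        n = int(rng.integers(20, 50))
        X = rng.normal(size=(n, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)  # shared underlying task
        updates.append(local_train(global_w, X, y))
        sizes.append(n)
    global_w = federated_average(updates, sizes)   # only weights are shared
```

Note that real deployments add pieces this sketch omits, such as secure aggregation and client sampling, which is part of why producing a good federated model is harder than the averaging step suggests.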
Of course, the device must be capable of on-device learning for federated learning to work, and there are also many other system challenges for producing a good federated model. Our research is largely focused on solving these challenges. Consider user verification, where a person is authenticated through biometrics. Deep learning is a powerful tool for user verification but typically requires large amounts of very personal data from a diverse set of users. Federated learning enables the training of such models while keeping all this personal data private. In fact, our user verification models trained with our federated learning methods achieve performance competitive with state-of-the-art techniques that do not preserve privacy. And our research is not all theory. We have developed a federated learning framework for research and application development on mobile, and we will be demonstrating this user verification use case at upcoming conferences. We want to make federated learning available at scale and address deployment challenges.
Low-complexity on-device learning is key to making all three of the learning techniques I described above possible. Backpropagation, the technique used to train AI models, is computationally demanding and difficult to run on most power- and thermal-constrained devices.
We're investigating several approaches to efficiently adapt the AI model on a device, such as quantized training, efficient models for backpropagation, and adaptation based on inference results.
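One way to see why these approaches help is a sketch of head-only adaptation: freeze a cloud-trained backbone and backpropagate only through a small final layer, which avoids storing activations and gradients for the bulk of the network. This is an illustrative toy, not one of the specific methods above; the random-feature backbone, logistic-regression head, and hyperparameters are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cloud-trained backbone, stood in for here by a fixed random
# feature extractor. On device it is frozen: forward pass only, no gradient
# state, which is what keeps memory and compute low.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)    # ReLU features, inference only

def adapt_head(head, X, y, lr=0.05, steps=200):
    """Update only the last layer on local data (a logistic-regression head)."""
    w = head.copy()
    F = features(X)                           # one pass through the frozen part
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-F @ w))
        w -= lr * F.T @ (pred - y) / len(y)   # backprop stops at the head
    return w

# Local data with a user-specific rule the global model has not seen.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(float)
head = adapt_head(np.zeros(8), X, y)
acc = np.mean((1.0 / (1.0 + np.exp(-features(X) @ head)) > 0.5) == y)
```

Quantized training pushes in the same direction by shrinking the precision, and therefore the cost, of the weights, activations, and gradients that full backpropagation would otherwise carry at high precision.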
Overall, we're very excited about the potential of on-device learning since it can lead to better models that leverage all the data available from edge devices while providing benefits such as privacy, low latency, reliability, and efficient use of network bandwidth. We have a broad range of research directions for on-device and federated learning to ease deployment, make them efficient, and create better AI models. Be sure to tune into my webinar where I will go into much more detail on our research. If we get this right, you can expect enhanced experiences that are continually improving and be able to trust that your personal data is kept safe. We can't wait to make this a reality.