05/31/2023 | Press release
Microsoft 365 Copilot inherits all of Microsoft's cloud security controls, but these were not designed for new AI capabilities. Security and risk management leaders must implement verifiable controls for AI data protection, privacy, and filtering of large language model content inputs and outputs.
We just published Quick Answer: How to Make Microsoft 365 Copilot Enterprise-Ready From a Security and Risk Perspective, in which my colleagues Matt Cain, Jeremy D'Hoinne, Nader Henein, and Dennis Xu and I explore this topic.
At the time of writing, Microsoft 365 Copilot is not, in Gartner's view, fully "enterprise-ready," at least not for enterprises operating in regulated industries or subject to privacy regulations such as the EU's GDPR or the forthcoming EU Artificial Intelligence Act.
Microsoft might, however, add more security and privacy controls to Copilot before it becomes generally available, informed by its work with participants in its current preview program.
Our note includes recommendations for making Microsoft 365 Copilot ready for enterprise use from a security and risk perspective. (These recommendations can also be applied to other enterprise applications that use third-party-hosted large language models.)
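To make the idea of filtering large language model inputs and outputs concrete, here is a minimal sketch of a filtering gateway that sits between users and a hosted model. Everything here is illustrative: the pattern list, the `redact` and `filtered_call` names, and the pass-through model are assumptions for demonstration, not part of any Microsoft product or Gartner recommendation.

```python
import re

# Illustrative deny-list of patterns suggesting sensitive data.
# A real deployment would use an enterprise DLP/classification service instead.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-like numbers
    re.compile(r"(?i)\bconfidential\b"),     # classification markers
]

def redact(text: str) -> str:
    """Replace any matched sensitive span with a placeholder token."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def filtered_call(prompt: str, model_call) -> str:
    """Screen the prompt on the way in and the completion on the way out.

    model_call stands in for any third-party-hosted LLM endpoint.
    """
    safe_prompt = redact(prompt)         # input filtering
    completion = model_call(safe_prompt)  # call the hosted model
    return redact(completion)             # output filtering
```

The key design point is that filtering happens on both sides of the model call, so sensitive content is blocked before it reaches the third-party host and again before a completion reaches the user.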
Coincidentally, Microsoft CTO Kevin Scott was one of the signatories on the Center for AI Safety's statement on AI Risk:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Certainly, there are existential risks that come with new generations of AI. Those risks are well beyond the scope of this research, which instead addresses the risks of using what Matt Cain calls "Everyday AI."