Fair Isaac Corporation

05/25/2021 | News release

It’s 2021. Do You Know What Your AI Is Doing?

Responsible AI has been one of my big topics for a few years now, the subject of many articles, blogs and talks I've given to audiences around the world. So how are companies faring in adopting Responsible AI, making sure they are using artificial intelligence ethically, transparently, securely and in their customers' best interests?

The short answer: not great. A new report released today by FICO and market intelligence firm Corinium, entitled The State of Responsible AI, finds that most companies are deploying AI at significant risk. Here are a few topline findings that illustrate why:

  • 65% of respondents' companies can't explain how specific AI model decisions or predictions are made
  • 73% have struggled to get executive support for prioritizing AI ethics and Responsible AI practices
  • Only one-fifth (20%) actively monitor their models in production for fairness and ethics

The report, the second annual executive research effort by FICO and Corinium focused on Chief Analytics, Chief AI and Chief Data Officers, examines how global organizations are applying artificial intelligence technology to business challenges, and how responsibly they are doing so. In addition to a troubling widespread inability to explain how AI model decisions or predictions are made, the study found that 39% of board members and 33% of executive teams have an incomplete understanding of AI ethics.

With worldwide revenues for the AI market (including software, hardware and services) forecast to grow 16.4% year over year in 2021 to $327.5 billion, companies' reliance on AI technology is heading in only one direction: up. The report's findings point to an urgent need to elevate the importance of AI governance and Responsible AI to the boardroom level; organizations are increasingly leveraging AI to automate key processes that, in some cases, are making life-altering decisions for their customers. Not understanding how these decisions are made, and whether they are ethical and safe, creates enormous legal vulnerabilities and business risk.

Who's Responsible for Responsible AI?

Despite the widespread embrace of AI, what is driving the lack of awareness of its responsible use? The study showed that there is no consensus among executives about what a company's responsibilities should be when it comes to AI. For example, almost half (43%) of respondents say they have no responsibilities beyond regulatory compliance to ethically manage AI systems that make decisions which may indirectly affect people's livelihoods. In my view, this speaks to the need for more regulation, if the designers of AI largely don't see their responsibility as going beyond what existing regulations enforce, or, in most cases, fail to enforce.

To drive the responsible use of AI in their organizations, senior leadership and boards must understand and enforce auditable, immutable AI model governance. They need to establish governance frameworks that monitor AI models and ensure the decisions those models produce are accountable, fair, transparent and responsible. Executive teams and Boards of Directors cannot succeed on a 'do no evil' mantra alone; they need a model governance enforcement guidebook and corporate processes to monitor AI in production. AI leaders, for their part, need to establish standards for their firms where none exist today and promote active monitoring. Only 20% of respondents actively monitor AI in production today.
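To make "active monitoring" concrete, here is a minimal sketch, in Python, of one check a governance framework might run against a model's production decisions: the demographic parity gap between two customer segments. The metric, field names and alert threshold are illustrative assumptions on my part, not prescriptions from the report; a real monitoring pipeline would track many more signals under an auditable policy.

# Minimal sketch: monitoring a deployed model's decisions for one simple
# fairness signal (demographic parity gap). The metric, threshold and data
# fields are illustrative assumptions, not taken from the FICO/Corinium report.

from dataclasses import dataclass
from typing import Iterable


@dataclass
class Decision:
    group: str      # protected-attribute segment, e.g. "A" or "B"
    approved: bool  # the model-driven outcome being audited


def approval_rate(decisions: Iterable[Decision], group: str) -> float:
    """Share of decisions in the given segment that were approvals."""
    in_group = [d for d in decisions if d.group == group]
    if not in_group:
        return 0.0
    return sum(d.approved for d in in_group) / len(in_group)


def demographic_parity_gap(decisions: list[Decision], group_a: str, group_b: str) -> float:
    """Absolute difference in approval rates between two segments."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))


if __name__ == "__main__":
    # A small batch of recent production decisions (synthetic example data).
    batch = [
        Decision("A", True), Decision("A", True), Decision("A", False),
        Decision("B", True), Decision("B", False), Decision("B", False),
    ]
    gap = demographic_parity_gap(batch, "A", "B")

    # The alert threshold is an illustrative assumption; real governance
    # policies would set it per model, per regulation and per business context.
    THRESHOLD = 0.10
    status = "ALERT" if gap > THRESHOLD else "OK"
    print(f"demographic parity gap = {gap:.2f} -> {status}")

In practice, a check like this would run on scheduled batches of scored decisions, write its results to an immutable audit log, and alert the governance team when a threshold is breached: the kind of enforceable, auditable process that boards should expect to see.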

The Urgent Need to Fight Bias

What can businesses do to help turn the tide? Combating AI model bias is an essential first step, but many enterprises have yet to operationalize it effectively: the study found that 80% of AI-focused executives are struggling to establish processes that ensure responsible AI use.

Businesses recognize that things need to change, as the overwhelming majority (90%) of respondents agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) believe that AI ethics and Responsible AI will become a core element of their organization's strategy within two years.

It's clear that the business community is committed to driving transformation through AI-powered automation. However, senior leaders and Boards of Directors need to be aware of the risks associated with the technology and the best practices to proactively mitigate them. AI has the power to transform the world, but in my view, as the popular saying goes, with great power comes great responsibility.

Get your complete copy of the FICO-sponsored report, The State of Responsible AI, here. Keep up with my latest AI insights, opinions and data science breakthroughs by following me on Twitter @ScottZoldi and on LinkedIn.