Fair Isaac Corporation

03/29/2024 | Press release

Responsible AI and Why Governance Matters

I recently attended the World AI Cannes Festival (WAICF) in Cannes, France, where I heard urgent calls for generative artificial intelligence (GenAI) governance. Data scientists were clamoring for the technology to be far more effectively governed, with transparency and responsible guard rails. At the same time, I saw an irrational exuberance for GenAI, with presenters claiming it will unlock trillions of dollars in annual global productivity.

While the debate was happening, I was able to share the power of FICO's controlled, pragmatic approach to AI deployment. I'm proud that FICO takes the application of AI and machine learning (ML) very seriously, through a structured approach to Responsible AI governance that emphasizes interpretable ML, Ethical AI, explainability, Auditable AI, and accountability.

"LLMs Suck!"

On the first day of the conference, Meta's VP and chief AI scientist Yann LeCun delivered a standing-room-only keynote, "Objective-Driven AI: towards AI systems that can learn, remember, reason, and plan," offering a highly informed technical perspective on why, in his words, "LLMs suck!" (And so does machine learning, in his view.) His perspective isn't surprising: we hear about epic GenAI fails daily, such as large language model (LLM) chatbots that hallucinate about justice or airfare refund policies, or image generators that vividly illustrate the worst in racial stereotypes, and I am seeing a counterculture rise up. LeCun netted out his negative view on LLMs with:

  • Supervised learning (SL) requires large numbers of labeled samples.
  • Reinforcement learning (RL) requires insane amounts of trials.
  • Self-supervised learning (SSL) works great, but generative prediction only works for text and other discrete modalities … [leading up to]
  • LLMs have limited knowledge of the underlying reality - they have no common sense, no memory, and they can't plan their answers.

LeCun continued his presentation by outlining an objective-driven AI architecture that can "learn, reason, [and] plan, yet is safe and controllable" affording a much more governable path "toward autonomous machine intelligence."

Sitting in the audience, I was glad to hear another senior data scientist acknowledge the limitations and risks of AI and GenAI technology. I am in complete agreement with LeCun's closing point, that if we don't respect these limitations, we risk harming ourselves or our applications, and the advances needed in GenAI may be slowed, restricted, or even outlawed.

Irrational Exuberance at Scale

On the other end of the spectrum, Nayur Khan of McKinsey's QuantumBlack AI practice painted the future of GenAI with unmitigated rosiness. He presented a statistically rich argument that GenAI can unlock up to $4.4 trillion in annual global productivity, and help organizations achieve a competitive advantage, sharing statistics including:

  • 2x the mentions of AI in S&P 500 earnings calls, indicating a surge in interest and/or deployments.
  • A 400%+ increase in global VC investment in generative AI.
  • Potential to automate 60%-70% of employees' work.

A New York Times article further elaborates on the automation element, citing a recent report by McKinsey Global Institute:

Half of all work will be automated between 2030 and 2060, the report said.

McKinsey had previously predicted that A.I. would automate half of all work between 2035 and 2075, but the power of generative A.I. tools - which exploded onto the tech scene late last year - accelerated the company's forecast.

"Generative A.I. has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities," the report said.

But - and this is a big but - Khan's presentation included an important footnote: less than 10% of companies can generate AI impact at scale. In my mind, this is the number to focus on, and indeed the problem to solve. I believe the inevitable backlash against AI, for its perceived "failure to deliver business value," is the crash where today's irrational exuberance is headed. But this is in fact a people problem, not a technology problem.

FICO Has Achieved Strong Results with Responsible AI

Although most of the hype at WAICF was around Generative AI, the session FICO led, "Using blockchain, Responsible AI and open banking to expand credit access", explored an application operationalized under the tenets of Responsible AI and now available to improve credit decisions in Brazil. I presented with Uri Tintore, founder and co-CEO of our partner Belvo, and together we covered:

  1. The status of Open Finance in Brazil and data availability.
  2. The need for Responsible AI that is robust, explainable, ethical, and auditable.
  3. The need for interpretable machine learning to address the 'black box' of machine learning, creating transparency and accountability.
  4. Using blockchain in AI model development to ensure auditability.

Uri dove into how leveraging Open Finance data requires extensive processing and complex data enrichment, and why Belvo chose FICO to develop and operationalize Responsible AI machine learning models that better understand customers' financial behavior and improve future outcomes through enriched data. I then explained FICO's approach to expanding financial inclusion, using interpretable machine learning to address the "black box of ML." Figure 1 presents a simplified version of that process.

Figure 1. Belvo uses FICO's behavioral transaction profiling technology to gain contextual insight into customer financial behaviors, and interpretable machine learning, to produce an Open Finance score that can improve financial inclusivity in Brazil.
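
To make the "interpretable" part concrete, here is a minimal Python sketch of a points-based scorecard over binned behavioral features, the style of model where every score decomposes into explicit, auditable reason codes. The feature names, bins, and point weights are my illustrative assumptions, not the actual Belvo or FICO model:

```python
# A minimal, hypothetical sketch of an interpretable points-based scorecard.
# Feature names, bins, and point weights are illustrative assumptions only,
# not Belvo's or FICO's actual model.

BASE_SCORE = 600

# Each feature maps to (lower_bound, upper_bound, points) bins; an applicant
# falls into exactly one bin per feature.
SCORECARD = {
    "avg_monthly_inflow": [(0, 500, -20), (500, 2000, 10), (2000, float("inf"), 35)],
    "overdraft_days_90d": [(0, 1, 30), (1, 10, 0), (10, float("inf"), -40)],
}

def score_applicant(features: dict) -> tuple[int, list[str]]:
    """Return a score plus human-readable reason codes for every point move."""
    score, reasons = BASE_SCORE, []
    for name, bins in SCORECARD.items():
        value = features[name]
        for lo, hi, points in bins:
            if lo <= value < hi:
                score += points
                reasons.append(f"{name}={value} in [{lo}, {hi}): {points:+d} points")
                break
    return score, reasons

score, reasons = score_applicant({"avg_monthly_inflow": 1200, "overdraft_days_90d": 3})
print(score)           # 610 for this applicant
for r in reasons:      # each point assignment traces to one feature bin
    print(r)
```

Because each point assignment maps to a single feature bin, a lender can tell any applicant exactly which behaviors moved their score, which is precisely the transparency the "black box" critique demands.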

The entire development and operation of the resulting Belvo Open Finance Score, powered by FICO, is immutably codified in an AI model governance blockchain. The blockchain ensures that machine learning models, and the relationships in the data that drive those models, are explainable and justifiable - the foundation of successful AI governance. Furthermore, codification allows advanced techniques such as transaction analytics and interpretable machine learning to leverage this customer data to improve outcomes.
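
To give a feel for the mechanics, the sketch below shows the core idea of codifying governance records in an append-only, hash-chained ledger: each block commits to the previous block's hash, so any later edit to the development history is detectable. The record fields and events are illustrative assumptions; a production AI governance blockchain is considerably richer:

```python
# A minimal sketch of an append-only, hash-chained governance ledger.
# Record fields and events are illustrative assumptions, not FICO's schema.
import hashlib
import json
import time

def add_block(chain: list, record: dict) -> None:
    """Append a record, committing to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to past records breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i and block["prev_hash"] != chain[i - 1]["hash"]):
            return False
    return True

ledger: list = []
add_block(ledger, {"event": "feature_approved", "feature": "avg_monthly_inflow"})
add_block(ledger, {"event": "model_trained", "model_id": "open_finance_v1"})
print(verify(ledger))  # True; tampering with any past record flips this to False
```

The point of the chain is not the cryptography itself but the audit property: model validators and regulators can verify that the recorded development history has not been rewritten after the fact.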

Power of Transaction Analytics: 6x More Loans with 3x Fewer Losses

To demonstrate the potential of the Belvo Open Finance Score, we shared a "before" scenario as a case study of a Brazilian financial services provider. This institution had a very high loan reject rate (~84%) and a 20% default rate on those approved, which indicated a strong opportunity to improve business results.

Using the Open Finance score, we showed a simple strategy: extend credit to the 20% of rejected applications with the highest scores and decline the 50% of approvals with the lowest scores, which would translate into just a 6% bad rate but substantially more access to credit. The result? Six times more loans with three times lower loss rates, providing a life-altering opportunity for Brazil's financially underserved.

Figure 2. The Belvo Open Finance Score powered by FICO allows 6x more loan applicants to receive loans, at 3x less risk of loss to the bank.
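
To illustrate the mechanics of a swap-set strategy like the one above, here is a simplified Python sketch that approves the top-scoring 20% of previously rejected applicants and declines the bottom-scoring 50% of prior approvals. The score distributions are synthetic stand-ins; the 6x/3x results in the case study come from the provider's actual portfolio data:

```python
# A simplified sketch of the swap-set strategy: swap in the top-scoring 20%
# of prior rejects, swap out the bottom-scoring 50% of prior approvals.
# Score distributions below are synthetic assumptions for illustration only.
import random

random.seed(0)
rejected_scores = [random.gauss(580, 40) for _ in range(840)]  # ~84% rejected
approved_scores = [random.gauss(660, 40) for _ in range(160)]  # ~16% approved

def swap_set(rejected, approved, swap_in=0.20, swap_out=0.50):
    # Cutoff above which a previously rejected applicant is now approved.
    swap_in_cut = sorted(rejected)[int(len(rejected) * (1 - swap_in))]
    # Cutoff below which a previously approved applicant is now declined.
    swap_out_cut = sorted(approved)[int(len(approved) * swap_out)]
    newly_approved = [s for s in rejected if s >= swap_in_cut]
    kept_approved = [s for s in approved if s >= swap_out_cut]
    return newly_approved, kept_approved

newly, kept = swap_set(rejected_scores, approved_scores)
print(len(newly), len(kept))  # sizes of the swap-in and retained populations
```

In practice, the cutoffs would be set from observed bad rates by score band rather than fixed percentiles, but the mechanics of swapping populations across the approve/decline line are the same.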

This Is What Responsible AI Is All About

FICO believes it's fundamentally important not to jeopardize the opportunity to improve financial futures, either by sloppily applying non-interpretable AI or by failing to responsibly leverage AI, ML, and the data that can power change. This requires discipline, and taking the time and care to implement a Responsible AI governance structure.

To work successfully long-term, and to stand up to regulatory scrutiny, an AI governance strategy must apply Auditable AI constructs such as AI governance blockchains to establish compliance with Responsible AI standards.

How FICO Can Help You with Responsible AI