Fair Isaac Corporation

June 1, 2023

Artificial Intelligence: From Hollywood to the Mainstream

Up until just a few years ago, artificial intelligence (AI) was something you mainly heard about in movies like 2001: A Space Odyssey, Terminator and Chappie. Today, ChatGPT and other new AI-powered chat and search tools are getting a ton of coverage in the media, on social sites, and around the workplace.

While generative AI does produce some concerning outputs, like when folks goad chat AIs into saying scary things, what many fail to realize is that AI is pretty darn useful in ways Hollywood would never bother to predict.

Image created by DALL-E 2 - Oil painting in the style of Van Gogh of the robot HAL from the movie 2001, reading a book about AI

Use Your AI Responsibly

Giving credit where it's due, John Oliver (not in Hollywood, in New York) recently chased the AI headlines as only a skilled satirist can, making you laugh at the wacky stories about people's dark AI encounters while wondering why they are reported as news at all.

But Oliver hit a crucial point that's worth reiterating: There is a big difference between explainable AI and black box AI, or artificial intelligence that arrives at conclusions, makes decisions or gives an output that is unexplainable. As AI applications are put to work at more complex tasks like managing capacity on power grids and monitoring financial transactions for fraud, all of us at FICO firmly believe that the only type of AI to apply is the responsible, explainable, ethical kind.

Unexplainable AI

When we hear stories about generative AI systems that declare love or a desire to die, we're often quick to equate it with a sentient being crying for help, rather than a mathematical model or algorithm regurgitating information from the data used to train it.

Even though AI systems are not, in fact, sentient beings, there are some significant concerns when we can't point to an explanation for their output. The dangers with unexplainable AI don't need to go as far as Skynet, WOPR, or VIKI to be problematic. AI that is biased, unexplainable and/or producing incorrect or inconsistent results can result in regulatory and compliance concerns for any financial institution unlucky enough to use it.

Training Properly to Avoid AI Bias

Training is a vital component of many facets of life. Whether it's spending time at the range to try and cure my slice (still working on that) or hitting the gym to work off holiday excesses, training allows us to incrementally improve. But we've all heard the adage - practice makes permanent, not perfect.

Anytime you train, you have to make sure you're training on the right things, or you'll learn the wrong ones. In golf practice, that often means separating what you feel from what is real, using feedback aids like cones and pool noodles to understand what your swing is actually doing rather than what you think it is doing.

In the same way, any sort of AI has to be built and trained correctly to avoid creating AI bias, or potentially injecting human bias into the AI system. Using the right data to train an algorithm toward a specified, well-defined outcome is crucial. Any data used needs to be understood extremely well, and there needs to be a clear understanding of what kinds of outcomes or results are expected in order to avoid bias.

Oliver mentions in his rant that there are plenty of stories of AI training gone wrong, like the model that learned to flag the small rulers used in dermatological photos as malignant - because in the training data, a ruler appeared in every photo of a malignant tumor. This kind of unintended shortcut is not unique to AI, but it is exactly the sort of failure that explainability exists to surface.
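
To make that concrete, here is a minimal sketch (entirely synthetic data and hypothetical feature names, using scikit-learn) of how an interpretable model makes that kind of shortcut visible: if a "ruler_present" flag dominates the learned coefficients, the model is reading the artifact, not the lesion.

```python
# A minimal sketch with synthetic data: an interpretable model exposes a
# spurious "ruler" shortcut through its coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical tabular features extracted from dermatology photos.
lesion_size = rng.normal(4.0, 1.5, n)        # millimetres; a real but weak signal
border_irregularity = rng.uniform(0, 1, n)   # another real but weak signal
malignant = (0.3 * lesion_size + 2.0 * border_irregularity
             + rng.normal(0, 1, n)) > 2.6

# The confound: clinicians photographed most malignant lesions with a ruler.
ruler_present = np.where(malignant,
                         rng.random(n) < 0.95,   # ruler in ~95% of malignant photos
                         rng.random(n) < 0.10)   # rarely present otherwise

X = np.column_stack([lesion_size, border_irregularity, ruler_present])
model = LogisticRegression().fit(X, malignant)

# With an interpretable model, the shortcut shows up in the weights: a
# dominant ruler_present coefficient means the model learned the artifact.
for name, coef in zip(["lesion_size", "border_irregularity", "ruler_present"],
                      model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")
```

When the artifact's weight dwarfs the real features, the fix is in the data, not the algorithm: crop or mask the rulers, or rebalance the training set, before trusting the model.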

Interpretable AI

This is why training interpretable AI on the correct types of data, with a representative and sufficiently large dataset, and with a clear understanding of what the model should be looking at (tumors, not rulers), is a necessary baseline for applying AI.

But equally important, especially in the context of fraud, is to be concrete about the objectives of the model and to ensure you have the training data to support those objectives. Models need to generalize, and no single behavior should be allowed to dominate.
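
Fraud is the classic case of one behavior threatening to dominate: genuine transactions vastly outnumber fraudulent ones, so an untuned model can look highly accurate while missing almost all fraud. Here is a minimal sketch with synthetic data; scikit-learn's class_weight="balanced" is just one common counter, not a claim about FICO's method:

```python
# Synthetic illustration: with ~1% fraud, a naive model can "win" by
# predicting 'legit' for nearly everything; reweighting restores recall.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(7)
n = 20_000
X = rng.normal(size=(n, 4))
fraud = rng.random(n) < 0.01     # roughly 1% of transactions are fraud
X[fraud] += 0.8                  # fraud looks only moderately different

naive = LogisticRegression().fit(X, fraud)
balanced = LogisticRegression(class_weight="balanced").fit(X, fraud)

# The naive model scores ~99% accuracy while catching almost no fraud;
# reweighting trades some false positives for far better fraud recall.
print("naive fraud recall:   ", recall_score(fraud, naive.predict(X)))
print("balanced fraud recall:", recall_score(fraud, balanced.predict(X)))
```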

Another key is knowing when predictions and outcomes are drifting out of range. One approach is to apply constant updates, using both structured and unstructured data, which can help keep the model producing the expected results without drifting towards bias in one direction or another. The gold standard, however, is to have monitoring in place that detects drift in the data or scores over time.
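
One widely used drift check (shown here as an illustration, not a claim about FICO's internal tooling) is the Population Stability Index, which compares today's score distribution against the distribution the model was trained on:

```python
# A minimal sketch of Population Stability Index (PSI) drift monitoring.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (expected) and a current (actual) sample."""
    # Bin edges come from the baseline so both samples use the same buckets.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # A small floor avoids division by zero and log(0) in empty buckets.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical fraud-score samples: training-time baseline vs. this week.
baseline = np.random.default_rng(0).beta(2, 5, 10_000)
current = np.random.default_rng(1).beta(2.6, 5, 10_000)  # shifted population

print(f"PSI = {psi(baseline, current):.3f}")
# Common rule of thumb: < 0.10 stable, 0.10-0.25 worth watching, > 0.25 act.
```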

Where AI Helps Financial Institutions

For decades now, financial institutions (FIs) and fintechs have been deploying AI for a number of specific use cases, but only recently has the concept of generative AI begun to generate significant headlines. Here are some of the more popular use cases for AI & machine learning (ML), and notes on why interpretability is so important in each case.

  • Transactional monitoring for fraud: Transaction monitoring has become a hot topic in the news and in fraud circles because of the rise of real-time payments fraud and highly publicized schemes involving criminal cryptocurrency exchanges. What AI-powered transaction monitoring can do better, faster, and cheaper than a large group of humans is pull together vast data from many sources, find patterns fast, and help detect fraud in real time (a simplified scoring sketch follows this list). With the proper training, the data will signal a variety of transaction frauds immediately, helping mitigate losses and protect both customers and banks.
  • Customer experience improvement: Just as AI can crunch a load of disparate fraud-related data faster than K.I.T.T. can calculate a Turbo Boost launch, it can do the same for customer experiences. A properly trained AI will quickly connect the dots between events, like a failed transaction, a call to a contact center that ended on hold, and a series of logins to the mobile app. The AI can spot this as trouble, and trigger proactive outreach to the customer by the method they prefer, turning a negative customer experience into a positive one. Human decisions can still help resolve any outstanding issues, but AI systems can help move things along much more quickly.
  • Risk management: While covering all of a large bank's risk management challenges might sound like an impossible task, AI's ability to digest enterprise-wide data, find insights, and suggest or automate decisions can make it both a compelling and feasible proposition. AI-driven tools that help improve models and generate real-time insights are likely to become the norm for teams assessing risks, detecting fraud, and ensuring compliance.
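
To show why interpretability matters in that first use case, here is a deliberately simplified sketch (hypothetical features, point values, and reason codes, not FICO's actual models): every score ships with reason codes, so an analyst, a customer, or a regulator can see exactly why a transaction was flagged.

```python
# A toy, rules-style fraud score that is explainable by construction:
# each contribution to the score carries a human-readable reason code.
from dataclasses import dataclass, field

@dataclass
class ScoredTransaction:
    score: int                           # 0-999, higher means riskier
    reasons: list[str] = field(default_factory=list)

def score_transaction(amount: float, seconds_since_last_txn: float,
                      new_payee: bool, country_mismatch: bool) -> ScoredTransaction:
    result = ScoredTransaction(score=0)
    # Each rule adds points AND a reason code; the reason codes are what
    # make the output explainable rather than a black box.
    if amount > 2_000:
        result.score += 300
        result.reasons.append("HIGH_AMOUNT")
    if seconds_since_last_txn < 60:
        result.score += 250
        result.reasons.append("RAPID_SUCCESSION")
    if new_payee:
        result.score += 200
        result.reasons.append("FIRST_PAYMENT_TO_PAYEE")
    if country_mismatch:
        result.score += 150
        result.reasons.append("GEO_MISMATCH")
    result.score = min(result.score, 999)
    return result

txn = score_transaction(amount=2_500, seconds_since_last_txn=20,
                        new_payee=True, country_mismatch=False)
print(txn.score, txn.reasons)
# 750 ['HIGH_AMOUNT', 'RAPID_SUCCESSION', 'FIRST_PAYMENT_TO_PAYEE']
```

Production systems replace the hand-set points with learned models, but the principle stands: a score that arrives with its reasons can be audited, challenged, and trusted.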

Interpretable AI, deployed on an enterprise-class platform, can make a whole range of financial, fraud, and customer experience use cases better. Though the general public seems obsessed with what they imagine a rogue AI might do, smart businesses realize the reality of AI is more like Dr. Theopolis from Buck Rogers (friendly, professorial AI) than the MCP from Tron (evil mastermind AI).

How FICO's Explainable AI Can Unlock Incredible Value for Your Enterprise

For more of my latest thoughts on fraud, financial crime and FICO's entire family of software solutions, follow me on Twitter @FraudBird.