Fair Isaac Corporation

10/25/2021 | News release

Ethical AI: A Breakdown with Dr. Scott Zoldi

There are many data science topics that Dr. Scott Zoldi, FICO's Chief Analytics Officer, is passionate about, but responsible, ethical artificial intelligence (AI) is surely near the top. Scott has blogged extensively and been in the news talking about Ethical AI, one of four critical components of Responsible AI, along with Explainable AI, Robust AI and Auditable AI. He recently drew a worldwide audience with his LinkedIn Live session, "AI and Regulation: Is Your AI Ethical? The Answer May Surprise You." Here's a snapshot of Scott's conversation with Dr. Ganna Pogrebna, Professor of Business Analytics and Data Science at Columbia University, and Lead for Behavioral Data Science at The Alan Turing Institute.

Self-regulation does not work

Research shows that there's no consensus among executives about what a company's responsibilities should be when it comes to AI. In Dr. Pogrebna's opinion, "At the moment, companies decide for themselves as to what is ethical and unethical, which is extremely dangerous. Self-regulation does not work." Dr. Zoldi noted that only 22% of companies recently surveyed by Corinium Global and FICO have an internal ethics board; "What's your opinion on the current state of AI self-regulation?" he asked.

Ganna identified three aspects to the problem:

  • "First, companies want to deliver responsible technology, and not just to be good corporate citizens. Being ethical and responsible with technology is good for business; companies recognize that delivering Ethical AI in their best interest.
  • "But there's too much uncertainty in unleashing AI, there can be unanticipated consequences and risk.
  • "The third problem, as highlighted in the Corinium Responsible AI report, is skillsets. It's very difficult to work with the concept of Responsible AI, and hard to find the skilled people to do so."

Scott echoed, "I've had conversations in which some executives feel that AI applications don't need to be ethical, they just need to be classified as high or low risk. Others struggle with not having the tools to determine what is 'fair enough' and what is biased. There is a lot of striving, of wanting to be ethical, but not as much support for defining what that is."

What about industry self-regulation?

"Since there isn't a consensus on how to measure what is ethical, what do you think about industries coming together to self-regulate and impose standards?" Scott posed, using the cybersecurity industry as an example.

"The core of the problem is the heterogeneity of standards," Ganna replied. "There's ethics and then there's law. Most companies try to comply with law. Achieving Ethical AI would be nice, but self-regulation is complex and what is the motivation?" She referenced nuclear power and airspace as two examples of industries that self-regulate well. "All participants have a common goal, which is to prevent safety incidents. But with AI, it's about data, which is about money. It's not in a company's best interest to share it." She believes there's a need create infrastructure in order to self-regulate AI, a context established in which companies will share information freely, and incentives to do so.

"It's true that companies compete on intellectual property, and AI is part of that advantage," Scott agreed, "so self-regulation might not work." He also noted the rise of AI advocacy groups, driving more awareness of the impacts AI has on consumers' lives. "The risk exposure here is important and should be a Board-level topic," he said. "If a Chief Risk Officer is not tracking AI risk, they should be. This needs to be part of a company's DNA" in order to avoid the backlash from lawsuits, advocacy groups and, eventually, more widespread government regulation.

What about government regulation?

On the subject of government regulation, Ganna said, "Some regulation is due or past due. But with GDPR in Europe and similar laws in California, my research shows that people do not often exercise these rights. Citizens know something about their right to be forgotten and other data privacy rights, but rarely use them.

"I'm a great believer in not only a regulatory approach but also a different approach, from the bottom up: trust," Ganna continued. She foresees that consumers will nominate organizations to "hold the digital keys from their digital lives," such as a bank. "Banks already have a lot of information about us, including our transaction history." This is a practical arrangement that is beneficial for both consumers and industry; "there is a body to negotiate with to get consumer information, and users benefit because an entity is looking out for them, an active participant in the multi-sided market for data," she explains. "But still, we are far from a 'declaration of digital rights.'"

What can companies do?

Still, Scott said, "regulation is inevitable. What are the steps companies can take to get themselves ready to respond to Ethical AI concerns and regulations?"

Not surprisingly, Ganna said there's no easy answer to this important problem. But, "the first thing companies need to realize is that trying to do the right thing is not enough anymore. What they need to be asking when developing a piece of AI technology is, 'Can this technology harm anybody?' This question will lead to a lot of interesting thinking." She cited as an example a mobile app to help people in need find food banks; "The downside is that the app could create a dependency."

Secondly, Ganna believes in the importance of developers and data scientists speaking with others outside of their own organizations. "We have developers sitting in one department and domain experts in another, and they rarely communicate. So, if you are a tech company developing technology, talk with ethicists and people in the arts. This will help you identify the risks. You will get a completely different perspective."

Scott heartily agreed. "Communication across silos is critical. I'm big on establishing standards, but there's too much pressure on data scientists alone to build fair models. Mottos of 'do no harm' don't cut it."

You can watch the full conversation between Scott and Ganna here. And don't forget to follow Dr. Scott Zoldi on LinkedIn and Twitter @ScottZoldi to keep up with his latest thoughts on AI, data science and more.