Artefact SA

04/29/2024 | News release

Keeping the Promise of AI: The Role of Ethics in Developing Responsible, Unbiased Models

The Bridge - Data Leader series

Jean-Marie John-Mathews, PhD, co-founder and scientific director of Giskard AI, and Caroline Goulard, journalist and CEO of the data visualization companies Dataveyes and Modality, discuss the ethical and political challenges of AI development. They explore how categories are constructed in a society, how they politicize issues, and how AI has turned categorization on its head.

Watch the Data Leader video

Listen to the podcast

The role of categories in AI yesterday and today

The arrival of AI and machine learning has challenged traditional ways of categorizing people and content, leading to new ways of thinking about problems and potentially new issues of bias or discrimination.

Traditional AI systems were built on expert knowledge and rules. For example, a bank deciding whether to grant a loan might categorize applicants based on attributes like gender, profession, or other demographic data. But with today's machine learning and deep learning models, there is a move away from rigid categories. Instead of applying categories such as "woman of a given age and socioeconomic background," the focus has shifted to gathering vast amounts of behavioral data, such as banking transactions or browsing history. The idea is that by looking at this granular data, decisions can be made without relying on predefined categories that may reinforce discrimination.
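To make the contrast concrete, here is a minimal sketch of the two approaches. The rules, feature names, and synthetic data are hypothetical illustrations, not an actual credit-scoring system.

```python
# Minimal sketch: rule-based categorization vs. learning from behavioral data.
# All rules, feature names, and data below are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Traditional approach: expert rules over predefined categories ---
def rule_based_decision(applicant: dict) -> bool:
    """Approve a loan based on rigid demographic categories."""
    return applicant["age"] >= 25 and applicant["profession"] in {"engineer", "teacher"}

# --- Machine learning approach: a score learned from granular behavior ---
rng = np.random.default_rng(0)
n = 1_000
# Hypothetical behavioral features: monthly spend, savings rate, overdraft count
X = rng.normal(size=(n, 3))
# Synthetic repayment outcome tied to behavior, not demographic categories
y = (X @ np.array([0.5, 1.0, -0.8]) + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:1]))  # a probability, not a predefined category
```

The learned model outputs a continuous score per individual rather than assigning a demographic category, which is exactly the shift described above; it also moves the bias question into the training data itself.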

This shift is not just a technical one: it also raises philosophical questions about how categories are constructed, how they impact society, and the ethical considerations in using them within AI systems.

The AI Act: Personal data and responsibility

The current landscape of AI legislation, exemplified by the AI Act, reveals a fundamental tension between traditional legislative categories and the evolving nature of technical tools. The AI Act regulates AI systems based on usage rather than sorting them into rigid categories, recognizing that the risks and implications of AI are closely tied to the context of use. For instance, the Act introduces the notion of general-purpose AI systems capable of serving many different applications. The challenge is to estimate their risk, but how can the risk of something be assessed when its uses aren't yet clearly defined?

The shift from traditional machine learning to more advanced generative AI further blurs the lines between usage and responsibility. For instance, a retail chatbot built on generative AI may draw on internal tools and product information to offer recommendations or answer FAQs. But it could also face misuse, such as offensive inquiries aimed at eliciting toxic responses for social media posts. This raises critical questions for businesses: where does the responsibility for managing these risks lie? Is it solely the attacker's responsibility if they exploit the system, or does the onus also fall on the creators and deployers of the AI?
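One concrete mitigation available to deployers is a guardrail layer in front of the model. The sketch below assumes a hypothetical generate_answer function standing in for the actual generative model and tool chain; the denylist is a deliberately naive illustration, not a production-grade moderation approach.

```python
# Minimal sketch of a deployer-side guardrail; `generate_answer` and the
# denylist are hypothetical stand-ins, not a real moderation system.
BLOCKED_TERMS = {"example_slur", "example_toxic_phrase"}  # hypothetical denylist

def generate_answer(prompt: str) -> str:
    """Placeholder for the call into the actual generative model and tools."""
    return f"Here are some product suggestions for: {prompt}"

def safe_chatbot_reply(user_message: str) -> str:
    """Refuse obviously abusive inputs before they ever reach the model."""
    lowered = user_message.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that request."
    return generate_answer(user_message)

print(safe_chatbot_reply("Which running shoes do you recommend?"))
```

Even a filter this simple illustrates the responsibility question: the deployer who adds, or omits, such a layer shares in the outcome, whatever the attacker's intent.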

As the number of players within the AI ecosystem increases, responsibility becomes increasingly dispersed. It's not just the deployer of the AI system who may be accountable; it may also be the entity that produces the foundational model. When incidents occur, determining who bears the brunt of responsibility becomes a complex task, especially when the players are scattered across different organizations and even geographical locations.

AI regulation and ethical business applications

Frameworks like the AI Act must address the tension between conventional regulatory paradigms and today's dynamic AI landscape. Adaptive regulatory strategies are essential to accommodate the multifaceted nature of AI systems and their implications.

Companies can draw on these lessons about categorization to refine customer segmentation strategies, tailor marketing campaigns, and personalize customer experiences. Moreover, by incorporating ethical considerations into AI development processes, businesses can mitigate risks of bias and discrimination, fostering trust and transparency with stakeholders.
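As a small illustration of behavior-based segmentation, here is a sketch using k-means clustering on synthetic data; the feature names and numbers are hypothetical.

```python
# Minimal sketch of behavior-based customer segmentation with k-means.
# Feature names and synthetic data are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical behavioral features per customer:
# [purchases per month, average basket size, days since last visit]
customers = rng.normal(loc=[5, 40, 30], scale=[2, 15, 10], size=(200, 3))

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print(np.bincount(segments))  # size of each learned segment
```

Unlike demographic categories, these segments emerge from behavior, so they should still be audited for proxy bias, in line with the ethical considerations above.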

The importance of categories lies in their role as our social framework for understanding and evaluating the world. Finely tuned categories are essential for representing diverse voices, especially those affected by AI biases. Engaging a wide range of stakeholders, including AI users and affected individuals, is critical.

"Categories act as a lens through which we can evaluate and address the nuances and potential biases in AI systems. They're crucial for providing a common language and understanding among stakeholders, and ensuring that AI continues to be developed and deployed responsibly and ethically."
Jean-Marie John-Mathews

Visit thebridge.artefact.com, the media platform that democratizes data & AI knowledge through videos & podcasts.