04/30/2024 | News release | Distributed by Public on 04/30/2024 11:56
The AI hype cycle has moved faster than any other in human history. From the introduction of ChatGPT and the proliferation of AI applications over the last year to investor reticence over heavy GPU costs, the pace has been unprecedented. In addition, discussions on ethics have led to agreements like the recent AI Act and the Bletchley Declaration.
But what is 'responsible AI'? How can developers ensure that the concept, development, and ongoing operation of their AI apps are handled responsibly? These questions pose something of a rabbit hole for most startups and founders, who may even begin to question what the very idea of 'responsibility' means!
This isn't simply a philosophical question: the AI Act in Europe can currently enforce penalties of up to €35m for violations. Other regions are sure to follow, so startups should be considering this from the outset. Thankfully, several industry heavyweights have already made their views clear on what 'responsible AI' should look like.
How to create responsible AI in practice
There's lots of guidance to follow when it comes to responsible AI. Three of the key institutions in this area are the World Economic Forum (WEF), the UK's Alan Turing Institute, and the US's National Institute of Standards and Technology (NIST), all of which broadly agree on the most important points. An ISO standard (ISO/IEC 42001) was even published in 2023, specifying requirements that AI providers and users should follow to ensure responsible development and use of AI systems. For clarity, our recommendations follow the Alan Turing Institute's easy 'FAST' framework (Fair, Accountable, Sustainable and Transparent) and also contain content from all three organisations.
Fair
Startups and scaleups should always ensure that their AI systems are fair. This means avoiding both human bias and unfair harm.
From a process perspective, this means:
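As a concrete illustration of what checking for fairness can look like in code, here is a minimal sketch of a demographic parity check — the gap in positive-outcome rates a model produces for two groups. The function names, data, and groups are all hypothetical; real fairness audits cover many more metrics and contexts.

```python
# Minimal sketch of one fairness check: demographic parity.
# All data and names below are hypothetical illustrations.

def positive_rate(outcomes):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = approved, 0 = rejected) per group
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 = 0.625 positive rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375 positive rate

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.250
```

A startup might run a check like this on every model release and flag any gap above an agreed threshold for human review — the threshold itself being a policy decision, not a technical one.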
Accountable
As well as being fair and minimising harm, AI systems must be auditable so that they're trustworthy. The three organisations have the following advice for startups:
Transparency and more
There are a few other challenges for growing companies that the global organisations have highlighted, from ensuring good transparency to broader social issues. These include:
As AI gains pace and systems become increasingly sophisticated, it's more important than ever that we're mindful of creating responsible AI systems that embody the principles of fairness, accountability, sustainability and transparency. This often seems far from straightforward, but by embracing the simple FAST principles, startup leaders can make sure that they're on the right path to better, fairer, and ultimately more responsible AI.
To read the reports in-depth, you can download the NIST AI Framework, the WEF Presidio Recommendations, the Alan Turing Institute's guide to understanding AI ethics and safety, and the ISO/IEC 42001 standard.
Start-Up Program Manager
UK & North Europe Cluster
OVHcloud