The National Academies


The Human Implications of AI with Brian Christian


Feature Story | April 1, 2024

By Olivia Hamilton

Brian Christian is an acclaimed author and researcher whose work explores the human implications of computer science. He is best known for his bestselling series of books: The Most Human Human, Algorithms to Live By, and The Alignment Problem, which was recently named among the five best books about artificial intelligence by The New York Times. He is also a 2022 recipient of the National Academies' Eric and Wendy Schmidt Awards for Excellence in Science Communications, which honor exceptional science communicators, science journalists, and research scientists who have developed creative, original work to communicate issues and advances in science, engineering, or medicine for the general public.

With ongoing advances in artificial intelligence and machine learning, Christian answered some questions about the state of AI, how it could affect writing and storytelling, and whether alignment is possible.

Brian Christian

We last spoke in March 2023, and a lot has happened since then. How has your perspective on AI shifted or grown? Are you more hopeful, less hopeful, more worried, etc. about the future of AI?

Christian: It's true that the pace of technical development has been relentless, most recently with OpenAI's Sora, which is able to generate amazingly lifelike videos from a text description. At the same time, governments in particular are waking up to the importance of AI policy, and we are seeing the encouraging emergence of groups like the U.K. AI Safety Institute and the U.S. AI Safety Institute. It's clear that managing the development and effects of this technology will be one of the key projects of our time.

Your book The Alignment Problem examines the disconnect between intention and results when it comes to machine learning. Do you think alignment is still possible?

Christian: Techniques like RLHF (reinforcement learning from human feedback) and Constitutional AI, which have been used to align today's models like ChatGPT and Claude, have proven more successful than I would have imagined even a couple of years ago. We don't know how well they will scale up to more capable models, so more work remains. Along with those technical challenges, we of course also have one of the oldest normative questions in human civilization: Whose values are the ones being aligned to? This fundamentally political dimension of AI is inextricable.

As an author, what worries you about AI, and what benefits do you think it can bring to your craft?

Christian: As an author, it would certainly be disruptive if language models like ChatGPT drive the value of being good with words down to zero. On the other hand, a world in which more and more text comes from a single default voice, a sort of "consensus" prose style, is likely to be a world in which unique, personal, artistic expression actually comes to be more valuable.

How can people working in science, engineering, and medicine use your book to shape how they think about AI and tell stories about it?

Christian: There are a lot of professions in which AI was never part of one's professional training, and yet it is suddenly becoming a vital everyday part of the job. Consider a judge who's been on the bench for decades and is now almost overnight having to use risk-assessment scores, or a doctor making sense of new diagnostic models that are incredible but unreliable, or a policymaker suddenly having to grapple with extremely technical issues. I hope The Alignment Problem can give people in these positions a kind of machine-learning 101 - that it can equip them with the foundational concepts, vocabulary, and intuition needed to give them a sense of both clarity and empowerment as they manage this technology in their roles going forward.

"I believe that a broad literacy with how AI systems work, and what their limits are, is vital to the health of our society. Many professionals without training in computer science - from lawyers to doctors to judges - are finding themselves quite suddenly thrust into a position where interacting with AI systems is a central part of their job. Our democracy is going to need to make some consequential choices about how to regulate this technology, and so everyone from policymakers to civil society organizations to the voting public will benefit from a deeper and sharper understanding of how this technology really works." - Brian Christian
