University of California

03/28/2024 | News release

‘The UC Effect’: Shaping the future of AI

"The future of AI has not really been written yet. The question is, what are the drivers that will push us in one direction or another? How much control do we have over where AI takes us?" - UC Provost Dr. Katherine Newman

From science, engineering and enterprise to medicine, warfare and the environment, artificial intelligence is starting to infiltrate virtually every aspect of society, including higher education. With its ability to bolster and expand human capability, AI holds the promise of improving health care, taking the drudgery out of work, helping find solutions to climate change, and so much more. Yet if not designed and deployed correctly, it might also destroy jobs, increase inequity and endanger democracy on a global scale. And with the technology expanding at an astounding pace, the stakes could not be higher.

So, what's UC's role in the brave new world of AI? A gathering in late February brought together minds from across the University of California's 10 campuses, three national labs and six academic health centers to consider what role the university can play in shaping AI in service of the public good. Convened by UC Provost and Executive Vice President of Academic Affairs Katherine Newman and Chief Information Officer Van Williams, the first-of-its-kind Academic Congress on Artificial Intelligence cut across disciplines to explore the perils and promise of AI.

AI is nothing new to UC. In fact, UC researchers and inventors have been working on AI since the 1960s. In 2021, UC became the first university in the nation to develop and adopt a set of recommendations to guide the ethical development and implementation of AI across the university, along with guidance on how to operationalize the principles in areas that pose the highest risk to individual rights, such as health services, human resources, admissions and grading.

"AI can potentially unravel complex systems and amplify our capacity to better understand interrelated phenomena around some of the world's greatest challenges." - Dr. Theresa Maldonado, UC Office of the President

ChatGPT is perhaps the most widely visible example of the rapid adoption of AI - more than 100 million users signed up in its first two months - offering vivid examples of how it might change our daily tasks, work and education. But despite the hype - like the fervor over students "prompt engineering" their college admissions essays - generative AI systems like ChatGPT and Midjourney are just the tip of the technological iceberg. AI is already at work all around us, reading medical imaging and driving cars, detecting bank fraud, powering chatbots and helping you select your next Netflix pick.

Dr. Theresa Maldonado, who oversees UC research and innovation, reflected on the bigger picture of what's possible. "When we discuss research frontiers, we focus on the rich possibilities of AI, how AI can amplify the human potential rather than substitute it. How AI can be used to unravel complex systems such as the earth's terrestrial, ocean and atmospheric ecosystems to better understand interrelated phenomena caused by climate change and to explore meaningful approaches to mitigation and adaptation."

Learn about some of UC's recent AI innovations

But beneath the buzz, there has long been concern. At the AI Congress, questions of bias, ethics and equity quickly rose to the fore. Who gets to take advantage of the new AI tools, and who is left behind? Who profits and who suffers? Who controls the vast sets of information that undergird AI, and what happens when AI starts to spin out its own erroneous "data"? The concerns spill into data security, research integrity, privacy, copyright and misinformation; into threats to democracy and global security; into machines run amok or deployed in the hands of malefactors; and into thousands of unintended consequences.

"Right now AI is writing its own story," said Dr. İlkay Altıntaş, a research scientist at UC San Diego and chief data science officer at the San Diego Supercomputer Center, who spoke at the Congress. "How do we make sure we write the story of AI? How do we make sure there is positive impact at a societal scale?"

Ramesh Srinivasan (left), a professor of information studies and design media arts at UCLA, speaks at the Academic Congress on Artificial Intelligence, while Katherine Newman (center-right), the UC provost and executive vice president of academic affairs, and Daron Acemoglu (right), an economics professor at the Massachusetts Institute of Technology and the Congress keynote speaker, look on. Photo: Nicolas Greamo/Daily Bruin

Daron Acemoglu, Institute Professor of economics at MIT and the Congress keynote speaker, noted that the impact of technological innovation is sensitive to the political economy of its surroundings and to the intentions or constraints that motivate the leaders of firms. The first hundred years of the Industrial Revolution, he argued, brought massive increases in productivity accompanied by grinding factory conditions and exploitative child labor. It took a labor movement to push back and secure decent working conditions alongside the economic growth. Acemoglu asserted that AI has tremendous potential, but there are worrying signs that it may be harnessed toward outcomes that are job-destroying rather than job-enhancing.

"The question is not 'machine good' or 'machine bad.' It is, 'How can we ensure these machines serve humanity?'" - Dr. Ramesh Srinivasan, professor of information studies and design/media arts, UCLA

The recurring question at the Congress might be summarized this way: AI will change our lives, but how do we make sure it changes them for the better? How can we as a society develop trustworthy AI that is deployed in the service of the common good?

AI research is dominated by big tech at present. But constrained by profit motives and short-term gain, the private sector isn't equipped to lead in developing trustworthy AI. Government can set regulatory guardrails but doesn't have the technical capacity to develop the technology. Public universities, on the other hand, have the expertise, the long-range vision and the commitment to social benefit that position them to lead in this space. Universities can tackle AI research questions that aren't being taken up by industry, along with developing protocols and policies to ensure AI's ethical use.

Provost Newman, whose research has focused on the sociology of labor movements, noted that European universities were completely unprepared for earlier technological innovations: "It just happened and fell on them. This time, we in academia and at the University of California have an extraordinarily important role to play in determining how this will unfold."

"I see the University of California as the University of Collaboration, and that's how we need to lead the world, by bringing us together." - Dr. Ashish Atreja, CIO and chief digital health officer, UC Davis Health

While each UC campus by itself is a dynamo for innovation, UC as a system is a singular global powerhouse. UC's combined research and scholarly resources represent a concentration of thought leadership, ingenuity, policy influence and scale as an economic engine for California that could give it outsized influence in shaping AI for the public good.

"We know about the California Effect, where we set higher emission standards that reshaped the automobile industry across the U.S. and internationally," remarked Dr. Brandie Nonnecke, director of the CITRIS Policy Lab at UC Berkeley and former co-chair of UC's AI Council. "We should also think about 'the UC Effect,' because we have a lot of purchasing power."

Congress attendees agreed that leadership at the pace and scale of AI will require significant investment in the computing power needed to scale up research across the UC system. It will demand the kind of interdisciplinary collaboration that is already a hallmark of UC. And it will require the university to work collaboratively with government and industry, including through innovative public-private partnerships.

Provost Newman summed up the sentiment in the room: "When you put together all of the extraordinary talent at the University of California, you simply cannot beat it. Our obligation is to make use of that incredible human capital for the betterment of our whole society and the world around us. This technology isn't going away. It can't be put back in the bottle. It's up to us to figure out how to lead the way to make it the most productive, equity-enhancing technology we have on the planet."

Katherine S. Newman, University of California provost and executive vice president of academic affairs. Photo: Danh Nguyen and Andrew Kubica/The Boston Headshot

What's next?

Eagerness to explore the possibilities of AI is growing rapidly among UC innovators of all stripes. In step with that demand, UC's AI Council is working to put the university's AI Principles into practice with a set of parameters that can be used to facilitate the responsible use of AI across UC's many campuses and enterprises.

A training and awareness program is set to roll out first, in mid-2024: A new website will be a source for all things AI, and training modules will allow faculty and administrative staff to develop a shared baseline knowledge of the opportunities and risks of various tools and technologies.

Slated for a fall debut, risk management tools and procedures currently in development include a framework that campus decision-makers can use to assess risk and evaluate AI services at the procurement stage, before they are brought in. Before the methodologies are finalized, they'll be stress-tested on AI tools that are already in the system. Case studies on how to apply ethical considerations in AI procurement will give concrete illustrations of how to fold in risk management before an AI product is adopted.

"I'm excited about the concrete benefits our work will bring to the entire UC system," says AI Council Co-Chair Alexander Bustamante. "By focusing on training programs, risk assessments and transparent documentation, we're empowering faculty, staff and students to leverage the power of AI responsibly. This will not only mitigate risks but also solidify UC's position as a frontrunner in responsible AI implementation."