Northwestern University

AI expert available: Google’s AI-integrated search

'Google is essentially turning the entire world into beta testers for its products'

Media Information

  • Release Date: May 15, 2024

Media Contacts

Amanda Morris

EVANSTON, Ill. - Yesterday, Google unveiled plans to integrate its search engine with artificial intelligence (AI). Kristian Hammond, an AI expert at Northwestern University, says it's a great idea, but one that needs further validation.

Hammond is available to explain how large language models work, to discuss the problems with Google's new AI-integrated search engine and to comment on how he expects the revamped search will influence other tech companies. He can be reached directly at [email protected].

Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern's McCormick School of Engineering, director of the Center for Advancing Safety of Machine Intelligence and director of the Master of Science in Artificial Intelligence program. An AI pioneer, he also cofounded the tech startup Narrative Science, a platform that used AI to turn big data into prose. Narrative Science was acquired by Salesforce in late 2021.

Comments from Professor Hammond on readiness:

"Integrating AI with search is a stunningly great idea, but it's not ready. Given that it's not ready, Google is essentially turning the entire world into beta testers for its products. Search is at the core of how we use the Internet on a daily basis, and now this new integrated search is being foisted upon the world. Running too fast might be bad for the products, bad for use and bad for people in general.

"In terms of the technology at the core of the model, it has not yet reached a point where we can definitively say that there are enough guardrails on the language models to stop them from telling lies. That still has not been tested enough or verified enough. The search will block users from content or give users content without allowing them to make decisions about what is a more authoritative or less authoritative source."

On blocking content:

"With language models like Gemini and ChatGPT, developers have put a lot of work into excluding or limiting the amount of dangerous, offensive or inappropriate content. They block content if they feel it might be objectionable. Without us knowing the decision-making process behind labeling content as appropriate or inappropriate, we won't know what is being blocked or being allowed. That, in itself, is dangerous."

On content creators:

"The new search will provide information from other websites without leading users to those sites. Users will not visit the source sites, which provide the information and allow their content to be used. Without traffic, these sites will be threatened. People, who provide the content that is training the models, will not gain anything."

On competing companies:

"We're in the midst of a feature war. Tech companies like Google are integrating new features that are not massive innovations. It's not that technology is moving too fast; it's the features that are being hooked onto these technologies that are moving fast. When a new feature comes along, we get distracted until the next feature is released. It's a bunch of different companies slamming their features against each other. It ends up being a battle among tech companies, and we are the test beds. There is no moment where we can pause and actually assess these products."
