ServiceNow Inc.

12/06/2021 | Press release

The man in search of human-level AI

As a growing number of enterprises incorporate artificial intelligence into their operations, leading AI researchers are focused on the holy grail of computing: creating a machine capable of human-level ability and consciousness. Despite rapid advances in recent years, deep learning, like every other approach to AI, still has a long way to go before it understands our world better than, say, a toddler does.

Yoshua Bengio, one of the world's leading AI researchers, is working to improve both the reliability of the technology and how humans interact with it, exploring a path that could address both aspects. A winner of the A.M. Turing Award, considered the Nobel Prize of computing, Bengio, a professor at the University of Montréal, sat down with Workflow to discuss his progress, his interests in eliminating potential biases in the technology, and why he believes humans, not AI, present the greater danger to society.

What appears below has been edited for length and clarity.

Q
How would you describe the purpose of your research to senior executives? What's your elevator pitch?
A

AI has made amazing progress in the last couple of decades, but we're still very far from human intelligence in ways that matter for business. These systems can make mistakes that no human, in some cases no 2-year-old, would make. So we need to better understand that gap, and that's what I'm trying to do: design a new generation of AI systems that can bridge it.

Q
To senior executives focused on building resilient companies, making customers happy, and meeting quarterly earnings, why is your work important?
A

We're starting to use strategies that human brains use to improve what we call robustness. Robustness means you want your system to fail less often, especially when the data changes but the underlying causal nature of the environment hasn't changed.

For example, you want your 11th customer to have the same good experience as the 10 customers whose data you trained on did; the sketch below illustrates what happens when a system instead latches onto patterns that don't hold up.
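Here is a minimal sketch of that failure mode, with toy data invented for illustration (none of it from the interview): a model leans on a correlation that happened to hold in the training data, and its accuracy collapses when that correlation breaks, even though the underlying causal law never changed.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Training data: y is caused by x1 alone, but x2 happens to track y closely.
    x1 = rng.normal(size=n)
    y_train = 2.0 * x1 + 0.1 * rng.normal(size=n)
    x2 = y_train + 0.1 * rng.normal(size=n)      # spurious correlate of y
    X_train = np.column_stack([x1, x2])

    model = LinearRegression().fit(X_train, y_train)

    # Deployment data: same causal law (y = 2 * x1), but x2 no longer tracks y.
    x1_new = rng.normal(size=n)
    y_new = 2.0 * x1_new + 0.1 * rng.normal(size=n)
    x2_new = rng.normal(size=n)                  # the shortcut correlation broke
    X_new = np.column_stack([x1_new, x2_new])

    print("training R^2:  ", model.score(X_train, y_train))  # near 1.0
    print("deployment R^2:", model.score(X_new, y_new))      # far worse

A model that relied only on the causal input x1 would have kept working; the one that exploited the shortcut did not.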

The other thing our work aims to improve is the interaction between humans and machines. We want a system that interacts with a human to be able to explain what it's doing in a way that's easy for the human to make sense of and accept.

So if you want to have an interaction between automated systems and humans that solves problems, you have to change the nature of this machine learning system from a pure "black box" to something that is more structured, in a way similar to how humans consciously conceive of and communicate things.

Q
Such as intuition?
A

Humans can make decisions that involve intuition, which is kind of like a black box, but we also can break down those decisions into some high-level reasoning that we can explain and that other humans could understand.

Whether you're a doctor or you're a businessperson, you would like to get a sense of why the AI is making or recommending those decisions. But that ability is still out of reach for the current state of machine learning.

Q
Fair to say you're trying to understand the nature of consciousness, in order to find a way to train a machine to gain it?
A

You have to be a little bit careful with the word consciousness, because it means different things to different people, even in science; a clear and generally accepted definition does not exist because science does not yet sufficiently understand it. The word also has emotional and religious connotations, and it's easy to get lost in philosophical debates that may be fun but could also be controversial.

To put things in perspective: in the 19th century, people thought of life the way many think of consciousness right now, as something very mysterious that probably involves a God-like origin. We know now that life is a bunch of molecules and chemical reactions. We've boiled it down to things we mostly understand; there are still plenty of details we don't, but we get the big picture.

This is the direction we're going for consciousness. It might take 20 years before we figure it out. But it's still an inspiration, and there's real science going on around it.

Q
Hyperbole about a robot apocalypse aside, should we be worried about creating a world in which machines and AI control systems without human oversight?
A

Before we get to machines that have this kind of power, there will be machines with enough power to be misused by other human beings. And that's what prevents me from sleeping at night, because we know humans can be crazy, or be persuaded to do crazy things. We've seen that a lot recently, and social networks sometimes make it worse.

They are able to do things that damage our society now, but that damage is limited because they don't have such powerful weapons in their hands. But if they get access to super powerful computers that can become weapons, then I'm scared.

Q
Do you feel like a young Robert Oppenheimer who's potentially creating a system that may be used for destructive purposes? Do you have an ethical obligation to develop AI in a certain way?
A

You know, there's a real connection to Oppenheimer. In the 1940s and 1950s, physicists became engaged in discussions about global peace and the dangers posed by nuclear weapons. And for good reason, because they understood that this was really dangerous stuff. Nuclear technology could be very useful, but it could also be dangerous. So I think AI right now is the new physics.

I absolutely have an ethical obligation to think about these questions, to discuss them with my peers and with the rest of society, and to work with governments, who have to put the rules in place and favor the emergence of sufficient collective wisdom, to make sure we don't eventually shoot ourselves fatally with the toys we're building.

Any scientist should ask themselves if their work will eventually be used for bad. But the problem is any knowledge can be misused. So should we stop doing science?

Q
Are you concerned about governments passing laws that impact AI research?
A

I don't think the laws being designed around AI will have much negative impact on most AI research. The legislation is going to be mostly about how AI is deployed: for what applications, for what purpose, and in what social contexts there could be consequences to avoid.

Q
One criticism about AI is that the underlying algorithms and data sets often contain biases, which lead to unfair outcomes. How can we counter that?
A

For these systems to be able to reason like humans, they also need to understand causality to some extent. I'm hoping we can build AI that proposes decisions based not just on correlation but on estimated causation.


When we use a system to make a decision, it could be completely wrong because we are assuming the input causes the output, when it might be the other way around, or there might be a third variable confounding these conclusions. The kinds of machine learning I'm talking about require interventions: acting on the world and seeing the consequences.
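To make the confounding problem concrete, here is a minimal sketch with an invented toy model (not from the interview): a hidden factor drives both the input and the output, so they are strongly correlated in observational data, yet an intervention on the input reveals it has no causal effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hidden confounder z drives both the observed input x and the outcome y;
    # x has no causal effect on y at all.
    z = rng.normal(size=n)
    x = z + 0.1 * rng.normal(size=n)
    y = z + 0.1 * rng.normal(size=n)
    print("observational corr(x, y):", np.corrcoef(x, y)[0, 1])        # ~0.99

    # Intervention do(x): we set x ourselves, severing its link to z.
    x_do = rng.normal(size=n)
    y_do = z + 0.1 * rng.normal(size=n)   # y is untouched by the intervention
    print("interventional corr(x, y):", np.corrcoef(x_do, y_do)[0, 1]) # ~0.0

A learner trained only on the observational data would predict y from x very well, and still be completely wrong about what happens when we act on x.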

Q
You care deeply about climate change. How will AI help address this issue?
A

There are already a lot of research groups using current deep learning systems to help fight this problem on many fronts. Think about more efficient power grids, for example.

One thing I'm involved in is materials discovery, such as new batteries or carbon capture. If we could do much cheaper carbon capture, it would be a game changer. As it turns out, these discovery problems involve many iterations of making decisions, obtaining observations, and retraining the learner. Humans experiment with different things to see what works and what doesn't. AI can help us search the huge space of possible molecules for these technologies much more efficiently, as the sketch below illustrates.
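The loop described here, deciding, observing, and retraining, is essentially active learning. Below is a minimal sketch under invented assumptions: random feature vectors stand in for candidate molecules, and a toy scoring function stands in for a real lab experiment.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: 5,000 candidate "molecules" as feature vectors,
    # and a hidden lab experiment that scores one candidate at a time.
    candidates = rng.normal(size=(5_000, 8))
    def run_experiment(x):
        return -np.sum((x - 0.5) ** 2)    # toy objective, unknown to the learner

    # Seed the loop with a few random experiments.
    tried = list(rng.choice(len(candidates), size=10, replace=False))
    scores = [run_experiment(candidates[i]) for i in tried]

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    for _ in range(20):                   # decide -> observe -> retrain
        model.fit(candidates[tried], scores)
        preds = model.predict(candidates)
        preds[tried] = -np.inf            # never repeat an experiment
        best = int(np.argmax(preds))      # pick the most promising candidate
        tried.append(best)
        scores.append(run_experiment(candidates[best]))

    print("best score found:", max(scores))

Each pass through the loop spends one "experiment" where the current model expects the most promising result, which is far cheaper than scoring the whole space.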

Another example is climate modeling. There's huge potential there. What makes climate models trustworthy is that they are based on laws of physics that are supposed to be the same everywhere and are not going to change. Climate scientists are starting to use machine learning to develop models that are simpler, easier to understand, and much cheaper to run than physical models.

If I give you a climate model that involves a billion parameters, it will be hard to explain to politicians why they should put billions of dollars behind the research. Whereas if I can boil climate change down to a few equations, most people will trust it more.

Q
What accomplishments are you most eager to realize now?
A

First of all, I don't want to feel that kind of pressure. After all, I won a Turing Award, so what more could I ask for? I'm having fun thinking about complex and appealing new directions.

That being said, I'm also thinking about society. The only way that we are going to make it through these challenges is by having stronger coordination. As a species, we can't afford to continue doing things on our own without coordinating sufficiently with others.

As you know, we already are very organized; that's why we have societies and governments. But that's apparently not sufficient. We need to become collectively and individually wiser and much more empathetic. If we don't, the whole thing might just explode in our face. A crazy country could get hold of very powerful technologies and destroy us all.

So how do we avoid that? Well, we need to make sure every human being, 20 years from now, is well fed, well educated, and in good health, physically and mentally, with strong capacities for empathy, reasoning, and critical thinking. It's a tall order, but I don't see any other solution.