ENSPIRING.ai: Godfather of AI on AI exceeding human intelligence and it trying to take over

The video discusses the potential of artificial intelligence (AI) to surpass human intelligence and the existential risks that might follow. Geoffrey Hinton expresses concern over AI's rapid development, which could see it exceed the mental capacities of humans within 5 to 20 years. He cautions against the dangers of AI making autonomous decisions, especially in military applications, and emphasizes the need for international regulations similar to the agreements on chemical weapons.

Experts, including Hinton, are divided on whether AI will remain subservient or eventually take control, which makes the situation unpredictable. While some countries have begun to recognize these risks and discuss regulation, stringent measures are still lacking, particularly for military uses. Hinton warns that AI-driven job displacement could exacerbate economic inequality, and he advocates solutions such as universal basic income to address these societal shifts.

Main takeaways from the video:

💡
AI is advancing rapidly, likely surpassing human intelligence in the near future, thus posing a significant control challenge.
💡
There is a need for global agreements to regulate AI usage, especially in potentially lethal applications.
💡
The societal impact of AI will be profound, necessitating political and economic reforms such as the adoption of universal basic income.

Key Vocabularies and Common Phrases:

1. existential [ˌɛɡzɪˈstɛnʃəl] - (adjective) - Relating to existence, especially human existence. - Synonyms: (relating to existence, ontological)

So, in particular, they're beginning to take the existential threat seriously, that these things will get smarter than us, and we have to worry about whether they'll want to take control away from us.

2. autonomous [ɔːˈtɒnəməs] - (adjective) - Having the freedom to govern itself or control its own affairs. - Synonyms: (independent, self-governing, self-determining)

What I'm most concerned about is when these things can autonomously make the decision to kill people.

3. subservient [səbˈsɜːviənt] - (adjective) - Prepared to obey others unquestioningly. - Synonyms: (obedient, compliant, docile)

There's a few experts, like my friend Yann LeCun, who think it'll be no problem: we'll give them the goals, they'll do what we say, they'll be subservient to us.

4. reciprocity [ˌrɛsɪˈprɒsɪti] - (noun) - The practice of exchanging things with others for mutual benefit. - Synonyms: (exchange, mutuality, interchange)

They could share what they learned efficiently, and then we'd all have 10,000 degrees; we'd know a lot then.

5. inequality [ˌɪnɪˈkwɒlɪti] - (noun) - Difference in size, degree, circumstances, etc.; lack of equality. - Synonyms: (disparity, imbalance, unevenness)

We may need to rethink the politics of, I don't know, the benefit system, inequality, universal basic income.

6. rogue [rəʊɡ] - (adjective) - Behaving in ways that are not expected or not normal, often in a dangerous way. - Synonyms: (uncontrolled, erratic, unpredictable)

That's a quite separate risk from the risk that the AI itself will go rogue and try and take over.

7. prospect [ˈprɒspɛkt] - (noun) - The possibility or likelihood of some future event occurring. - Synonyms: (expectation, likelihood, anticipation)

There's other experts who think absolutely they'll take control.

8. competent [ˈkɒmpɪtənt] - (adjective) - Having the necessary ability, knowledge, or skill to do something successfully. - Synonyms: (capable, proficient, adept)

So, playing with the large chatbots, particularly one at Google before GPT-4, but also with GPT-4, they're clearly very competent.

9. repercussion [ˌriːpəˈkʌʃən] - (noun) - An unintended consequence occurring some time after an event or action, especially an unwelcome one. - Synonyms: (consequence, backlash, outcome)

Could they not be contained in certain areas?

10. implement [ˈɪmplɪmɛnt] - (verb) - To put (a decision, plan, agreement, etc.) into effect. - Synonyms: (execute, apply, carry out)

We may need to rethink the politics of, I don't know, the benefit system, inequality, universal basic income.

Godfather of AI on AI exceeding human intelligence and it trying to take over

Almost everybody I know who's an expert on AI believes that they will exceed human intelligence. It's just a question of when. Between five and 20 years from now, there's a probability of about a half that we'll have to confront the problem of them trying to take over.

I began by asking Geoffrey Hinton whether he thought the world was getting to grips with this issue, or whether he is still concerned. As ever, I'm still as concerned as I have been, but I'm very pleased that the world's beginning to take it seriously. So, in particular, they're beginning to take the existential threat seriously, that these things will get smarter than us, and we have to worry about whether they'll want to take control away from us. That's something we should think seriously about, and people now take that seriously. A few years ago, they thought it was just science fiction. And from your perspective, having worked at the top of this, having developed some of the theories underpinning all of this explosion in AI that we're seeing, that existential threat is real?

Yes. So some people think these things don't really understand. They're very different from us. They're just using some statistical tricks. That's not the case. These big language models, for example, the early ones, were developed as a theory of how the brain understands language. They're the best theory we've currently got of how the brain understands language. We don't understand either how they work or how the brain works in detail, but we think probably they work in fairly similar ways.

What is it that's triggered your concern? It's been a combination of two things. So, playing with the large chatbots, particularly one at Google before GPT-4, but also with GPT-4, they're clearly very competent. They clearly understand a lot. They have a lot more knowledge than any person. They're like a not very good expert at more or less everything.

So that was one worry, and the second was coming to understand the way in which they are a superior form of intelligence: you can make many copies of the same neural network, each copy can look at a different bit of data, and then they can all share what they learned. So, imagine if we had 10,000 people. They could all go off and do a degree in something. They could share what they learned efficiently, and then we'd all have 10,000 degrees; we'd know a lot then. We can't share knowledge nearly as efficiently as different copies of the same neural network can.
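The sharing Hinton describes is, in spirit, what data-parallel training does. The sketch below is my own illustration, not from the interview: a toy linear model where several copies of the same weights each compute a gradient on a different batch of data, then pool what they learned by averaging.

```python
import numpy as np

# Minimal sketch of Hinton's point, assuming plain gradient averaging
# (as in data-parallel training); all names here are illustrative.
rng = np.random.default_rng(0)
n_copies, n_features = 4, 3
true_w = np.array([1.0, -2.0, 0.5])     # hidden relationship the copies try to learn
weights = rng.normal(size=n_features)   # one shared model, replicated to each copy

def gradient(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = []
    for _ in range(n_copies):           # each copy looks at a different batch
        X = rng.normal(size=(32, n_features))
        y = X @ true_w
        grads.append(gradient(weights, X, y))
    # The copies "share what they learned" by averaging their gradients,
    # so each benefits from data it never saw itself.
    weights -= 0.05 * np.mean(grads, axis=0)

print(weights)  # converges toward [1.0, -2.0, 0.5]
```

Because every copy has identical weights, the averaged update transfers knowledge between them losslessly; people exchanging words have no comparable channel, which is the asymmetry Hinton is pointing at.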

Okay, so the key concern here is that it could exceed human intelligence, indeed the mass of human intelligence. Very few of the experts are in doubt about that. Almost everybody I know who's an expert on AI believes that they will exceed human intelligence. It's just a question of when. And at that point, it's really quite difficult to control them? Well, we don't know. We've never dealt with something like this before.

There's a few experts, like my friend Yann LeCun, who think it'll be no problem: we'll give them the goals, they'll do what we say, they'll be subservient to us. There's other experts who think absolutely they'll take control. Given this big spectrum of opinions, I think it's wise to be cautious. I think there's a chance they'll take control, and it's a significant chance. It's not like 1%; it's much more.

Could they not be contained in certain areas? I don't know, scientific research, but not, for example, the armed forces? Maybe. But actually, if you look at all the current legislation, including the European legislation, there's a little clause in all of it that says that none of this applies to military applications. Governments aren't willing to restrict their own uses of it for defence. There's been some evidence, even in current conflicts, of the use of AI in generating thousands and thousands of targets.

Yes. I mean, that's happened since you started warning about AI. Is that the sort of pathway that you're concerned about? I mean, that's the thin end of the wedge. What I'm most concerned about is when these things can autonomously make the decision to kill people.

So, robot soldiers? Yeah, and drones and the like. It may be that we can get something like the Geneva Conventions to regulate them, but I don't think that's going to happen until after very nasty things have happened. And there's an analogy here with the Manhattan Project and with Oppenheimer, which is: if we restrain ourselves from military use in the G7 advanced democracies, what's going on in China? What's going on in Russia?

Yes, it has to be an international agreement. But if you look at chemical weapons, the international agreement on chemical weapons has worked quite well. I mean, do you have any sense of whether the shackles are off in a place like Russia? Putin said some years ago that whoever controls AI controls the world, so I imagine they're working very hard. Fortunately, the West is probably well ahead of them in research. We're probably still slightly ahead of China, but China's putting more resources in. So, in terms of military uses of AI, I think there's going to be a race.

It sounds very theoretical, but if you follow this thread of argument, you really are quite worried about extinction-level events. So we should distinguish these different risks. The risk of using AI for autonomous lethal weapons doesn't depend on AI being smarter than us. That's a quite separate risk from the risk that the AI itself will go rogue and try and take over. I'm worried about both things.

Autonomous weapons are clearly going to come. Whether AI goes rogue and tries to take over is something we may be able to control, or we may not; we don't know. And so at this point, before it's more intelligent than us, we should be putting huge resources into seeing whether we're going to be able to control it.

What sort of society do you see evolving? Which jobs will still be here? Yes, I'm very worried about AI taking over lots of mundane jobs. That should be a good thing: it's going to lead to a big increase in productivity, which leads to a big increase in wealth. If that wealth were equally distributed, that would be great, but it's not going to be in the systems we live in. That wealth is going to go to the rich and not to the people whose jobs get lost, and that's going to be very bad for society, I believe.

So it's going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected. So, to be clear, you think that the societal impacts from the changes in jobs could be so profound that we may need to rethink the politics of, I don't know, the benefit system, inequality, universal basic income? Yes, I certainly believe in universal basic income. I don't think that's enough, though, because a lot of people get their self-respect from the job they do.

If you put everybody on universal basic income, that solves the problem of them starving and not being able to pay the rent, but it doesn't solve the self-respect problem. So you'd want the government to step in? I mean, it's not how we do things in Britain; we tend to stand back and let the economy decide the winners and losers. Yes. Actually, I was consulted by people in Downing Street, and I advised them that universal basic income was a good idea.

And this is... I mean, you said a 10 to 20% risk of them taking over. Are you more certain that this is going to have to be addressed in the next five years, the next parliament? Perhaps the next parliament. My guess is that between five and 20 years from now, there's a probability of about a half that we'll have to confront the problem of them trying to take over.

Are you particularly impressed by the efforts of governments so far to try and rein this in? I'm impressed by the fact that they're beginning to take it seriously. I'm unimpressed by the fact that none of them is willing to regulate military uses. And I'm unimpressed by the fact that most of the regulations have no teeth.

Do you think that the tech companies are letting down their guard on safety because they need to be the winner in this race for AI? I don't know about the tech companies in general. I know quite a lot about Google because I used to work there. Google was very concerned about these issues, and it didn't release the big chatbots; it was concerned about its reputation if they told lies. But as soon as OpenAI went into business with Microsoft and Microsoft put chatbots into Bing, Google had no choice. So I think the competition is going to cause these things to be developed rapidly, and the competition means that they won't put enough effort into safety.

Parents talk to their children and give them advice on the future of the economy, what jobs they should do, what degrees they should do. It seems like that world is being thrown up in the air by what you're describing. What would you advise somebody to study now to surf this wave? I don't know, because it's clear that a lot of mid-level intellectual jobs are going to disappear. If you ask which jobs are safe, my best bet about a job that's safe is plumbing, because these things aren't yet very good at physical manipulation. That'll probably be the last thing they're very good at.

Driving? No, not driving; that's hopeless. I mean, that's been slower than expected. Journalism might last a little bit longer, but I think these things are going to be pretty good journalists quite soon, and probably quite good interviewers, too.

Artificial Intelligence, Technology, Inspiration, Existential Threat, Autonomous Weapons, Universal Basic Income, BBC Newsnight