ENSPIRING.ai: How to avoid catastrophe. The future of life - Nobel Week Dialogue 2022

The video dives into serious global threats and potential doomsday scenarios, while also discussing how such crises might be avoided. The panel features experts from fields including nuclear weapons, climate change, and artificial intelligence. They detail how catastrophes could unfold in their respective domains while emphasizing how these threats interconnect with factors like pandemics and ecological change.

This is an engaging discussion because the experts not only provide insights into various catastrophic scenarios but also highlight the measures humans can take to avert these grim futures. Emphasizing the human-made nature of many threats, they champion society's potential to enact change to control or prevent these risks. The session also touches on the importance of collaboration across nations and disciplines, in contrast to the fragmented approaches often taken to these universal threats.

Main takeaways from the video:

💡 Catastrophes like nuclear warfare or climate change are human-influenced and can be controlled or mitigated.
💡 Interconnectedness of global threats requires cooperative and collaborative solutions beyond national borders.
💡 Importance of awareness and strategic planning in fields like AI development to deter unintended detrimental outcomes.

Key Vocabularies and Common Phrases:

1. catastrophe [kəˈtæs.trə.fi] - (noun) - An event causing great and usually sudden damage or suffering; a disaster. - Synonyms: (disaster, calamity, cataclysm)

So it's Friday afternoon, and we're going to talk about catastrophes and dooms

2. misalignment [ˌmɪs.əˈlaɪn.mənt] - (noun) - The incorrect arrangement or positioning of something in relation to something else. - Synonyms: (imbalance, misdirection, disorganization)

And so if you have misalignment, if the AI system doesn't understand something about what humans want the world to be like, and they're pursuing this misaligned objective, then it's kind of like setting up a chess match between us and the machines

3. symbiotic [ˌsɪm.baɪˈɒt.ɪk] - (adjective) - Involving interaction between two different organisms living in close physical association, typically to the advantage of both. - Synonyms: (mutualistic, interdependent, cooperative)

In terms of the evolution of viruses that facilitate more virulence, are there ways that we can make viruses artificially symbiotic in some way?

4. virulence [ˈvɪrələns] - (noun) - The severity or harmfulness of a disease or poison. - Synonyms: (deadliness, toxicity, lethality)

In terms of the evolution of viruses that facilitate more virulence, are there ways that we can make viruses artificially symbiotic in some way, the way that there are symbiotic bacteria in the gut, for example?

5. transmissible [trænzˈmɪs.ə.bl̩] - (adjective) - Capable of being transmitted from one person or organism to another. - Synonyms: (contagious, infectious, communicable)

The silver lining in this pandemic is that this particular virus has evolved to be extremely transmissible between people.

6. retaliation [rɪˌtæliˈeɪʃən] - (noun) - The action of returning a military attack; counterattack. - Synonyms: (reprisal, revenge, retribution)

But then we go on. If there's retaliation, if there's more, you have full scale nuclear war.

7. progeny [ˈprɒ.dʒə.ni] - (noun) - A descendant or the descendants of a person, animal, or plant; offspring. - Synonyms: (offspring, descendants, children)

Well, that's an interesting question. I'm not sure we can direct the evolution of the virus because it has huge millions and trillions of progeny.

8. lethal [ˈliː.θəl] - (adjective) - Sufficient to cause death. - Synonyms: (deadly, fatal, mortal)

Same thing with killer robots, for example, lethal autonomous weapons.

9. humanitarian [ˌhjuː.məˈnɪ.tər.i.ən] - (adjective) - Concerned with or seeking to promote human welfare. - Synonyms: (compassionate, benevolent, philanthropic)

And obviously, using that in a populated area would be a catastrophe with no possibility to provide meaningful humanitarian aid in the region.

10. alignment [əˈlaɪn.mənt] - (noun) - Arrangement in a straight line, or in correct or appropriate relative positions. - Synonyms: (arrangement, positioning, coordination)

And Anthony mentioned this idea of alignment, the fact that we need AI systems to be pursuing objectives that are aligned with human objectives.

How to avoid catastrophe. The future of life - Nobel Week Dialogue 2022

So it's Friday afternoon, and we're going to talk about catastrophes and dooms. Sorry about that. But in the title there is the word avoid. So I think we can cling to that and see if there are any ways out of this dark field. You are all very warmly welcome, and we will have a little discussion here, and hopefully there will be some minutes at the end where you can ask questions.

We have representatives here from various areas: nuclear weapons, diseases and pandemics, the universe, this little matter, climate change, and whatever artificial intelligence might bring. If we were to define catastrophe within your expert field, just to kick off briefly with a very feel-bad question, what would the worst scenario be? Beatrice Fihn, we can start with you, and I think, working with nuclear weapons, we might guess where you are heading. Cheerful topic. Yes. No, I mean, the use of nuclear weapons would be catastrophic, even if it's just one. So it's basically a question of how bad it would be, ranging from one of the smaller nuclear weapons that exist, detonated perhaps by accident, or miscalculation, or just as a one-off.

It's something that we've talked about so much this year, after watching Russia's illegal invasion of Ukraine and the concrete threats that followed to use nuclear weapons in that conflict. And a lot of people talked about it in ways that almost tried to minimize the impact: don't worry, a small one wouldn't be too bad. But I think our view of what a small nuclear weapon is, is a bit skewed. Russia's tactical nuclear weapons mostly range from ten kilotons up to 100 kilotons. The bomb in Hiroshima, which killed, I think, over 100,000 people, was around 15 kilotons, at the smaller end of the small tactical nuclear weapons. And obviously, using that in a populated area would be a catastrophe with no possibility to provide meaningful humanitarian aid in the region.

But then we go on. If there's retaliation, if there's more, you have full scale nuclear war. It could end humanity as we know it, not meaning that humans would be extinct. There could still, of course, be people surviving, but the way the world would operate and work would be completely changed. So it would really be a full-scale catastrophe on an unprecedented level. In that vein, Paula, from your perspective, in your field, what would be the worst scenario?

Well, my field being astrophysics and astrochemistry, one could think naively that maybe we have a big asteroid impacting us, but I think the probability of that is low. And actually, in the past, if you think of 65 million years ago, when dinosaurs were in our place, that impact actually made room for us to come in. So who knows? If something like that happens, maybe something better comes after us. Anyway, I don't want to think about that; it doesn't feel right from my perspective. I think the real threat is ourselves. This is what I believe, and I also say it to my daughter and my students: we really need to be super careful. And, as we have heard already today, we are in such a special, privileged situation.

We are alive, we can see the universe, we can understand part of the universe, we can understand how the universe originated. There are still lots of questions, but it doesn't matter. So we are here with big brains, and we want to use these big brains to build a better future and not to let ourselves destroy ourselves, because that could really be terrible. Of course, life will not be exterminated, as you said. Even if human beings are not around, there will be smaller, or say simpler, forms of life that may evolve if something happens because of us. I hope the next stage of evolution will be more intelligent. But I think we have to watch ourselves and make sure that we continue our studies and not just stay in a box.

This is sometimes also what I feel, that people see a scientist doing something and think that is all they should do. This is also what I see in my students: they just do their things and they think that that is the universe. We have to stretch out. Scientists should talk to artists. This is what, in fact, Helga also mentioned before. And I think the reverse is also true: artists should talk to scientists. And of course, politicians should talk to everybody.

Anyway, that is my thought, thank you. Steven, we know a little bit about climate change, but uncertainty also seems to be a critical factor within this issue. What are the scenarios that you see? Well, I can roughly divide it into three: bad, really bad, and catastrophe. Again, we don't really know what's going to happen, what the temperature will be and what the consequences would be. But certainly, from what we see, if we go above three degrees, we would be in the very bad to catastrophe range. So what defines a catastrophe? I would consider a real catastrophe a breakdown of social order: enough failed agricultures in fragile nation states that cities and civilizations begin to disappear.

Historically, we've had that. Around the year 1000, in the golden age in the Middle East, in Persia and Iraq, we saw bad weather for ten years lead to a collapse of society, a collapse of the Byzantine Empire, and incoming marauders, like these dystopian things we see in Hollywood. All of a sudden, you're roaming around; you no longer have a stable system. And so a real catastrophe in my mind would be if enough fragile nation states fail and it spills out beyond their boundaries. That's the scale of things. Now, let me tell you some not-so-good news that is almost a certainty and cannot be avoided. We physicists think we understand how our sun will evolve. We think we understand the classification of stars like our sun and what we're going to find: 4 billion years from today, it becomes a red giant. But long before it becomes a red giant, it will slowly heat up, to the point where Earth is not going to be Earth. This is not three degrees, this is not ten degrees, this is hundreds of degrees.

And so this you can actually bet on, based on our understanding of nuclear physics and solar physics. That's a real catastrophe. A worldwide nuclear war would be very, very bad. The world came very close to this in the Cuban missile crisis; we know that now from historical records. The small nuclear bombs are actually the worst. When I was secretary of energy, I was arguing very strongly against them. We had the technology to go down below five kilotons, to just a few kilotons. I said, we can't do this, because if we do, the temptation to use it will be much higher. And so, while I was secretary of energy, we didn't. Now, what has happened is the Russians have done it, and the US has followed, and now there is the temptation to think, oh, it's only a few kilotons, five kilotons, and it becomes part of the usual horrible instruments of war.

Is it a catastrophe? No, it's a sliding downhill that's very, very dangerous. So it's a catastrophe in the making, I would say. But mostly it's the people I would have to worry about. A very bad World War One peace, and greed, starting in the United States, where euphoric investment led to a stock market crash, led to a depression around the world, led to the seeds of World War Two. That, together with the bad peace after World War One, was a catastrophe. Hopefully we can avoid that. Yes, hopefully. We'll continue this parade of joyous topics.

What about diseases and pandemics? That's not a new story; that's an old one. Okay, so in terms of pandemics, I think a really horrible catastrophe would be a pandemic on the scale that we have right now, except with a much higher fatality rate. So imagine what the world has been experiencing over the past almost three years, but with, let's say, a 30% to 40% fatality rate. What's happened with the spread of SARS-CoV-2 has truly been horrible for the world. But the silver lining in this pandemic is that this particular virus has evolved to be extremely transmissible between people, so that you can transmit it before you even know you're infected.

And as a result, a lot of other people get infected, and yet most people don't get seriously ill and die. I mean, unfortunately, many people around the world have died of this, but not as many as if it had the fatality rate of, say, MERS, another betacoronavirus that emerged as an epidemic in 2012. The reason that people have been able to make good vaccines against this virus is the fact that our immune system can handle it. That means we can mimic the properties of the virus to trigger that immune system, in the form of vaccines, to offer us some protection against serious disease and death. That has been fantastic. But what if we get another spillover event from an animal reservoir? Because this is happening all the time.

People just aren't really that aware, I suppose; most aren't, and I certainly wasn't before this pandemic. It's estimated that probably tens of thousands of people are infected every year with bat coronaviruses, and you never hear about them; they don't know they're infected, and they don't transmit between humans. All you need is an infection from an animal source where the virus is able to transmit between humans. And if it's one that our immune system can't handle, and it causes a lot of death, we're going to be in a much worse place than we are now. Now, I hope we've learned from this so that we can prepare. And going along with what you were saying, Steven: this spillover of animal viruses into the human population is a direct result of climate change. And unless we can figure out how to stop that, we are in for a lot more horrible surprises.

Well, they aren't really surprises, because this is going to keep happening, and there are direct examples. If you've ever heard of Hendra virus: this is something that horses get from flying foxes, a type of bat in Australia. It's been directly demonstrated that this is caused by climate change. The habitats of the flying foxes shift, they move near the horses, they excrete, and the horses eat things underneath where the flying foxes are roosting, and then they get sick. It has been transmitted a few times into humans, and it can be deadly, but right now it doesn't transmit between humans. That's a direct result of changing the habitat of these animals.

So unless we get really serious about preventing animals from going into urban environments where they can spread their viruses and interact with humans, we are in trouble: the animals should have their own habitats, the humans should be separate, and we should make sure we're not causing deforestation and other ecological disasters. We need to fix this, or we're in for another pandemic. Yes, I'm just thinking, can we just... You want a quick comment on that? A quick comment on that, yeah. There's an avian flu called H5N1. It has jumped into people, but very rarely, and it's not very contagious among people. But the people who get it have a two-thirds chance of dying. The fear is that if it jumps to humans and becomes easily transmissible through the air, we're in a totally different place than with Covid. It's just very different.

We're not talking about two thirds of people over 90 dying; we're talking about two thirds of everybody dying. Then the question is, are we now practicing on chickens, developing a vaccine for chickens, or are we just waiting for it to happen? Because it may not be as targetable as the SARS spike protein; that one was easy. And so I don't know, but we should be doing this. Stuart, within your field? So I briefly mentioned this at the end of my presentation, the idea that we would lose control. And Anthony mentioned this idea of alignment, the fact that we need AI systems to be pursuing objectives that are aligned with human objectives.

And the difficulty is that we don't know how to write down our objectives completely and correctly. And so if you have misalignment, if the AI system doesn't understand something about what humans want the world to be like, and they're pursuing this misaligned objective, then it's kind of like setting up a chess match between us and the machines. It's not that the machines have spooky emergent consciousness and hate human beings like in all the movies. It's just that the machines are extremely competent at achieving the objectives we set for them, and we set a mistaken objective. And this is an old story, right? So King Midas specifies his objective. I want everything I touch to turn to gold. And of course what happens? His food turns to gold, his drink turns to gold, his family turns to gold, and he dies in misery and starvation, right? So this is an old, old story, and it's happening again.

It's happening in social media, where we define the objective of the algorithm as to maximize clicks or engagement. And the consequences are that the algorithms learn to manipulate people, actually turning them into more extreme versions of themselves, in order that the algorithms can better predict what they're going to consume and then feed it to them, sort of like drug dealers, if you like. And so that's an early warning. And these are very, very simple algorithms. When we actually have general purpose AI that will be much more capable than human beings across the board, we simply cannot continue to build our AI systems this way, where we have to specify an objective, and then the machine goes off and does whatever it's going to do to achieve the objective, because some people say, oh, well, we can just switch it off. But of course, if the machine has an objective, it understands that being switched off would prevent it from achieving the objective.

So the first thing it's going to do is find ways to prevent itself from being switched off. It's going to replicate itself across the Internet, it's going to disable the various off switches, and so on and so forth. So it's sort of like saying, oh well, if you want to beat AlphaGo, just put the stones on the right squares. Yes, but it's not possible for human beings to do that. And I was asked by a film director who was developing a new plot in which superintelligent machines take over the world. He said, okay, I want you to be a consultant on the movie, and you have to help me figure out how the humans are going to outwit the superintelligent machine. And I said, well, I'm sorry, I can't help you. So I think the answer is we have to stop building AI systems the way we're building them, which is to specify a fixed objective as if that objective were correct and perfect.

So we need new kinds of AI systems that operate on different principles, where in some sense they know that they don't know what the objective is. And I think there's some promise in that direction. But aren't there other threats already? I mean, I'm a professor, I teach undergraduates. And now I can see students saying, well, I don't really need to write the essay to get into Stanford; the computer will write a really good essay. And so there's a higher reliance on these tools. You see this, and I'm terribly afraid this generation of high school students, junior high school students, and college students is developing habits that they may not be able to shake. I think that's another catastrophe, and that's the WALL-E world.

Well, it's happening today. Yeah, no, I think that's real, and it's a cultural problem. You know, I'm planning to deal with it by basing all of the grade in my course on exams that students have to write themselves, with no access to a computer, so they'd better learn the skills. Can Microsoft say, no, no, no, I'm not going to give you the answer, you have to write it yourself? That would be good. So, going back to the word avoidance, because there are perhaps alternative ways forward for us, and you have been talking about this a bit. You also mentioned, Pamela, the fact that things can be intertwined. When we speak like this, it sounds as if these are separate fields, but of course it's all related. Climate change will surely create new conflicts where nuclear weapons might be used, for example, or diseases might not be cured because of huge migrations.

So how do we avoid this? Control, which you mentioned, is perhaps what we are looking for here. Beatrice, I'm thinking about control. That is, of course, something that was spoken about a lot with the balance of terror, so it can be a false control as well. Well, I think we have to differentiate certain things a little bit. An asteroid, for example, we can't control in the same way; a nuclear weapon, we built them, and we can take them apart. So I think that's really important. Sometimes people talk about nuclear weapons in particular almost as if they were a natural disaster, something we can't control. They're just here, magically, and nobody really asks:

How can we get rid of them? When really there are people, companies, and governments involved: building them, putting them together, practicing with them, targeting with them, for example. So of course, many of these things we can control. Same thing with killer robots, for example, lethal autonomous weapons. It will be companies building them, scientists working to research them, and governments deciding to utilize them. So of course we can control it; we can stop it. And I think nuclear weapons are also a very unique catastrophe, because basically two individuals can make this happen in 30 minutes. Putin and Biden can end humanity as we know it right now. If they decide, within a few hours they can start this.

And that's quite unique in a way. So when we talk about control, I think we also have to understand that we do have control over some of these threats, not all of them, but some of them, and we can actually do things about them. The Treaty on the Prohibition of Nuclear Weapons that, for example, ICAN has worked on was one way to do that. Negotiating a treaty prohibiting lethal autonomous weapons, for example, and regulating that, is another. There are many other ways of doing it, of course. But I think we as a society also have to start understanding our own power here: we do not have to be helpless, we do not have to be passive, just hearing these things and thinking, oh, it's too much, I can't handle it. I think it's important to break down these massive threats to humanity; they're really daunting. I often think about it myself. Climate change, for instance, I feel absolutely terrified about. One feels very hopeless: what can I do?

So I think we have to unpack them into smaller pieces, particularly the ones that are human-made. We have to pull away the curtains and see who's involved, to make them less intimidating and less overpowering. Any comment? Yeah, well, first of all, I just wanted to give a little hope about our sun becoming a red giant. This will happen about 4 billion years in the future, so by that time, I hope we will probably have found a way, if we survive until then. Yes, though we have about a billion years; it's a bit shorter. No, but first of all, we have to arrive at 4 billion years from now, which is a long time. We have been here, how long? A couple of million years as human beings, more or less.

I mean, I don't know the exact numbers, but just to let you know, most of the progress has been made in the past few decades. So we have a long way to go to make sure that by that time we will probably have found a way to travel and find a better place to be. But we really need to keep preserving our planet in the meantime. And one more comment, just to follow up on what you said: on the asteroid side, there is also a lot of effort from astronomers who are trying to detect near-Earth objects. In fact, there was a recent mission that sent a probe to crash into a small satellite of an asteroid, and it was successful. So, in that sense, this means that human beings can be capable of diverting the trajectories of asteroids, and maybe we will save ourselves.

But that's the thing: we really need to focus on the important things. One thing that always comes to my mind when I think about our planet is the pale blue dot seen by Voyager, the probe launched back in the seventies, when it turned back toward us and saw this little thing, and that was our planet. Carl Sagan at the time was really emotional about it. I also get emotional when I see these pictures, and they just show the fragility of our world. And we are here, the only beings who could be capable of saving this little blue dot. So please, let's try to do it. I'm thinking, I'm looking to my colleagues. Yeah. Questions? Yeah. Perfect. Go ahead, over there.

Hello. My name is Starro Coley. I'm from Nigeria, but I flew in from Dallas, Texas, in the United States, and I'm currently a microbiologist before starting med school next fall. My question is more of a biology question. In terms of the evolution of viruses that facilitate more virulence, are there ways that we can make viruses artificially symbiotic in some way, the way that there are symbiotic bacteria in the gut, for example? And can we sort of harness the catastrophe, if we are unable to avoid it in the future? Well, that's an interesting question. I'm not sure we can direct the evolution of the virus because it has huge millions and trillions of progeny, and the ones that actually win are the rare ones that have some kind of survival advantage. And often, in some cases, viruses become less virulent as they evolve.

But we can't control that. An example: in the 1950s, there were all these rabbits in Australia that had been introduced, and the Australians didn't want them there, so they brought in a virus that infected European rabbits, the myxoma virus. They brought it in to kill all the rabbits. The first year, something like 99.7% of the rabbits died. But some of the rabbits were not affected by the virus, and the virus itself kept evolving through mutations. In the end, the virus infecting the few rabbits that survived was less virulent. And so, within one or two years, there were just as many rabbits, because rabbits reproduce, not as fast as viruses, but they have a lot of progeny, because they breed like rabbits.

Right, but viruses breed even faster. So I don't think we can direct the evolution of the virus at all; that's going to happen through Darwinian selection. But what we can do, as we develop vaccines and things that can protect us, is distribute them, making absolutely sure that the distribution is worldwide, because it is idiotic for countries to think that they should vaccinate only their own population, as if the virus will know the difference between someone in their country and someone from a neighboring country. And we live in a world where we all travel. So it makes no sense not to have complete global distribution of available vaccines and treatments. Even if you won't do it for ethical reasons, it doesn't make sense from a public health perspective to do it any other way. Thank you.

And we are running out of time. In short, please. I just think that a lot of these threats share the same problem, right? The nation state's ability to protect us against these things is very limited right now, because they're so interconnected. A country can't fix climate change on its own. Nuclear weapons are not a national security issue; they're everyone's issue. If you threaten nuclear war, it impacts everyone. Same thing with artificial intelligence, pandemics, climate. All of these things are so connected. So how to avoid it? We have to cooperate better.

Good words to end with. Thank you for a great question, and thank you all for your insights on a yet unwritten future. Thank you so much.

Nuclear Weapons, Pandemics, Artificial Intelligence, Climate Change, Global, Prevention, Nobel Prize