The conversation focuses on the contemporary impact and challenges of artificial intelligence. Key topics include the evolving role of AI in society, job automation, and the growing need for inclusive conversations around AI's implications for employment and education. The panelists discuss how AI, such as ChatGPT and AlphaFold, has expanded beyond developers to affect general society, challenging our notion of human roles while emphasizing a collaborative approach between humans and AI.
The discussion then shifts to AI's implications for scientific research and privacy concerns. The example of protein folding with AI underscores the potential for AI to complement human efforts in research, highlighting the symbiotic relationship between AI and human ingenuity. The speakers stress the importance of leveraging AI as a tool while acknowledging the practical and infrastructural readiness challenges faced by various domains.
Key Vocabulary and Common Phrases:
1. consciousness [ˈkɒnʃəsnəs] - (n.) - The state of being aware of and able to think and perceive one's surroundings. - Synonyms: (awareness, cognizance, mindfulness)
Consciousness emerges in sufficiently complicated systems.
2. disruptor [dɪsˈrʌptər] - (n.) - An entity that causes significant change or interruption to a stable system. - Synonyms: (disturber, interrupter, changer)
AI is potentially going to replace what they do. And that's a huge societal disruptor.
3. mediators [ˈmiːdieɪtərz] - (n.) - Individuals or entities that facilitate negotiation and dialogue between conflicting parties. - Synonyms: (intermediaries, negotiators, facilitators)
AI mediators are now quite good at getting people with opposing views to come to see each other's view.
4. autonomy [ɔːˈtɒnəmi] - (n.) - The state of being self-governing; independent control over one's actions. - Synonyms: (independence, self-rule, sovereignty)
Because the debate about whether these things will want to take over is all about whether they have desires and intentions.
5. symbiotic [ˌsɪmbaɪˈɒtɪk] - (adj.) - Involving interaction between two different organisms living in close physical association. - Synonyms: (mutualistic, interdependent, cooperative)
Highlighting the symbiotic relationship between AI and human ingenuity.
6. triangular [traɪˈæŋɡjələ] - (adj.) - Having the form or shape of a triangle. - Synonyms: (three-cornered, tripartite, trilateral)
In human society we sometimes have a triangular control system so that no single agent has too much power over others.
7. accountability [əˌkaʊntəˈbɪləti] - (n.) - The obligation or willingness to accept responsibility for one's actions. - Synonyms: (responsibility, liability, answerability)
This black-box issue is very important, and so is the transparency and accountability issue.
8. evolve [ɪˈvɒlv] - (v.) - To develop gradually, especially from a simple to a more complex form. - Synonyms: (develop, progress, unfold)
We evolved with small warring bands of chimpanzees.
9. philosophical [ˌfɪləˈsɒfɪkl] - (adj.) - Relating to the study of fundamental questions about knowledge, existence, and morality, often involving abstract reasoning. - Synonyms: (theoretical, rational, speculative)
We need to philosophically, or maybe at the conceptual level, first of all discuss safety and what kind of safety we actually want to have.
10. regulation [ˌrɛɡjʊˈleɪʃən] - (n.) - A rule or directive made and maintained by an authority. - Synonyms: (rule, directive, ordinance)
We recently had a group called the AI Institutional Study Group that discusses regulation and legislation around AI.
The impact of AI - Nobel Prize Dialogue Tokyo 2025 - The Future of Life
There's been much discussion about the far-future possibilities of AI, and we will discuss that. But it's also good to focus on the here and now: the current impact, challenges, and opportunities of artificial intelligence. So let's start there and then move into the further future as we go along. This is a session in which I would love to have some of your contributions, so at a couple of points during this conversation I will be reaching out to you for questions and comments. We should move quickly on. Right, let me just make sure my telephone's turned off. That would be bad, wouldn't it, if the moderator's telephone went off.
So let's focus on current possibilities with AI. Let's start with you, Arisa, if we may. What do you see as the major challenges AI is presenting to us as humans right now? Thank you, Adam. So, yes, challenges. There are various challenges, like privacy, accountability, and transparency, but I think the biggest challenge is the question of what a human being is. This is related to the morning session. Many of you are using ChatGPT, Gemini, or even DeepSeek, and maybe you find them very effective in supporting your work, your lives, or your research. However, that questions us: what is our role as human beings, and how can we use these tools in a proper way? So it challenges what the human role is and what we actually expect of future society.
So that's the biggest question. Beneath it there are also more domain-based discussions about privacy, security, and safety issues, but I think we need to tackle the bigger issue first. Lovely, you started there, and one clear example of that being a problem is the automation of many people's jobs. Many people who are not in any way involved in the development of AI, or in thinking about AI, are finding that AI is potentially going to replace what they do. And that's a huge societal disruptor. And I guess the point that they are not involved, that they are just experiencing this, is a big change. I think so too. This whole ChatGPT and LLM development really changed things, because before 2015 or 2016 much of the discussion about AI's challenges was aimed at AI developers. Right now we are talking about employment, education, the future of work, and what the human role will be. The challenges have become much more user-oriented, and that widens the circle of people who need to talk about this topic.
Indeed, indeed. Exactly. It should be a much more inclusive conversation. Yutaka, do you want to comment on this briefly? Yes, I want to mention the recent advancement of AI. As Adam said, automation is progressing. We now have many AI agent services, or what we could call agentic AI, which can do things like make reservations for hotels and travel, or make purchases on the Amazon website. Past generative AI was about conversation: if we put in a question, it gives us an answer. What these systems provide now is action, a sequence of actions; they do things for the users. That's one thing. Another is physical AI. With the advancement of so-called robotic foundation models, robots now have generalized behaviors, so they can fold laundry or do many other housekeeping jobs. From now on, the robotics industry will change a lot, and that is happening right now. And in terms of AI agents, of course they can set their own sub-goals, and that raises questions about what goals they will set themselves.
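To make the shift from answers to actions concrete, here is a minimal sketch of the agentic loop described above: decompose a goal into sub-goals, then execute them as a sequence of tool calls. The plan steps and simulated tools are illustrative assumptions, not any real service's API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent: turns one user goal into a sequence of actions."""
    goal: str
    plan: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def decompose(self):
        # A real system would ask a language model to break the goal
        # into sub-goals; here we hard-code a plausible booking plan.
        self.plan = ["search_hotels", "compare_prices", "reserve_room"]

    def act(self, step: str) -> str:
        # Each step would call an external tool or API; we simulate it.
        result = f"{step}: ok"
        self.log.append(result)
        return result

    def run(self) -> list:
        self.decompose()
        for step in self.plan:
            self.act(step)  # action, not just conversation
        return self.log

agent = Agent(goal="Book a hotel in Tokyo for two nights")
print(agent.run())  # ['search_hotels: ok', 'compare_prices: ok', 'reserve_room: ok']
```

In a production agent the model itself would choose and order the steps, and each act() would have real side effects, which is exactly where the question of what sub-goals an agent sets for itself becomes practical.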
Yes, that's true, but currently it's more of a theoretical question. The reality is that we cannot yet expand the domains in which AI agents work. Making a reservation or purchasing something for the user is a good starting point, but if we expand beyond that, the agents no longer work. So we are not yet at the level of discussing whether that should be an issue for humans; we are at a more primitive level. Thank you very much. So much that is positive, and so much challenge, around this. Ada, do you want to talk about the impact of AI on research at all? On research? I don't know. Do you? Well, it's up to you. You work on folded proteins, and one of the major recent developments in AI has been the solving of the protein-folding problem with AlphaFold 2 from Google DeepMind. But maybe you want to talk about something else. The floor is yours, please. Thank you.
Proteins perform almost all functions of every living cell in every living animal, human, bird, flower, anything that grows. Proteins are able to do this, or are designed to do this, by their structure: the structure accommodates the materials that participate in producing new materials. When protein sequences, the sequences of the amino acids that make up proteins, first became available, people thought that maybe they could predict what the structure of a given protein would be. I don't want to say they failed, and I also don't want to say they succeeded. The success was marginal, though very good for the time: between 15 and 20% correct prediction of structure, based on the structures known at that time, which were very, very few. I'm talking about 50 or 60 years ago. And the level of prediction stayed more or less constant, with a little increase, up to about 40-45% correct prediction.
For the prediction effort, a specific organization was created, called CASP. It focused on specific proteins whose structures were at that time being determined by experiment, and in parallel participants tried to predict those structures. As I said, the correctness of prediction increased, but not beyond about half the cases. CASP was held every second year, each time somewhere else in the world. I thought that 50% correct prediction was fantastic, but Moult, who started the initiative, thought it should be better. When AI came into the game, the story changed, but not only because of AI: more structures were known by then, so the basis for this type of research became larger, and there was more information from structures that had been determined crystallographically by experiment.
It's very nice, actually, that you emphasize the human component in this, because the story could be told, as perhaps I introduced it, as: AI came along with AlphaFold 2 and suddenly we had the structures. But the human effort of the hundreds of thousands of people who had been collecting structures and depositing them in the Protein Data Bank, and the human successes that came before, go alongside it. So it's a beautiful example of humans and AI working together, actually. Yes, that's the only way structures became part of AI: protein structures, and also what proteins do, their performance. Can I just jump in here? I think what you mentioned is really important.
There is the human endeavor to do this research, and then AI arrives as a tool, and the question is how we can use this technology well, how we can use it correctly as a tool. My research field is more about how people can use AI in workplaces, or in research. And what I see is that this kind of collaboration is not working well in some fields. There are places where the collaboration works really well, but there are places where it does not, and that is not only because of technical limitations; it is more about how human beings can adapt, and about the environment and its readiness for AI.
What you mentioned is how researchers accumulate data and how they can then use AI as a tool. We may think of AI as a very great tool, but even if the technology is good, if people's awareness, or a workplace's readiness (infrastructure, networking, security, data sets) is not there, and I can say that many places are not ready yet, then the implementation of AI does not go well. So in the short term we might be expecting too much of the technology. We also need to reconsider our work styles and our awareness: we can adjust ourselves to AI, but we can also adjust how AI works. It's a collaboration. It's not that you just implement the technology and everything goes well; it's not a silver bullet.
Thank you very much indeed. That's a very nice point to open up to the audience. We don't have very long, so if you would like to make a comment or ask a question about this near-term impact of AI, please do so. Is there a hand raised? I see a hand raised here. Could we have a microphone come down, please? I'm told that there are people with microphones running around; they need to run faster. Over here, please. Sorry, all the way over here, please. Okay, I've picked somebody; I hope it's the right person. There we go. Thank you very much. If you could make a short comment, that would be great. Okay, thank you.
You mentioned employment. In the current situation, society is built around earning revenue as a return for the labor we do. Many years ago there was a really idealistic idea that automated systems would reduce our work, do all the laborious work, and let us enjoy our lives. But I think this implicitly stands on the idea that people still have property or earnings to live on, regardless of this reduced labor. In reality, do you think we, and especially those who are in power, are able to accept the idea that people keep earnings for their daily lives even though the work is done by automated machines? Thank you very much indeed for that.
Very interesting question. I suppose this raises the question of things like universal basic income: the utopian idea that our drudgery should be done by machines and we should be free to enjoy our lives. Would anybody like to tackle that question? It's a big one. Arisa, do you want to make a quick comment? When we talk about employment and what the technology can do, one answer is, as you mentioned, a universal basic income. On the other hand, when we think especially about the Japanese situation, we are actually facing a shortage of labor, so to some extent we welcome having some of our tasks taken over by artificial intelligence and robotics. The point is not that our jobs will be totally replaced by machines; the human role is to consider what can be done by human beings and what can be done by machines. And that kind of management, I think, can still only be done by human beings.
Thank you. Thank you. Yes, please, Ada. Just a little comment: between what can be done by human beings and what can be done by machines, there is in between what can be done by nature. Very true, very true. Thank you very much for raising this point of how the conversation has become one of fear of machines rather than benefit. Please. Yes, thank you very much for your interesting talk. I was wondering if you could comment on the dangers of the randomness of AI, and the fact that its so-called cognitive processes are essentially a black box.
Do you feel that this is a true danger? Thank you very much indeed. We will address the further-future dangers a little later in the discussion, but right now, one good example of the black box is AlphaFold 2. The protein-folding problem was solved by a process we do not understand: we don't know how AlphaFold 2 does this. So do you have a quick comment on the fact that, in the end, what we thought would be solved by understanding was not solved by understanding? You prefer not to make a comment? Fine. Yutaka, would you like to tackle that issue? Yes, of course there are many risks, and we have to consider them. Actually, in Japan we have the AI Strategy Council, with Arisa, and we recently had a group called the AI Institutional Study Group that discusses regulation and legislation around AI. We discussed AI risks a great deal, and what can be done to mitigate them.
That trend is ongoing globally, and we have to have international collaboration in that respect. To some extent there is misconception and misunderstanding of the recent technology, but in other respects maybe we have to take more cautious steps toward the advancement of AI. Sorry, Ada, do you want to go? No, no, I'd much rather listen to you. So now I can say something; I was waiting. In my opinion, one of the next duties of AI is to predict which proteins can do what. If we need to do something that is still not done naturally, can we design a protein, according to what we understand, to do this particular assignment?
And the question relates to this idea that, as humans, if we managed to do that, we would understand the processes we used to get there. In this case we are putting information into a deep learning model, basically, and getting out an answer, and we don't know how the model came to that answer. The answer is right, perhaps, but we don't know how it got there. Does that matter? It matters for the way we think, the reasoned way we think. But nature came to some incredible designs without studying together with us, so we have to give nature a lot of respect. True enough. Maybe. I have one comment. I think this black-box issue is very important, and so is the transparency and accountability issue. One thing is that, technically, we need to find a way to turn this kind of black box into something more like XAI, explainable AI. That is one issue. But on the other hand, when we consider societal risks, the problem is not only that the technology is a black box; it is that we actually don't know who takes responsibility for these risks.
Consider when something goes very wrong, or some kind of incident happens. The machine can tell us that it made this wrong decision for this particular reason, but that is not the explanation we want. We want to know who takes responsibility, and how to mitigate and prevent that kind of risk from happening again. So from a societal point of view we need to think about the accountability issue as well. There are many ways to tackle this black-box issue. My opinion, as a social scientist, and this was also the topic of the morning session, is that we need a collaborative approach among engineers, social scientists, and policymakers to tackle this new challenge, which also brings opportunities. Thank you very much indeed. Thank you for the point.
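As one concrete illustration of the explainable-AI direction mentioned above: permutation importance is a simple, model-agnostic technique that measures how much a black-box model's score drops when each input feature is shuffled. Below is a minimal sketch with scikit-learn; the dataset and model are stand-ins chosen for brevity, not anything discussed on the panel.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Fit an opaque model; we will probe it from the outside rather than open it up.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 30 times; a large mean score drop marks a feature
# the black box actually relies on.
result = permutation_importance(model, X, y, n_repeats=30, random_state=0)
for name, drop in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {drop:.3f}")
```

Techniques like this can show which inputs a model relied on, but, as the panel notes, they do not by themselves answer the social question of who takes responsibility when a decision goes wrong.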
We have so little time to cover so much, but anyway, we're touching on things. We have to move forward a bit. Let's use Geoff Hinton to begin that conversation. In this podcast recording (if we could put the Geoff Hinton picture up on the screen now, thank you) Geoff talks first about whether AI will become conscious. Clip two, please. That's becoming very central, what it is to be human. Because the debate about whether these things will want to take over is all about whether they have desires and intentions. And many people think, for example, there's something that will protect us, which is that they're not conscious and we're conscious. We've got something special that they ain't got and they will never have. And I think that's just gibberish. I'm a materialist. Consciousness emerges in sufficiently complicated systems, perhaps systems complicated enough to be able to model themselves. And there's no reason why these things won't be conscious.
Okay. And in fact, let's run straight into clip three as well, and then, Yutaka, I'm going to come to you. Clip three next, please; this one is about whether they would take over. I'm just worried by the fact that there are very few cases of more intelligent things being controlled by less intelligent things. Once they're a lot smarter than us, I don't think they'll put up with that. That's what worries me, at least. Now, there's one line of argument that's more promising, which is that a lot of the nasty characteristics that people have come from evolution. We evolved with small warring bands of chimpanzees, or rather our common ancestor with chimpanzees. And that led to this intense loyalty to your own group and intense competition with other groups, being willing to kill members of other groups. That sort of shows up in our politics quite a lot right now. These things didn't evolve, and so maybe we can avoid a lot of that nastiness in things that didn't evolve.
A nice thought, that we could learn how to behave from them. Yes. In fact, AI mediators are now quite good at getting people with opposing views to come to see each other's view. So there's a lot of good that can be done with AI, and if we can keep it safe, it's going to be a wonderful thing. AI mediators; maybe we should have AI moderators, I don't know. Anyway, it was interesting that he took a different view from Rich Roberts earlier, who was pointing out that humans kind of know how to behave towards each other; Geoff was saying that humans don't know how to behave towards each other, and maybe AI will. But Yutaka, let me come to you. Will AI become conscious is one question; and then, if it becomes superintelligent, will we be able to control it?
Thank you for asking. With regard to the consciousness issue, my answer is yes, we can build AI with consciousness, because, as Hinton-sensei said, I believe consciousness is a mechanism, and that mechanism can be clarified and realized in an artificial way. So AI could be conscious, and it can model itself; that's my answer. As for the other question, I believe a less intelligent agent can control a more intelligent agent. That is happening all the time in human society. It depends on how reward is distributed and how value is created. For example, if some agent is created to maximize some objective function, another agent can use that agent for its own purposes: if the objective function is a very narrow one, the other agent can exploit it for its own ends. So it's a relation of objective functions, of purposes, and we can design that. And in human society we sometimes have a triangular control system so that no single agent has too much power over others.
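A toy sketch of that control-through-the-objective point: the optimizer below is arbitrarily more capable than its principal (it can search a million candidates), yet the weaker principal steers it entirely by choosing what gets maximized. This is an illustrative assumption about one control channel, not a safety guarantee.

```python
def strong_optimizer(objective, candidates):
    """The more capable agent: reliably maximizes whatever objective it is handed."""
    return max(candidates, key=objective)

# The weaker principal cannot search a million options itself,
# but it controls the stronger agent by writing the objective function.
candidates = range(1_000_000)

pick_42 = strong_optimizer(lambda x: -abs(x - 42), candidates)  # principal wants 42
pick_min = strong_optimizer(lambda x: -x, candidates)           # principal wants the minimum

print(pick_42, pick_min)  # 42 0
```

The triangular arrangement Yutaka mentions would add a third party that can audit or veto either of the other two, so that no single agent holds unchecked power.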
Devices of that kind could also be possible. Do you think enough work is currently going into how to implement all that, basically under the title of safety research? No, I don't think it's enough. We have to be more careful about the risks of AI, as Professor Hinton always says. This is perhaps not a short-term but a middle- to long-term issue, but we have to make a sufficient effort to realize that kind of safety: more international research in this area, and governmental cooperation to monitor ongoing development in companies globally. Arisa, would you like to comment on this? I agree with what Yutaka-san said.
We are actually doing collaborative work on the safety and security issues. But when I talk with colleagues from overseas, I think we need to talk more conceptually about what safety means, or what consciousness means. In Japanese we call safety anzen, and some people may think safety applies only to physical, financial, or national safety and security. Even though we use the same word, Japanese safety might be slightly different from European safety, because it depends on circumstances and situation. We live in an island society and are well educated; in places with totally different circumstances, what safety means, and the safety criteria people actually want, are totally different. So even though we want to use artificial intelligence in our daily lives, in education, in politics, or elsewhere, we need to philosophically, or maybe at the conceptual level, first of all discuss safety and what kind of safety we actually want to have. This is not a technical or technological issue; it is about ourselves, deciding what kind of society and what kind of future we want to live in. That's the starting point of the question. Do you want to come in? I want to add something. The question was whether AI can become conscious. Nature is conscious. We can be upset about this point or that point, or try to change it, but nature is conscious without any human being making it so. It is so by nature. So all we have to do is follow nature in our AI.
What do you mean by nature is conscious? We are alive, and we can take a flight from Tokyo to London. Nature is conscious; otherwise the plane would fall into the ocean. If Geoff were here, he'd say that all of what you just said is all very well, but actually we're in an environment where governments are deregulating AI. They're loosening the fetters; they're saying that imposing any restriction will stifle innovation. And that means we're going in the wrong direction. He'd also say that government advisors are often people with vested interests in seeing AI companies succeed. So how do you change that environment? Do you need to change that environment? Changing the politicians? That's another big conversation.
We can talk about that at the end of the meeting if you like. Yes, changing the politicians is one way, but would anyone like to comment? Then I'll open it to the floor for very quick last comments. Well, I might not directly answer your question, but there's an old saying that I really like: the road to hell is paved with good intentions. Even though people are not intending to make wrong use of AI, and are trying to make society better, if there is too little collaboration, people may come to think that a safe society can be delivered by one giant company, one country, or one politician. So what we need to do is share viewpoints on what kind of society we would like, and on how we can govern AI systems.
And I think the world today is becoming very fragmented, and people are entering an AI race. But precisely because we are entering an AI race, and this is the actual point, we need to think about collaboration, about how we can make AI more controllable and bring it into a governance framework. What we are doing currently at the governmental level and at the academic level is very important, but we also need public debate. It should not be only scientists and politicians who make these rules or use this technology, because lots of people are actually using ChatGPT today. We are all in charge of this kind of governance and control.
Thank you. Thank you. Yutaka, very quickly. Yes. If Professor Hinton had attended this meeting, maybe he would have taken the risk side, and I would have wanted to take the very positive side; as it is, I have to do both at the same time. On the one hand, AI advances society a lot: we are using ChatGPT a great deal, and science will advance even more with the help of AI. That's true. On the other hand, maybe we have to take more cautious steps. And at the last dinner many people mentioned the Asilomar conference, which was a very good activity, and in the biology field there has been no accident so far. That's fantastic. Maybe the same thing should happen in the AI field as well.
So the Asilomar conference: self-organized by the people starting the biotechnology revolution, who got together and decided on safety principles then and there, 50 years ago. And yes, now the same kind of effort should happen. We have just a minute or so, but is there somebody who wants to make a comment or ask a question, please? I'd love to hear from somebody. There's somebody right in the middle; can we get a microphone to this person? Sorry if I didn't see anyone else. Just here. Thank you very much. I think the last word of this session pretty much goes to you. Thank you for the microphone, and thank you for a nice discussion. I have a positive opinion about AI.
If AI is more clever and smart than humans, and if government or companies are organized or managed by AI, society could become better. Of course there is a risk, but I think the evolution of AI is positive for humans. So what do you think about organization and management by AI? Ada, would you like to have your institute managed by AI? Because it may be cleverer than humans: better at management, better at handling interpersonal relations; as Geoff said, good mediators. What do you think? Very quickly. I'm not sure that I would like to have it managed formally by AI, but in real life it is, one way or another. In the end, AI is one of the more important factors in shaping our minds.
In practice, yes, it does manage you; but actually I don't think anyone should manage you, Ada. Myself, of course, but the institute is larger than just me. In general, would either of you, or both of you, like to comment in about ten seconds on the idea of AI management of companies and countries? AI can manage an organization very well if it is given the proper objective; setting the objective, the purpose, is the human's role. My quick answer is that maybe it can, but if we wanted to do that properly, we would need to change the system, because our current systems are designed so that organizations, governments, and companies are controlled and governed by human beings. What humans can do is totally different from what the technology does. So if we wanted to shift in that direction, we would first need to change the system.
Thank you very much indeed. So we all need to learn from each other, basically. Wow. Okay, thank you very much indeed. We got through a lot in a short time with your help. Thank you all very much indeed.
ARTIFICIAL INTELLIGENCE, SCIENCE, TECHNOLOGY, AI SAFETY, JOB AUTOMATION, HUMAN-AI COLLABORATION, NOBEL PRIZE