ENSPIRING.ai: Announcement of the 2024 Nobel Prize in Physics

The Nobel Prize in Physics for 2024 has been awarded to John Hopfield of Princeton University and Geoffrey Hinton of the University of Toronto for their work on artificial neural networks, which mimic the associative memory functions of the human brain. This breakthrough enables machine learning systems to recognize patterns across extensive datasets. Their innovations have been applied in physics fields such as particle physics, in everyday technologies like facial recognition and language translation, and in healthcare diagnostics.

Key Vocabulary and Common Phrases:

1. associative [əˈsoʊʃiːətɪv] - (adjective) - Connecting or relating ideas or things to one another. - Synonyms: (related, connected, linked)

John Hopfield and Geoffrey Hinton used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories.

2. cognitive [ˈkɑːɡnɪtɪv] - (adjective) - Pertaining to mental processes of perception, memory, judgment, and reasoning. - Synonyms: (intellectual, mental, psychological)

Billions of neurons wired together give us unique cognitive abilities.

3. generative [ˈdʒɛnərətɪv] - (adjective) - Having the ability to produce or create something. - Synonyms: (creative, productive, inventive)

It is a generative model.

4. algorithm [ˈælɡərɪðəm] - (noun) - A precise rule or set of rules specifying how to solve a problem. - Synonyms: (procedure, formula, method)

With David Rumelhart, we rediscovered the backpropagation algorithm.

5. prohibitively [prəˈhɪbətɪvli] - (adverb) - To a degree that prevents something or makes it impossible. - Synonyms: (extremely, exorbitantly, excessively)

It was prohibitively demanding computationally.

6. stochastic [stəˈkæstɪk] - (adjective) - Involving a random variable or chance. - Synonyms: (random, probabilistic, unpredictable)

Another important discovery came soon afterwards, from Geoffrey Hinton and Terrence Sejnowski. They created a stochastic version of the Hopfield network.

7. versatile [ˈvɜːrsətl] - (adjective) - Able to adapt or be adapted to many different functions or activities. - Synonyms: (adaptable, flexible, multi-talented)

However, a version with fewer couplings, called the restricted Boltzmann machine, developed into a versatile tool.

8. elucidated [ɪˈluːsɪˌdeɪtɪd] - (verb) - To make something clear or explain. - Synonyms: (clarified, explained, illuminated)

And in that process he also elucidated the important function of hidden layers.

9. interconnectedness [ˌɪntərkəˈnɛktɪdnəs] - (noun) - The state of being connected with each other. - Synonyms: (linkage, unity, cohesion)

Neurons wired together give us unique cognitive abilities.

10. revolutionary [ˌrevəˈluːʃəneri] - (adjective) - Involving or causing a complete or dramatic change. - Synonyms: (innovative, transformative, radical)

I think it will have a huge influence. It will be comparable with the industrial revolution.

Announcement of the 2024 Nobel Prize in Physics

Welcome to this press conference at the Royal Swedish Academy of Sciences, where we will present this year's Nobel Prize in Physics. We will keep to our tradition and begin the presentation in Swedish and then continue in English, and you are, of course, welcome to ask questions in either language later on. My name is Hans Ellegren, Secretary General of the Royal Swedish Academy of Sciences. To my right is Professor Ellen Moons, chair of the Nobel Committee for Physics, and to my left is Professor Anders Irbäck, member of the Nobel Committee for Physics and an expert in this field. [In Swedish:] This year's prize is about machines that learn: John Hopfield, Princeton University, and Geoffrey Hinton, University of Toronto, Canada, for foundational discoveries and inventions that enable machine learning with artificial neural networks. [In English:] The Royal Swedish Academy of Sciences has today decided to award the 2024 Nobel Prize in Physics to John Hopfield, Princeton University, USA, and Geoffrey Hinton, University of Toronto, Canada, for foundational discoveries and inventions that enable machine learning with artificial neural networks. Professor Ellen Moons will now give us a short summary of the prize. Please.

Thank you. Learning is a fascinating ability of the human brain. We can recognize images and speech and associate them with memories and past experiences. Billions of neurons wired together give us unique cognitive abilities. Artificial neural networks are inspired by this network of neurons in our brains. This year's laureates for the Nobel Prize in Physics, John Hopfield and Geoffrey Hinton, used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large datasets.

These artificial neural networks have been used to advance research across physics topics as diverse as particle physics, materials science, and astrophysics. They have also become part of our daily lives, for instance in facial recognition and language translation. The laureates' discoveries and inventions form the building blocks of machine learning that can aid humans in making faster and more reliable decisions, for instance when diagnosing medical conditions. However, while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way, for the greatest benefit of humankind.

Thank you. Professor Irbäck, are you ready to give a more detailed presentation? Please. Thank you. So this year's Nobel Prize in Physics is about artificial neural networks. Today we know that this is a powerful computational approach. This was not evident 50 years ago, but it was known that we mammals are very good at pattern recognition, by some sort of computation in our brains. And this sparked an interest in understanding the collective properties of networks of simplified neurons connected by couplings whose strength could become weaker or stronger. The idea would then be to determine the strengths of the couplings so as to achieve a certain function, and to do that by training the network on many examples.

A breakthrough came in 1982, when John Hopfield presented a dynamical network which could store and retrieve memories, an associative memory. The memory had simple binary zero-one nodes, with all nodes pairwise connected. The states that remained unchanged with time were identified as memories. Moreover, it was possible to introduce an energy similar to the energies one has when studying magnetic systems in physics, and that energy had the property that it was low in the states corresponding to memories. Metaphorically, the memories were located in valleys of an energy landscape. When starting from a distorted pattern with higher energy, the network would slide down in energy to a nearby valley, and by this process the distorted pattern could be corrected.

In follow-up work, John Hopfield also showed that this network was robust, in the sense that the binary nodes could be replaced with analog ones, and he also showed how the network could be used to solve difficult optimization problems. The creation and exploration of this network by John Hopfield was a milestone in our understanding of the computational abilities of artificial neural networks.
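
To make the picture concrete, here is a minimal sketch in Python of a Hopfield-style associative memory as just described: pairwise-coupled binary nodes, couplings set by a simple Hebbian rule, and recall by sliding downhill in the energy E = -1/2 * sum_ij w_ij * s_i * s_j. This is our illustration of the concept, not the laureates' code; it uses the equivalent plus/minus-one node convention rather than the zero-one nodes mentioned above, and the function names are ours.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: the coupling w_ij grows when nodes i and j agree
    across the stored patterns; no self-couplings."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def energy(w, s):
    """E = -1/2 * sum_ij w_ij s_i s_j; lowest at the stored memories."""
    return -0.5 * s @ w @ s

def recall(w, s, sweeps=10):
    """Slide downhill in energy: set each node to the sign of its local
    field until the state stops changing (a 'valley' of the landscape)."""
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store one 8-node pattern, distort it, and let the network correct it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # the distorted pattern has higher energy
print(recall(w, noisy))  # recovers the stored pattern
```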

Another important discovery came soon afterwards, from Geoffrey Hinton and Terrence Sejnowski. They created a stochastic version of the Hopfield network, based on statistical physics, and called it the Boltzmann machine. Here the focus is on statistical distributions of patterns rather than on individual patterns. It is a generative model: once trained, it can be used to generate new instances from the learned distribution. It has the same basic structure as Hopfield's network, but with two types of nodes, hidden and visible ones; the hidden nodes are there to make it possible for the network to learn more general distributions. However, while theoretically interesting, in practice the Boltzmann machine was initially of limited use: it was prohibitively demanding computationally.
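
The stochastic element can also be sketched in a few lines. In a Boltzmann machine, each binary node is updated probabilistically rather than deterministically, so that long runs sample states from the Boltzmann distribution p(s) proportional to exp(-E(s)/T). The toy sketch below, our own illustration with made-up couplings and no training step, shows only that sampling rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy symmetric couplings over five binary 0/1 nodes.
w = rng.normal(size=(5, 5))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)

def gibbs_sweep(s, T=1.0):
    """Stochastic update: node i turns on with probability
    sigmoid(local_field / T) instead of deterministically."""
    for i in range(len(s)):
        s[i] = float(rng.random() < sigmoid((w[i] @ s) / T))
    return s

# Long runs sample from p(s) ~ exp(-E(s)/T); sampling a *trained*
# machine generates new instances from the learned distribution.
s = rng.integers(0, 2, size=5).astype(float)
for _ in range(1000):
    s = gibbs_sweep(s)
print(s)
```

Training adjusts the couplings by comparing node co-activation statistics between a data-driven phase and a free-running phase, and it is this sampling-heavy procedure that made the full machine so computationally demanding.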

However, a version with fewer couplings, called the restricted Boltzmann machine, developed into a versatile tool, and I will soon mention it again. So far, I have talked about recurrent networks with feedback connections. Many of today's deep learning methods instead involve feedforward networks, where information flows from an input layer to an output layer via hidden layers. In the 1980s, Hinton showed how such a network with hidden layers could be trained, and in that process he also elucidated the important function of hidden layers. Then, in the 1990s, there were successful applications of multi-layer networks, for example for the classification of handwritten digits. But the networks that one could train had relatively few couplings between consecutive layers.
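
A feedforward network with one hidden layer, trained by backpropagation, can be sketched compactly. The classic toy task is XOR, which a network without a hidden layer cannot represent; the sketch below is our minimal illustration of the technique, not code from the prize-winning work.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: solvable only with a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden couplings
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output couplings

lr = 0.5
for _ in range(10000):
    # Forward pass: information flows input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back layer by layer
    # (the chain rule), then nudge every coupling downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically approaches [0, 1, 1, 0]
```

The backward pass is simply the chain rule applied layer by layer, and the hidden layer is what lets the network represent functions like XOR at all.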

It remained a challenge to train more general deep structures with high connectivity between the layers, and here, in fact, many gave up. But Hinton did not. He overcame this barrier by using the restricted Boltzmann machine to pre-train deep structures, and by this method he succeeded in implementing examples of deep and dense structures, which was a breakthrough toward deep learning.
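
The trick can be sketched as follows. A restricted Boltzmann machine keeps couplings only between a visible layer and a hidden layer, none within a layer, which makes sampling cheap; training one such machine per layer and stacking them is the pre-training idea described above. The sketch below is a rough illustration using one-step contrastive divergence, with biases omitted for brevity and toy data and layer sizes of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    """One contrastive-divergence step for a restricted Boltzmann
    machine: compare visible-hidden statistics on the data with those
    after a single reconstruction step."""
    ph0 = sigmoid(v0 @ W)                     # hidden probs, data phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T)                   # reconstructed visibles
    ph1 = sigmoid(pv1 @ W)                    # hidden probs, model phase
    return W + lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)

# Greedy layer-wise pre-training: train one RBM on the data, then use
# its hidden activities as the "data" for the next RBM, and so on.
data = (rng.random((32, 8)) < 0.5) * 1.0      # toy binary data
W1 = rng.normal(scale=0.1, size=(8, 4))
for _ in range(200):
    W1 = cd1_update(W1, data)
layer1 = sigmoid(data @ W1)                   # features feeding layer 2
W2 = rng.normal(scale=0.1, size=(4, 2))
for _ in range(200):
    W2 = cd1_update(W2, layer1)
# The stacked weights then initialize a deep network for fine-tuning.
```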

Finally, I have talked about how physics has been a driving force behind innovation and development in artificial neural networks. It is also interesting to see how physics as a research field is benefiting from these methods. One well-established example is data analysis in particle physics and astrophysics. An increasingly important application is in modeling materials, for example to search for more efficient solar cells. Yet another example is in physics-based climate modeling, to enable higher resolution. Finally, I also want to mention two successful applications outside physics: protein structure prediction and the analysis of medical images. Thank you for your attention.

Thank you, Professor Irbäck. I think we might have John Hopfield or Geoffrey Hinton with us on the phone. Good morning, Professor Hinton. Good morning. Please accept our warmest congratulations on receiving the Nobel Prize in Physics. Thank you very much. How do you feel right now? I'm flabbergasted. I had no idea this would happen. I'm very surprised. I can imagine. I'm sitting here in the beautiful session hall of the Royal Swedish Academy of Sciences at the press conference. There are many interested journalists from both the Swedish and the international press. Would you be ready to take some questions from them? Yes. Yes, please.

Thank you. My warmest congratulations on your achievements and on this year's Nobel Prize in Physics. My name is Susan Ritson, and my question comes from Swedish Television. I know many of our viewers, lay people too, are very curious about the discoveries awarded here today. I wonder, do you remember when you realized the breakthrough awarded today? Can you bring us back in time briefly? And what were the reasons for, or the inspiration behind, these revelations?

So I remember a couple of occasions with two of my mentors. I have an enormous debt to David Rumelhart and Terry Sejnowski. With David Rumelhart, we rediscovered the backpropagation algorithm. And with Terry Sejnowski, Terry and I discovered a learning algorithm for Hopfield nets that had hidden units. I remember very well going to a meeting in Rochester where John Hopfield talked, and where I first learned about the Hopfield energy function for neural networks. After that, Terry and I worked on how to generalize neural networks to have hidden units, and at the beginning of 1982 we came up with a learning algorithm for Boltzmann machines, which are Hopfield nets with hidden units. So the most exciting times were with David Rumelhart on backpropagation and Terry Sejnowski on Boltzmann machines.

Thank you. Okay, more questions. First here. Hello, Bogny Radevsky from Polish television. Congratulations. The question I have is a little bit about the future, because obviously we are very excited about what neural networks and machine learning can do now, but what we're even more excited about is the prospect of what they could do in the future. What are your predictions about the degree of influence that this technology is going to have on our civilization?

I think it will have a huge influence. It will be comparable with the Industrial Revolution, but instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us. It's going to be wonderful in many respects. In areas like healthcare, it's going to give us much better healthcare. In almost all industries, it's going to make them more efficient. People are going to be able to do the same amount of work with an AI assistant in much less time. It'll mean huge improvements in productivity. But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.

I think first we have here and then there. Hi, Simon Campanello. Congratulations on the prize. My question is: last year you said in an interview with the New York Times that you regret part of your life's work because of the risks of artificial intelligence. How do you feel about it today? There are two kinds of regret. There are regrets where you feel guilty because you did something you knew you shouldn't have done, and then there's regret where you did something that you would do again in the same circumstances, but that may in the end not turn out well. It's that second kind of regret I have. In the same circumstances, I would do the same again. But I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control.

Yes, please. Hi, congratulations on the prize. My name is Amelie Mijner Arn, and I'm from TV4, a Swedish TV channel. I would like you to expand on this Boltzmann machine. I wondered, what type of AI has come out of it? Like GPT, or how you see breast cancer in X-rays, or how you make funny pictures in DALL-E? What kind of AI builds on your research? So there were two different learning algorithms I was involved in. One was the Boltzmann machine, which was a learning algorithm for Hopfield nets with hidden units, and we did eventually find a practical version of that. But that's not what's led to the main progress currently in neural nets; that's the backpropagation algorithm. And this is a way to get a neural net to learn anything. It's the backpropagation algorithm that's led to the huge surge in AI applications and in the ability to recognize images, understand speech, and deal with natural language. It's not the Boltzmann machine that did that; it's the backpropagation algorithm.

More questions? One here, please. Hi, my name is Bill. I'm from the Swedish paper Ny Teknik. Do you have any favorite AI tool that you use? I actually use GPT-4 quite a lot. Whenever I want to know the answer to anything, I just go and ask GPT-4. I don't totally trust it, because it can hallucinate, but on almost everything it's a not-very-good expert, and that's very useful. I don't see any more. Yes, is there one more hand there, please, in the back? Yes. Hello. Congratulations. Paul Reese from Al Jazeera English. Could you just give us a sense of where you were when you got the call, and how it affected you? Is this the day you have in your diary, just in case you get that call, or is it a bolt from the blue? It was a bolt from the blue. I'm in a cheap hotel in California that doesn't have an Internet connection and doesn't have a very good phone connection. I was going to get an MRI scan today, but I think I'll have to cancel that.

Okay. This seems to be the last question from the press for you, Professor Hinton. Thank you. And once again, our warmest congratulations. We look forward to seeing you here in Stockholm in December for the Nobel Prize ceremony. Thank you. Okay, so let's move on to more questions about the physics prize and the research involved, or if you want to ask the committee members questions about their work. And again, questions are welcome in either English or Swedish, please.

Yes. If these two scientists hadn't existed, would we have things like GPT then? Professor Moons, do you want to address that? That's a very good and difficult question to answer, because it's hard to imagine. They have contributed, of course, enormously, and very early in the progress of this technology. In the eighties these first steps were taken, and later on other scientists built upon these developments. So, in a sense, it may have been difficult without those groundbreaking first discoveries and inventions.

I'm thinking, since John Hopfield is not here, I was wondering a little bit about what you think is the most exciting part of his discoveries. Professor Irbäck, do you want to address that? When it comes to his network, parts of it had been discussed earlier, but he was able to put the pieces together to create a network with a clear function and clear principles for how it worked, and it meant a lot in the field.

One more question here. Oh, sorry. Yes. I also wonder about the worries that Geoffrey Hinton expressed. What are your worries about this technology? Professor Moons, do you want to address that? Well, these types of worries are expressed a lot and discussed in the scientific community, and I think it is very good that they are discussed; it contributes to knowledge about machine learning in society. I think it's important that as many people as possible learn about the mechanisms of machine learning, so that it's not just in the hands of a few individuals. Professor Hinton is, of course, one of many who express their views on this, and I think that's very good.

Maybe I can add that there are, of course, many discoveries and inventions over time that it has been possible to misuse. But it is a common responsibility for society to have regulations to avoid that happening, and I think that applies to artificial intelligence too. I think that is it, actually; time is running out. Thank you for your interest and for participating in this press conference. We hope to see you again here tomorrow, when we will present the Nobel Prize in Chemistry. Thank you.

Professor Anders Irbäck, member of the Nobel Committee for Physics: could you please summarize this year's physics prize for us? I would say that it is about foundations in the field of artificial neural networks, which has become the main method in machine learning; foundations that were built on ideas and methods from physics. Will you develop a little bit more what the two laureates have done, respectively?

Yeah. When it comes to John Hopfield, who came first, I should perhaps say that John Hopfield was already, before the prize, a towering figure in biological physics. He had looked at other problems, but then he became interested in neural networks, and he created a model whose elements were similar to what one has in magnetic models in physics. This was a new thing for neural networks, and it was good because it put different elements together: he gave the network a clear function, and it worked according to clear principles.

And what he created, what part was this? Sort of the memory part? Yes, yes. This was the associative memory, exactly. And the other laureate, Geoffrey Hinton? Yeah. Very soon afterwards, he created a model directly based on the Hopfield network, but he changed it so that the focus was no longer on individual memories, individual patterns, but on statistical distributions of patterns. That was one thing Geoffrey Hinton did in the eighties. He also created learning algorithms for what are called feedforward multi-layered networks. What could that do? The whole idea in artificial neural networks is that you have a system of nodes, or neurons, connected by couplings that can have different strengths, and in order to achieve some function you have to train the network on many examples. Training means that one tries to determine good values for these couplings, and this is complex, of course; there are a lot of couplings in such a network. Hinton created learning algorithms for the Boltzmann machine and for multi-layer feedforward structures, and those were two very important contributions.

And these are important contributions to what we today call AI, artificial intelligence. Will you tell me something about how this affects us? I think we've all heard about it, but what would you say are the most important ways it affects us today? In many, many ways, not least in science, in physics and in other scientific fields. In physics, tools based on artificial neural networks had been around for quite some time, already before deep learning; there were many useful tools. And now, as the artificial neural network tools get more and more powerful, we steadily see new applications. Modeling in materials science is one important example. And outside physics, as Hinton himself pointed out in the interview, healthcare is certainly a very important area; these methods are already very good tools for the analysis of medical images of different kinds.

Now, we just heard Geoffrey Hinton here in the press conference saying he was flabbergasted. Is there anything personal you know about the laureates that you would like to share with us? No, maybe not, actually. Personal? I wouldn't say so. But they have both been, I think, true pioneers in finding new ways to tackle problems.

Would you tell us, in a sentence or two, just why you are excited about awarding the prize to this particular field this year? I think it is fantastic to create a completely new way of computing and to see how it develops into such a powerful tool. Thank you. Thank you very much, Professor Anders Irbäck, member of the Nobel Committee for Physics. Thank you.

Physics, Science, Technology, Nobel Prize, Artificial Neural Networks, Machine Learning