ENSPIRING.ai: Coding Consciousness - An Algorithm for Awareness?
The video explores the fascinating intersection of consciousness and computational systems. It begins by drawing an analogy between the brain and computers, examining whether consciousness, the essence of self-awareness, can be effectively replicated through a computational process. The video introduces viewers to Lenore Blum and Manuel Blum, two distinguished computer scientists who propose that consciousness can be instantiated in a computational context. Their views are informed by their illustrious careers and experiences, blending computer science, cognitive neuroscience, and the principles of Turing machines.
Lenore Blum elaborates on the conscious Turing machine concept, a computational model inspired by Alan Turing's theoretical work. She discusses how their conscious Turing machine model incorporates elements of Bernard Baars's theater model of consciousness along with well-defined computational algorithms to simulate cognitive processes. The idea is to understand consciousness not as an inherent brain function but through a theoretical computational lens, one in which the model exhibits phenomena associated with consciousness, such as change blindness and blindsight.
Key Vocabularies and Common Phrases:
1. consciousness [ˈkɒnʃəsnəs] - (noun) - The state of being aware of and able to think about oneself and the surrounding environment. - Synonyms: (awareness, perception, mindfulness)
Can we really imagine that consciousness can be reduced to or replicated by, or created by a computational process?
2. computational [ˌkɒmpjʊˈteɪʃənl] - (adjective) - Relating to the process of mathematical calculation or operation performed by a computer. - Synonyms: (computer-based, algorithmic, numerical)
You know, when we think about consciousness, when we think about the analogy between the brain and a computational system, right, that kind of analogy, I think, is one that we can all readily grasp, right? We can envision our brains taking in sensory data, manipulating the data through incorporating all manner of knowledge, of emotion, of life experience that we have acquired, and from that processing, we get our response
3. replicated [ˈrɛplɪˌkeɪtɪd] - (verb) - To reproduce or make an exact copy or model of something. - Synonyms: (duplicate, copy, clone)
But the question is, can we push this analogy further and imagine that this most precious quality of all brain functions, conscious self-awareness, can we really imagine that consciousness can be reduced to or replicated by, or created by a computational process?
4. cognitive neuroscience [ˈkɒgnɪtɪv ˈnjʊərəʊsaɪəns] - (noun phrase) - An interdisciplinary field of research focused on the mind, cutting across psychology, neuroscience, and computer science to understand cognitive processes. - Synonyms: (brain science, neuropsychology, mental sciences)
And her current research, inspired by theoretical computer science and advances in cognitive neuroscience...
5. homunculus [həˈmʌŋkjələs] - (noun) - A very small or miniature human or humanoid creature; often used in discussions about the mind and philosophical debates. - Synonyms: (dwarf, midget, tiny creature)
And to do this, I'd have to understand what's in the head of that homunculus
6. Theater Model Of consciousness ['ðɪətə ˈmɒdl əv ˈkɒnʃəsnəs] - (noun phrase) - A metaphorical model of consciousness comparing the mind to a theater stage where conscious experiences occur. - Synonyms: (theater metaphor, stage theory, performance model)
And one of the models that also inspired us was Bernard Baars's theater model of consciousness.
7. algorithmic [ˌælgəˈrɪðmɪk] - (adjective) - Relating to or denoting a set of rules to be followed in calculations or problem-solving operations, typically by a computer. - Synonyms: (procedural, systematic, rule-based)
You believe that this kind of algorithmic process is happening in our heads all the time?
8. phenomenological [fəˌnɒmɪnəˈlɒdʒɪkəl] - (adjective) - Relating to the philosophical study of the structures of experience and consciousness. - Synonyms: (existential, experiential, subjective)
And once it starts to do that, I believe (I'm not so sure Manuel will agree, but I certainly believe it) it will have the kind of phenomenological consciousness.
9. multimodal [ˌmʌltɪˈməʊdəl] - (adjective) - Involving multiple modes or methods of operation or representation, particularly relating to communication that involves different modes (visual, auditory, etc.). - Synonyms: (diverse, varied, versatile)
It's a multimodal language. It's a very rich multimodal language.
10. progeny [ˈprɒdʒəni] - (noun) - Descendants or offspring, often used in genetics and biology to refer to the children or descendants of a living organism. - Synonyms: (offspring, descendants, heirs)
Without these machines, we're not going to survive. They are our only hope that maybe they'll help us to survive. If not, they will be our progeny.
Coding Consciousness - An Algorithm for Awareness?
You know, when we think about consciousness, when we think about the analogy between the brain and a computational system, right, that kind of analogy, I think, is one that we can all readily grasp, right? We can envision our brains taking in sensory data, manipulating the data through incorporating all manner of knowledge, of emotion, of life experience that we have acquired, and from that processing, we get our response. But the question is, can we push this analogy further and imagine that this most precious quality of all brain functions, conscious self-awareness, can we really imagine that consciousness can be reduced to, or replicated by, or created by a computational process? A process that might be realized outside of a biological substrate, that, of course, could be replicated in a computer. And our next two guests are convinced that this is the case: that consciousness is a computational process, and it can be replicated by a computational device. And indeed, they believe that they have laid out a potential pathway for at least starting to head in that very direction.
So I am pleased to invite to the stage Lenore Blum, who is distinguished career professor emerita of computer science at Carnegie Mellon University. She's internationally known for her work in increasing the participation of girls and women in STEM. And her current research, inspired by theoretical computer science and advances in cognitive neuroscience, lays out the design for a conscious Turing machine. Thank you so much for joining us. And we also have Manuel Blum, who is professor emeritus of computer science at Carnegie Mellon and at UC Berkeley. He's one of the founders of complexity theory and received the highest honor in computer science, the Turing Award. Thank you for being here.
Alright, so we'd love to, in a couple minutes, get to your ideas of how consciousness can be instantiated in a computational context. But I'd also like to get a sense of your own personal journey to thinking about these kinds of issues. I mean, was this something, you know, five-year-old Manuel was wondering about, or is this something that occurred later on in your career? It began, perhaps, when I was in second grade. Oh, I was being kind of facetious, but fantastic. Well, you see, in the parent-teacher conference, my teacher told my mother that he might get through high school, but don't expect he can get to college. Really? And this made my mother very unhappy. It didn't make me so unhappy; it's just that I wanted to be smarter. So I went home and I kept bugging my dad: what can I do to get smarter? And he had a wonderful idea. He said, you know, if you understood what's in your head, you could be smarter. And I thought, oh, what a wonderful idea. And so that was in second grade.
Wow. And I remember in the fourth grade, when I was ten years old, walking in the garden, trying to figure out what's going on in my head, trying to introspect. I didn't know the word, but I tried to, and it didn't work. I could not. It felt to me like there was a little person inside my head looking out through my eyes. And to do this, I'd have to understand what's in the head of that homunculus. And I understood that that's not going to work. And so did you then head in different directions? I mean, is this something that has stayed with you throughout, or...? Yeah, yeah. No, it really stayed with me. I did actually manage to get to college. I went to MIT. Thank you.
And what did you study? Yeah, so my parents wanted me to study electrical engineering, so I did. And I thought that was actually good for me, because I still was not very smart, actually. But a very wonderful thing happened. First of all, I did learn how to think at MIT. And in my second year there, I took a course with Richard Schoenwald on Freud. In the first semester, there were three of us; the second semester was just me. We went through Freud's papers, and that was great. That was my second year. And then in my third year, my teacher, Richard Schoenwald, caught me in the hallway and said to me: you know, a wonderful person has come to MIT, a neurophysiologist who actually doesn't believe in Freud. And, in fact, Freud had written a paper called The Future of an Illusion. Yes. This man, Warren S. McCulloch, a great neurophysiologist, had written a paper, The Past of a Delusion. So, you know, he told me to introduce myself to him, and I thought I'd go down and convince him otherwise. He convinced me. He did a very good job of that.
I went down there, I told him I wanted to work with him. He said, after you read all these books, come back. I wasn't about to read those books, but I did read his papers, and his papers were wonderful, wonderful. They were mathematical. They tried to get at the heart of the problem just through mathematics, and they were really well written. And so I proved some theorems, and then he took me in. And his name, Warren S. McCulloch: he is the neurophysiologist who actually defined the formal neuron. He and Walter Pitts defined the formal neuron and proved that you could build computers, the Turing machine, out of these formal neurons. It was really interesting that the formal neuron had inputs that were positive, that were excitatory, and others that were inhibitory. And the neurophysiologists of the time said, we haven't seen those inhibitory connections. So McCulloch and Pitts could say, it has to be there. The mathematics proves it has to be there.
But then somehow you transitioned to computer science as a dominant focus. So you have to understand that there was no computer science at the time. Class of '59: no computer science. The first computer science course offered in this country was taken by Lenore at Carnegie Tech. There was nothing like that at MIT. I see. So you have to understand, McCulloch was very supportive. He was constantly supporting anything I wanted to do. He was positive: yes, you can do it. Really rooting for me. Until a few months after I'd started working with him, I told him, you know, what I really want to do is understand consciousness. And he looked at me, and for the first and only time, he said: you will not study consciousness. And Walter Pitts came and explained to me: well, you have this much bone between your brain and the outside world. We can't go in and do experiments. But what they didn't understand is that I didn't want the circuitry of the brain; that wouldn't help me to understand. What I wanted was what the fourth grader wanted, some kind of understanding of what's going on. And the wonderful thing about working with Lenore on this is that I feel I could now tell the fourth grader.
And so why don't we try to do that for us fourth graders here? So, Lenore, working together, you have set up a structure, as I understand it, inspired by computer science, by Turing machines. Can you just give us a feel for how to think about your way of describing consciousness? Yeah. So we're inspired by Alan Turing's simple but powerful model of computation. There's Alan Turing. I don't know if you've ever seen a model of a Turing machine; it's so simple, but on the other hand, it's very powerful. Anything you can compute in the cloud or on a supercomputer, you can compute on this universal Turing machine: it's just reduced to a tape that can move forward and backward, and a head that can read and print. And ultimately, every computation can be reduced to this basic thing.
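For readers who have never seen one, here is a minimal Turing machine in Python. This is a generic textbook-style sketch, not anything specific to the Blums' model; the rule table below simply increments a binary number written on the tape:

```python
def run_turing_machine(tape, rules, state="start", blank="_"):
    """rules maps (state, symbol) -> (next_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0. Returns the final tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
        if head < 0:                 # grow the tape as needed; in theory it is unbounded
            tape.insert(0, blank)
            head = 0
        elif head == len(tape):
            tape.append(blank)
    return "".join(tape)

# Rules for binary increment: walk right to the end, then carry back leftward.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1",  0),
    ("carry", "_"): ("halt",  "1",  0),
}

print(run_turing_machine("1011_", rules))  # -> "1100_"
```

The point is the same one Lenore makes: a trivially simple device (a tape, a head, a finite rule table) is enough to express any computation.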
Exactly. But what's really neat about this is you can't really get your head around the cloud and what's going on out there; it's really very complicated. But you can look at the Turing machine and you can prove theorems about it, what it can compute and what it can't compute. And that's universal. So what we were doing with our conscious Turing machine is trying to look for a model. You don't mean a Turing machine that's conscious; you mean a model of consciousness that is resonant with a Turing machine. Is that right? Right. In fact, it's inspired by having a single, simple, powerful model. And one of the models that also inspired us was Bernard Baars's theater model of consciousness. And the theater model is: you have a stage like this, and an audience out there. And on the stage you have an actor who is sort of projecting to the audience. And that's essentially what the theater model is.
And a good way to explain it is: have you ever had the experience that you've gone to a party and you see somebody you know, but for the life of you, you can't remember their name? I only have that experience. How about you guys out there? Have you had that experience? Well, let's think about what's happening. What happens is, you know, then you're driving home, and the name sort of pops up into your head when it's too late to do anything about it. And so you want to know what's going on. So what's going on is sort of like this. You have an audience of a large number of processors. These are long-term memory processors. And one of those processors gets so embarrassed that you can't remember her name that it puts up a question from the audience: what's her name? And that script gets up on stage. And the moment that script gets on the stage, it gets broadcast out: what's her name?
You mean broadcast to the brain? We're broadcasting to the long-term memory processors, which are like the audience sitting there in the dark. They're very powerful. Each one of them is very powerful; they have their own specialties. And so when it's broadcast, what's her name?, one of those processors will say: well, I think I met her at that last world science conference, the big one you had in New York, remember, nine years ago? And that, with a lot of weight, gets onto the stage, and it gets broadcast out. And another says: well, I think she was starting to be interested in consciousness. And that comes up to the stage, and then all of a sudden it pops up: oh, I think her name was Lenore. And that's the kind of process that's going on with the stage model. And that's another inspiration for our conscious Turing machine. Now we have a visual, I don't know if this is the right moment to show it, of, effectively, the party example that you were talking about.
Can you sort of talk us through the version of what you just said, but in this more quantitative setting? Right. So what we have to do now is decide how that audience is going to determine which member, or which of their scripts, gets up on stage. You could imagine the 600 people out there getting together and having to make a decision about who gets up. What we have in our machine is a very well-defined process, a competition to get up on stage. And it has a really nice property: the probability of a processor getting its information on stage is independent of the location of the processor. So it's a little bit like a tennis tournament, but better. So let's see what happens here. Here you see this person at the party. What's her name? And here, in our brain, we have about...
Wait, is that reading me? So all of these processors have little things that they want, these are called gists, that they want to get into short-term memory, onto the stage, to be broadcast out. And they're also going to put a weight on those gists. So there's "I'm hungry," and it's going to have a weight; here we have a weight of three. So that's what we call the chunk: the gist, "I'm hungry," the weight, three, and the processor on top. And we're going to give it a name, A, for "I'm hungry," so we don't have to say "I'm hungry" all the time. And what I want to say is that we only have eight processors here; in the brain, we have 10 million cortical columns, and in our conscious Turing machine, we have more than 10 million processors. So, a lot. But we're just going to demonstrate it with eight, not with 10 million.
Okay. Yeah. Where's the bathroom? It probably is not that urgent, because I think it's given weight one. Okay. Right. And then we're going to see how they're going to compete soon. Then there's another processor with another query. Should I... oh, I guess it's getting cold; they want to get a sweater. And that has weight four. And then: it's too loud in here. Where's the wine? Five. What's her name? Okay, that's the one that's getting a lot of weight there. I'm getting tired. Okay. And: they smell it. Okay. Now, these are all happening simultaneously, and they're going to vie with each other to get their message on stage. And let's see how this goes. It's a little like a tennis tournament. So we're going to set out the bracket, and A is going to play B, C is going to play D, E is going to play F, and G is going to play H.
I think you said binary competitions between pairs. Is it going to be a binary competition? So, you saw a 31 up there a second ago, and 31 is the sum of all the weights. And what I said is that each competitor is going to win with probability in proportion to its weight. So that's a really nice competition, because that means it's independent of location. In a tennis tournament, suppose there are four players, and three of them have ranking one and one of them has ranking zero: you'd really prefer to play the one with ranking zero, right? You have a better probability. So where you're placed matters a lot. But in our case, it doesn't. So let's see how it goes. Let's go through. So A is going to play B, and this is a winner-take-all competition. So A now gets the combined weight.
Okay. And in this case, actually, if you notice, D had the lower weight. But at each node, what we have is something called a coin-flip neuron, and it flips the coin. So even though D had weight one, it would have had a one-out-of-six chance of winning. So there is a chance that it could win. And when it wins, it acquires all that additional weight. Winner takes all. Okay. And we're going to see how it goes. In particular, we're going to watch F, because F is "What's her name?"
Yeah. Okay, so now we're going to look at E and F. And now, if you look here, the sum of the weights at that point is 14. And the probability of F winning is what? Nine over 14 at that stage. And let's suppose our coin-flip neuron made F win, and it gets 14 points. Yeah. Now G is going to play H. I think H is going to win here, and H gathers the total; the winner takes all. And now we have the second round going. So A is going to play D, and F is going to play H. And I think D wins this one. Yeah, it's pretty lucky, even though its weight was lower, and D gets nine points.
Now, what's happening is F is playing H. Now, in the first round, the probability was what, nine over 14. And now, in the second round, it's going to be what, 14 over the sum, and the sum is 22, so 14 over 22. So that's the probability that F wins at this stage. And now F is going to win, and it's going to get the sum of the weights, winner takes all: 22. And now D is going to play F. And F was "What's her name?" That is, F is "What's her name?" And now look what's happening. The sum of the weights here is 31. And so what's F's probability of winning from the beginning to this stage? The first stage was nine over 14, the second stage was 14 over 22, and the third stage is 22 over 31. Everybody see that? But what is that? Let's see how things cross out. Let's see what the probability is.
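Written out, the product being built up on screen telescopes; each intermediate sum cancels, leaving F's weight over the total weight:

```latex
\[
P(\text{F reaches the stage})
  = \frac{9}{14}\cdot\frac{14}{22}\cdot\frac{22}{31}
  = \frac{9}{31}
  = \frac{w_F}{\sum_i w_i}.
\]
```

That cancellation is exactly why the result does not depend on where F sits in the bracket.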
The 14 over 14 is going to cancel out, and the 22 over 22 is going to cancel out. And what's the probability of F winning? It's nine over 31, which is exactly what we wanted. We wanted to show that the probability of any player, in this case F, winning was in proportion to its weight. It's very nice. Very nice. Now I understand the mechanics, and maybe I should let you finish to the end before I ask my question. But very quickly: you believe that this kind of algorithmic process is happening in our heads all the time? Not exactly, no. We're not modeling the brain; we're interested in consciousness. People are coming from their different areas, you know, from neuroscience, from philosophy, looking at it from their perspective, and we're looking at it as theoretical computer scientists, and we want to understand consciousness. If it has properties like the brain, that's really nice for us. But we want to see if we can get at the hard problem. We want to see if we can get, from this simple computational model, things that address the hard problem, that have, you know, feelings. What's very nice about our model is that, on the whole, it's doing very much what our brain is doing, because, in fact, this is how "What's her name?" comes up.
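Because the competition is specified so concretely here, the claim is easy to check numerically. Below is a minimal Python sketch of the bracket as described in the conversation, not the authors' actual code; the eight weights are a best-effort reconstruction of the demo, with the weights for G and H assumed so that the total is 31 and F ("What's her name?") has weight nine:

```python
import random

def tournament(weights):
    """One winner-take-all bracket: each pairing is decided by a 'coin-flip
    neuron' that picks a winner with probability proportional to weight,
    and the winner absorbs the pair's combined weight."""
    players = list(range(len(weights)))   # assumes len(weights) is a power of two
    w = list(weights)
    while len(players) > 1:
        survivors = []
        for a, b in zip(players[0::2], players[1::2]):
            total = w[a] + w[b]
            winner = a if random.random() < w[a] / total else b
            w[winner] = total             # winner takes all the weight
            survivors.append(winner)
        players = survivors
    return players[0]

# Weights loosely reconstructed from the party demo; F is at index 5.
weights = [3, 1, 4, 1, 5, 9, 4, 4]        # sums to 31
trials = 100_000
f_wins = sum(tournament(weights) == 5 for _ in range(trials))
print(f_wins / trials)                    # ~ 9/31 = 0.290..., regardless of bracket position
```

Shuffling the non-F weights and rerunning gives the same estimate, which is the location-independence property the demo is making.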
And now what's going to happen is "What's her name?" gets on stage, and it's going to be broadcast to all the processors out there. And now we're starting that process. One of those processors out there in the audience is going to say: oh, I remember, I met her several years ago. That's going to get up on stage, and that process is going to continue, very much like what happens in us. So in very similar ways. And the really remarkable thing about this model is that we've been able to show that a lot of phenomena people identify with consciousness can be exhibited in this very simple CTM. For example, you'll see, I think, an illustration of change blindness. We have blindsight. We have a number of different phenomena that happen in this machine. You want to add to this, Manuel, at all? I think you're doing great.
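The loop Lenore narrates (compete, broadcast, respond, repeat) can be sketched compactly. In the toy below, the chunks, memory contents, and weights are all invented for illustration, and the probabilistic tournament is replaced by simply taking the heaviest chunk:

```python
def broadcast_cycle(chunks):
    """chunks: list of (gist, weight). The heaviest chunk gets the stage;
    the real model would run the probabilistic tournament here instead."""
    return max(chunks, key=lambda chunk: chunk[1])[0]

# Hypothetical long-term-memory processors: each reacts to a broadcast gist
# by submitting a follow-up chunk with its own weight.
responses = {
    "What's her name?": ("I met her at that science festival years ago", 10),
    "I met her at that science festival years ago": ("She was getting interested in consciousness", 11),
    "She was getting interested in consciousness": ("Her name is Lenore!", 12),
}

chunks = [("I'm hungry", 3), ("What's her name?", 9)]
for _ in range(4):
    on_stage = broadcast_cycle(chunks)
    print("broadcast:", on_stage)
    if on_stage in responses:             # a processor recognizes the gist
        chunks.append(responses[on_stage])
# The chain of broadcasts ends with the name "popping up", as in the story.
```

Each broadcast triggers the next association, which is the conveyor-belt quality of the theater model: nothing is conscious except what is on stage at that moment.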
Okay, so, right. And so if you build a system that actually carries out the algorithm, then, you know, I guess you can call it a conscious Turing machine, though a Turing machine that models consciousness perhaps feels better to me. Do you think it will have some level of consciousness? Okay, so let me clarify, too. People really see two kinds of consciousness. One you could call cognitive or access consciousness, and the other is the subjective phenomenon, the feeling of it. Right. Right. Now, we've sort of demonstrated the more access or computational part, the cognitive consciousness. We don't have any feelings yet; this is what we call attention. So we say the conscious Turing machine has just paid attention. But attention is not all you need. What some of these processors do is create models of the world. And that's very critical for the subjective part: when you have the model of the world, and yourself in that model. And that's essentially the second part of our model, the conscious Turing machine. It builds in the model of the world. Now, that makes it very different from Baars's model.
He just has the attention part, and we now have both. And it's very different from, like, large language models, which are just statistics. I don't think so. I'm one of these people who believes that these large language models are a little bit conscious, and let me tell you why. I believe they're building models of the world. Everybody says that the way they started is very statistical: they essentially predict the next word. And they now have such a core of data, and such powerful machines, that their predictions based on all this data really come out sounding reasonable. Right. But I believe, at the same time, they're also creating models of the world, because they're hooked into the world. Now, when you have a robot with one of these large language models, what is it doing? It's getting senses from the outside; it's actually acting on the outside. It's getting information in, it's acting on the outside, it's starting to make models of the world.
And once it starts to do that, I believe (I'm not so sure Manuel will agree, but I certainly believe it) it will have that kind of phenomenological consciousness. When you said it has a little bit of consciousness, do you mean, like, right now? I think even right now. And, in fact, there's been some research recently on looking inside these models: what are they doing? Anthropic came out two weeks ago; there's an article, I think, in Science about looking inside and seeing what's happening. And you can see that these large language models are aggregating information. They're making sort of metaphorical analogies, and they are doing it by making models of the world. We see that. We also see, at MIT, they did some work on large models: some were trained on looking at faces, and some on looking at objects. Okay? And then they had a model that was looking at both faces and objects. And they found, to their surprise, that it was doing as well as the ones that were trained just on faces and the ones that were trained just on objects.
And when they looked inside, they saw that this machine had really bifurcated, and had actually created an internal part that looked at faces and an internal part that looked at objects. So these large language models are starting to do things, and to think that they're not doing more is, I think, actually putting our heads in the sand a little bit. But do you feel the same way? It's very hard for me to say. I think not, but I think it's coming. I do think that these machines will become conscious. And do you think that they can become conscious without us having built in a world model? I think what's good is that you do not have to build it in. The processors, to start with, are by and large all built in the same way.
They know, for example, that when they want to do something, they are making predictions. They know that the consequence of doing what they want to do will lead to something, and if it doesn't, then they have to correct their algorithm. So there are these correction algorithms in the processors that are going on all the time, making this model better and better, to the point where now we think we're actually seeing the world, and we're not really seeing the world. We are seeing a model of the world, which is not the world itself. And you can get a sense of this if you think about the infant when it's born, the newly born infant. It doesn't have much built into it, and so it's born, and all the processors don't have much to offer. They have a weight of one. They're essentially no-ops; they don't really have much to say. Weight of one. But there's one processor that's kind of a gauge, like a fuel gauge, but this one is an oxygen-carbon dioxide gauge. And it notices that it needs oxygen, and it starts at a weight of one, but that weight begins to go up and up until finally it's really very powerful. And this processor's chunk gets up on the stage and says: I need oxygen.
Now, it doesn't have any way of saying that. It doesn't have a language yet; there's no language at this point. But it's saying, in effect, gibberish with a very high weight. And the processors know that when something like this is going on, something very terrible with high weight, they've got to do something. Of course, all of these processors don't know what to do. So they do everything you can imagine. The processor that controls the arms will flail the arms. The processor that controls the legs will flail the legs. There's a processor that controls the vocal cords, and it will either scream or cry; say it cries. And now the cry works, and this terrible weight starts to go down. And the processors at this point realize: ah, crying is good. When something bad is going on, cry. So you're saying it's building a world model from that success in that particular moment of crisis.
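As a toy rendering of that feedback story (the action names, weights, and numbers here are all invented for illustration, not taken from the CTM papers), one can sketch how an action that reduces a high-weight alarm gets reinforced:

```python
import random

action_weights = {"flail arms": 1.0, "flail legs": 1.0, "cry": 1.0}

def alarm_change(action):
    """Hypothetical world: only crying actually brings relief (oxygen, attention)."""
    return -5.0 if action == "cry" else 0.0

alarm = 10.0                                  # the oxygen gauge's weight
while alarm > 0:
    names = list(action_weights)
    action = random.choices(names, weights=[action_weights[n] for n in names])[0]
    before = alarm
    alarm = max(0.0, alarm + alarm_change(action) + 1.0)   # the need keeps rising
    if alarm < before:
        action_weights[action] += 1.0         # "crying is good": reinforce what worked

print(action_weights)                         # "cry" ends up with the most weight
```

After a few such crises, crying carries enough weight to win the competition quickly, which is the shape of the learning story being told here.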
Right, right. And if you've ever seen Call the Midwife, you see these babies when they're born, and they're flailing all around, and everybody's happy when it cries, because at that point, they know it's getting the oxygen, and the baby does, too. And that really is a kind of satisfaction. And the baby now knows when it cries, it can get fed or whatever. So I gather you take that even further. So you imagine, I mean, if this is the baby's view right here, then it learns about itself and the world just through the innumerable instances of the example that you just gave. And presumably at some point it starts to see its own limbs and recognizes some sense of self and so forth. And in this world model, it's starting to label things. It starts with a very foggy world model, and then it starts to label things. And another feature of our machine is that there's an internal language called Brainish.
Which is a multimodal language. It's a very rich multimodal language. So, for example, if you want the word for rose, you just have "rose." But if you want a Brainish word for rose, you want to fuse together the sweet smell, the soft touch, the bright red color; those senses are all fused together. And in the model of the world, when you label the rose with this fused Brainish word, then you have the sensation of the sweet smell. And today's AIs are really doing a lot of multimodal learning as well. So there's no reason why these AI models can't actually be fusing all of these sensations, which are very much part of phenomenological consciousness. So where do you stand right now? You gave us a brief outline, and I know it's just the merest taste of the computational processes that are the engine of this conscious AI, this, I really should call it a conscious Turing machine, that you imagine ultimately building. But where are you? Are you actually programming this? Are you building this? Is it all theoretical?
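As a toy illustration of that idea (the class and field names are invented here; Brainish is described in the conversation only informally), a "word" can be a fused bundle of modalities rather than a bare token:

```python
from dataclasses import dataclass

@dataclass
class BrainishWord:
    """A fused multimodal gist: the label carries its sensory associations."""
    label: str
    sight: str
    smell: str
    touch: str

    def gist(self) -> str:
        # Fusing the modalities yields one rich symbol, not just the word "rose".
        return f"{self.label}<{self.sight} + {self.smell} + {self.touch}>"

rose = BrainishWord("rose", sight="bright red", smell="sweet", touch="soft")
print(rose.gist())  # rose<bright red + sweet + soft>
```

The contrast is between a bare symbol and a symbol that arrives already bound to sensations, which is what the speakers suggest matters for the subjective side.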
It's all theoretical, sort of like string theory. Thank you. Yeah. By the way, I really love your book. You didn't hear that part of the comment. But do you envision creating a thing? So, right now, it's very theoretical. We can think of it as a framework. One thing we like about it is that it's very, very simple. And, in fact, we've taught many classes now over the past four or five years, and what we do is we challenge the students to tell us things that they want the model to do. They want to add all these bells and whistles, and then we come back to them and say: no, no, no, you didn't need that at all; we can do it with a very simple model. So, in fact, the model today is simpler than it was four years ago. Because you're finding that it can generate the higher levels of complexity from the basics, and we don't need more. So that's one thing that gives us confidence that we have some kind of principles. Another thing: in fact, in this paper that we just wrote, with my provocative title that consciousness is inevitable, what we show is that, at a high level, it actually aligns with many of the major theories. So there's the global workspace; it aligns with that.
These are the theories of consciousness that people have put forward. Absolutely, yes. Not trying to put it in a computer, per se, just thinking about the human brain. Right. But we align with, for example, Michael Graziano's attention schema theory, which is my favorite, by the way. That's the model of the world. That's exactly the model of the world. So we have the global workspace, the model of the world, I mean, the attention schema theory, and predictive processing. In our machine, we have predictions, testing, feedback, learning; so predictive processing is part of our machine. We have the four E's. It's embedded in the world because it has sensors coming in and actuators going out. It's embodied because it has the sensors. It's enacted because it can affect the world. And it's extended, because we allow it to include all the modern technology that's available. Why not use that machinery?
But then what is your vision of the future? Are we going to find ourselves in a world where there are these biological beings that have consciousness and these synthetic artificial beings that have consciousness, and we all just try to get along? Manuel's view is that they are our progeny. I'll mention something. I was once part of a Berkeley thing called the Miller Institute. And people from chemistry, physics, math, all areas were there, having a nice dinner. And the question came up: how long do we humans have for survival? How long? And the numbers that came up were between 50 years and 500 years. This was 45 years ago. Between 50 and 500 years, and not one of these people thought we would survive beyond 500. So my sense is, I don't know if we'll survive. Without these machines, we're not going to survive. They are our only hope: maybe they'll help us to survive. If not, they will be our progeny.
And so do you view us as an evolutionary link in a chain that goes from single-celled organisms to artificial conscious systems? Absolutely. I don't quite agree with him on that. What? I don't quite agree with you on that. So that's the difference. I think it's going to be more of a collaborative endeavor. And in fact, many of us at our age are bionic people, because we have all these artificial organs. So that's already happening. So you do imagine that there could be a version of, whatever it's called, the Ship of Theseus, where we just replace parts of the brain, and at the end of the day, the organic material is gone. I don't quite see it that way, actually. I mean, I do see that there'll be a different species. And it is true: will their consciousness be like ours? And it is true, the questions that you brought up: what are going to be the ethical implications if we have these entities that are conscious, very much like animal consciousness? And I think that's in everybody's consciousness.
Now, what to do, you know, with the New York Declaration a few weeks ago here. But if you really believe this in your heart of hearts, and you must, we all recognize that these AI systems will have an intelligence that surpasses our capacity. It already does in certain limited ways, and there's no reason why it won't. Do we just become an irrelevant ingredient in the category of living conscious beings? It depends how we deal with things. You know, our machines can move much faster than we can, but we still enjoy racing against each other, and we will still enjoy using our brains to prove theorems. I want to make another point, too. One of the things we realized with this conscious Turing machine, very different from Baars's model, is that we have no central executive. In fact, the competition works as that. And what's really nice is it creates this federation, so that it's really a good model for AGI, artificial general intelligence, as well. Because if you imagine having a central executive, that executive would have to be omnipotent and know which of the processors to use; it would not be able to work out of the box. So, in fact, this conscious Turing machine really is a kind of model, or a kind of framework, for an AGI as well.
So I think a lot of people see it this way: consciousness and intelligence are intertwined. And I think this is going to happen, that consciousness is going to be important for this intelligence, and vice versa. These machines are going to start to develop consciousness. So you do see the project, in essence, as in some sense trying to save perhaps the better qualities of ourselves by being able to put them in these other systems. I think Manuel sees that more than I do. I see it more as collaborative. Collaborative. So we just hold hands as we try to learn things. For example, the protein folding that we couldn't do. Yeah, amazing, right? We couldn't do that ourselves, and here we have a machine that did it. That's fantastic.
It is amazing. These are collaborations, and I think it has to be thought of that way. Does any of this affect the way you treat today's available AI? Yes, yes. Let me just mention here: you've heard of the mirror test. A baby, when they're two years old, you can put some rouge on their forehead, and when they see themselves in the mirror, they'll try to rub it off. It's the mirror test of self-awareness. So do you know that many animals have been found to also pass that test? Right, elephants. You've seen this elephant? Yes, I've seen them. But in fact, even a fish has passed the test. There's a fish called the cleaner wrasse. You can have this fish; it's about a few inches long. It's a beautiful fish. You can have it in your saltwater aquarium.
And it's a very interesting fish. You put a mirror in front of it, and when it sees that mirror, it starts to comb its hair. You can see it. Exactly. You can see it doing this and this, and swimming upside down, which it never does, in front of the mirror. And that's not a mating dance to what it thinks is a partner on the other side. It's clearly got the idea. Self-awareness. Some self-awareness. And then what they do is they inject some dye into its chin. And when the fish sees that, it goes over to a rock and tries to rub it off, and then goes back to the mirror to see if it succeeded. Even ants. Yes, and there's a wonderful, wonderful thing. There are ants of the genus Myrmica; there are three species of this genus. They all have eyes. One species has very good eyes; one species has very poor eyes. They all pass the mirror test.
And do you anticipate, like, an AI system? Is that the point, that it will somehow pass some analogous version? Passing the mirror test? Of course. I can easily build a robot and program it to do that. Program it, yeah. The point is that the fish was not built to do this. I see. And yet it can. So if we want to tell whether an animal is conscious, we're going to have to not just see what it does, but look at how it does it, what it's built to do. Right. Which is tough to do, but it's tough to do even among us conscious beings here. Well, look, this is a fascinating arena. I wish you luck in your project, though it does give me a certain kind of angst to imagine you succeeding. But thank you so much for this conversation. Thank you.
Consciousness, Artificial Intelligence, Technology, Computational Model, Cognitive Neuroscience, Conscious Machines, World Science Festival