ENSPIRING.ai: What Creates Consciousness?
The discussion of consciousness in the age of artificial intelligence starts with the basic question of what consciousness is. Despite our everyday experience affirming our personal consciousness, we remain uncertain about how matter creates conscious experience. There are various considerations about whether mind makes matter or vice versa, and theories about proto-consciousness existing in fundamental material particles. The discussion extends to understanding consciousness in the context of emerging AI technologies.
David Chalmers and Anil Seth, prominent figures in the study of consciousness, contribute their insights into whether AI systems could be conscious. Chalmers believes AI can achieve consciousness: the brain is a machine that produces consciousness, so a silicon machine could do the same. Seth argues that consciousness is an achievement of biological machines, and that our tendency to conflate consciousness with intelligence complicates the issue. They discuss the "hard problem" of consciousness, a term coined by Chalmers, which distinguishes between the easy problems related to cognitive functions and the hard problem of subjective experience.
Key Vocabulary and Common Phrases:
1. consciousness [ˈkɑːnʃəsnəs] - (noun) - The state of being aware of and able to think and perceive one's surroundings. - Synonyms: (awareness, sentience, alertness)
We do not know what consciousness is, and that's in light of the fact that each of us, I think, but I do not know for sure, can attest to what the experience of consciousness is.
2. phenomenological [fəˌnɑːməˈnɑːlədʒɪkəl] - (adjective) - Relating to the philosophical study of the structures of experience and consciousness. - Synonyms: (experiential, subjective, perceptual)
...generate inner worlds of phenomenological experience.
3. altered states of consciousness [ˈɔltərd steɪts əv ˈkɑːnʃəsnəs] - (noun phrase) - Any mental state induced by various biological, psychological, or chemical means that is significantly different from a normal waking state. - Synonyms: (trance, hypnosis, dream)
...what altered states of consciousness are like.
4. aggregate [ˈæɡrɪɡət] - (noun / verb) - A whole formed by combining several elements; to form or group into a single cluster. - Synonyms: (collection, assemblage, amalgamation)
...they somehow, in aggregate, generate inner worlds.
5. proto-consciousness [ˌproʊtoʊˈkɑːnʃəsnəs] - (noun) - A hypothetical state or quality that represents a precursor to consciousness. - Synonyms: (pre-awareness, nascent consciousness, embryonic sentience)
Matter, even at the level of fundamental ingredients, does contain the seeds of consciousness, something that some have called proto-consciousness.
6. biological machine [ˌbaɪəˈlɑːdʒɪkəl məˈʃiːn] - (noun phrase) - A metaphor for living organisms like the brain considered in terms of a machine comprised of biological components. - Synonyms: (organism, biological system, living mechanism)
I agree with Dave, that I think consciousness is, if you like, an achievement of a biological machine.
7. cognitive neuroscience [ˈkɑːɡnɪtɪv ˈnʊroʊsaɪəns] - (noun) - The scientific field that is concerned with the study of the biological processes that underlie human cognition, especially in relation to behavior and the brain. - Synonyms: (neuropsychology, brain science, behavioral neuroscience)
It's probably about as hard as anything in ordinary cognitive neuroscience.
8. panpsychism [pænˈsaɪkɪzəm] - (noun) - The doctrine or belief that consciousness is a universal and primordial feature of all things, potentially even at the particle level. - Synonyms: (universal consciousness, cosmic awareness, holistic sentience)
Another way things could go is it could turn out there's some element of consciousness at the very basis of matter. It's the view that you mentioned, the view people call panpsychism, and that's extremely speculative.
9. metaphor [ˈmɛtəˌfɔr] - (noun) - A figure of speech that, for rhetorical effect, refers to one thing by mentioning another. - Synonyms: (analogy, figure of speech, symbolism)
And we'll try and use metaphors, and we have this metaphor of the brain as a computer.
10. psychophysical laws [ˌsaɪkoʊˈfɪzɪkəl lɔz] - (noun phrase) - Hypothetical laws that would connect physical states or processes with conscious experience or phenomena. - Synonyms: (mind-brain laws, consciousness principles, experiential regulations)
And importantly, that there might be fundamental laws, I call them psychophysical laws, connecting physical processes and consciousness.
What Creates Consciousness?
Thank you so much for joining us this evening for this exploration of consciousness in the age of artificial intelligence. In any discussion of consciousness, it is important to get one thing straight at the outset. We do not know what consciousness is, and that's in light of the fact that each of us, I think, but I do not know for sure, can attest to what the experience of consciousness is, what consciousness feels like. Some of us can further attest, through meditative practice or chemically induced modifications, to what altered states of consciousness are like. But we are still very much in the dark regarding how it is that configurations of material particles that themselves do not seem to have any kind of inner world, somehow, in aggregate, generate inner worlds of phenomenological experience.
Some will consider this mystery and say that I have phrased it with undue bias. They'll say it is not that matter makes mind, but rather that mind makes matter. Or, in another variation, mind transcends matter. Or, in another variation, matter, even at the level of fundamental ingredients, does contain the seeds of consciousness, something that some have called proto-consciousness. These issues are surely deeply compelling in their own right. Consciousness is utterly essential to life as we experience it. But in recent years, these issues have become yet more central, because, as we all know, we are living through a transition in which artificial intelligences of various flavors are becoming ever more present, raising the question of the insights we might glean by thinking about consciousness in this era of artificial intelligence.
To discuss these issues, I am pleased to bring in two guests who have really spent decades immersed in these very questions, trying to gain insight into thinking about the process of thinking. First, we have David Chalmers, who is a university professor of philosophy and neuroscience and co-director of the Center for Mind, Brain, and Consciousness at New York University. His most recent book, "Reality+: Virtual Worlds and the Problems of Philosophy," was named one of 2022's best books of the year by the Washington Post. David, great to see you. Great to be here. And we also have Anil Seth, who is a professor of cognitive and computational neuroscience and director of the Centre for Consciousness Science at the University of Sussex. He is editor-in-chief of Neuroscience of Consciousness, and his book, "Being You: A New Science of Consciousness," was a Sunday Times top ten bestseller. Congratulations and great to see you.
Before we get into some of the details, I'd like to just get a real quick sense of where each of you is coming from. I think I know the answer, but even if you just want to give a yes or no, that would be good enough. Do you, David, think that an artificial system will ever be conscious? I think it's possible for an AI system to be conscious. I think it's possible for a machine to be conscious. The brain itself is a big machine. Somehow that machine produces consciousness. We don't know how, but it does it somehow. I think if biology can do it, I don't see why silicon can't do it. Mind you, we don't understand how silicon could give us consciousness, but we also don't understand how neurons give us consciousness. So I don't see a difference in principle. So that's a yes.
Take it. Absolutely. Anil, how about you? I'm going to give the annoying "it depends" answer. It depends on what we mean by AI. So I think for the kinds of AI that we have at the moment, I think it's very, very unlikely. I think it can't be ruled out. But I think that we overestimate the possibility because we conflate consciousness with intelligence, and we still have this pervasive idea that computation of some sort is the basis of consciousness. And I think that is a really shaky assumption. I agree with Dave that consciousness is, if you like, an achievement of a biological machine. But even if we call it a machine, the brain is a very different kind of machine, and it may be the kind of machine that silicon stuff just cannot emulate.
Yeah. All right, so let's just get into a little more detail. Famously, and I know that you've been asked this question so many times that you probably recoil at it, but in 1995, you coined the term the hard problem of consciousness, which for many people, certainly I include myself in that, crystallized why this is such a conundrum. So can you just give us a short summary of what you mean by the hard problem? Sure. And I should say, this was never an original observation. I think everybody knew in their bones that consciousness posed a hard problem. This label just kind of crystallizes the problem and makes it a bit harder to avoid. But you go to a conference on consciousness, and you find people talk about many different things. Sometimes it's just used for the difference between being asleep and being awake. Sometimes it's used for the ability to control your behavior in certain considered ways. Sometimes it's used for the ability to report certain internal states.
But I think where consciousness is concerned, those things are actually what I call the easy problems, not because it's straightforward to explain them. It's probably about as hard as anything in ordinary cognitive neuroscience, but we've got a paradigm for explaining those things. You come up with a mechanism that produces appropriate behavior, or, say, behavior typical of a wakeful person, and you'll have explained the difference between being asleep and being awake. But the hard problem of consciousness is subjective experience. I think you gave a great gloss on this in your introduction. It's the feeling of experience from a first-person point of view. The feeling of seeing and hearing, the feeling of feeling your body and emotions, pain, the feeling of thinking, the feeling of acting, all the stuff that we experience subjectively. And what makes it hard is that those paradigms we have in science, and especially in neuroscience and cognitive science, for explaining things in terms of mechanisms that do a job and produce behavior, don't seem to work for subjective experience. There always seems to be a gap. Explain sleep versus wake, explain report versus no report, and there's still the question: why is it subjectively experienced? It seems to need a new method. That's why it's a hard problem.
And for many of us, and I'm really thinking of my own journey in appreciating the depth of this problem, there was a time when I would hear things like that and say, hmm, it's just a matter of figuring it out. It's just a matter of understanding how the brain works better, fuller, more completely. And once we have that, somehow this explanation for phenomenological experience will emerge. And then I encountered this little thought experiment, which I think had a big impact on you, too. And maybe, Anil, I don't know if you as well. This thought experiment about Mary from Frank Jackson, and we have a little version of it that I'll quickly play, and then maybe I can have you both comment on what you think it may be telling us about the nature of conscious experience.
Imagine that in the far, far future, there's a brilliant neuroscientist named Mary, who, for some reason, is confined to a room in which everything appears in black and white. There is no color of any sort whatsoever. Mary can study and access and examine the world outside, but it all comes to her only in black and white. Even so, Mary is able to reach a goal that has long eluded humankind. She totally and fully unravels every last detail about the structure, function, physiology, chemistry, biology, and physics of the brain. She knows absolutely everything there is to know about the behavior of the brain's every neuron, every molecule, every atom. She knows precisely what goes on inside our heads, the details of all neural processes that cascade when we see a beautiful red rose or when we marvel at a rich blue sky. One day, Mary is allowed to leave her room, and the very first thing she sees is a plump red tomato. Now, here's the question: from this experience of the color red, will Mary learn anything new? Will she shrug and just move on, or will she be surprised or thrilled or moved or gain some new insight through this actual experience of color?
And if she does, what does that tell us about the limits of a purely physical description of the brain and consciousness? So that's the little story. So what should we take from it, and where do you come down on that story? I like the thought experiment. I mean, this thought experiment has been used for many different purposes, but I think one thing it does wonderfully is it illustrates the gap, a certain kind of gap, between our understanding of the objective world and our understanding of consciousness. Because you can set it up so that Mary seems to know all of the objective properties of the brain: how your eyes respond to different wavelengths and how it gets fed to visual cortex, how it gets categorized, how we come up with labels like red, green, blue, and so on. She knows all that before she ever sees color. So you'd think she knows everything about the world. But she knows everything about the objective world; she doesn't know about the subjective experience of seeing red.
If she sees it for the first time, it's like, oh, so that's what it's like to see red. Now, Jackson goes on to argue from here that this shows there's more in the world than physical processes. And that's a further story that involves a lot of controversial elements. But I think it's a wonderful illustration of this basic gap between our knowledge of the objective world and our knowledge of subjective experience. And so, Anil, how does this story affect your thinking? Well, I think I like it a bit less. And this is possibly because I'm not a philosopher by training, but I'm always suspicious of these kinds of thought experiments. They're sort of conceivability arguments. They ask us to imagine things which actually we can't really imagine. What would it be like to know everything, absolutely everything about anything? I don't think we can ever really know what that would be like, and therefore what would be surprising and what wouldn't be surprising. And also, Dave's right, there is a gap here. But for me, it's not a surprising gap.
Knowing about the details of how something works doesn't necessarily give you the experience of being that thing. Like, if I know everything about flying, I don't become able to fly. And so I imagine that if Mary did know everything there is to know, and she goes out of the door, she might say, oh, that's exactly how I would expect it. So she would shrug, potentially, she'd probably shrug, but of course she would still learn something new, because she would have an experience she hasn't had before. But that would be, I think, just reflective of a gap in how we get the knowledge, not some sort of deep gap in reality that has to be crossed, that shows that consciousness is beyond the reach of science. I don't think it shows that. And so when you think about consciousness, I gather that you place significant weight on the biological mechanism, of which we have one example, our own. How it has emerged: is that, do you think, utterly central to consciousness? I mean, when you say that, are you saying that it has to be the things that make us up, you know, nitrogen, oxygen, carbon, hydrogen, sulfur? I mean, if you changed out the molecules, could it still work? Or is it really essential that we got here from some evolutionary trail that took us from single-celled organisms to here? Is that the vital part of what a biological system provides?
I think in practice, our evolutionary history is very, very important. That's true of pretty much every aspect. But I think if we could sort of magically be reconstituted without having had an evolutionary history, that would be fine, too. I mean, there are so many things about how we are as animals, how other animals are as the animals they are, that depend on their biology. Metabolism depends on biology, it depends on chemistry. Digestion does, many things do. So I think, as a sort of first approximation, it makes sense to me to think that consciousness is another biological property. It doesn't mean that necessarily only biological systems can be conscious. But as you said, that's the only system we know of so far. And we'll try and use metaphors, and we have this metaphor of the brain as a computer, but it's easy to confuse a metaphor with the thing itself.
When we do that, that's when I think we might get into trouble and think consciousness could be stripped away from the stuff that we're made of and implemented in some other thing. So are you driven by the hard problem? Because I've also heard you coin an analogous type of problem called the real problem. Well, that was mainly to annoy Dave, just to wind him up a little bit. But I think the hard problem has been so definitional for the field. When I started in this area, about 20 years ago now or something, it was already the way the field was organized. Thanks, way to make me feel old, Anil. But it's really important, because it does highlight how difficult the problem is. But I also think that the fact that it seems difficult now does not mean that it will always have this aura of there being something beyond the reach of explanation in terms of mechanisms.
To give a very imperfect analogy, we've been here before. About 150 years ago, not so long ago, people thought life couldn't be explained in terms of stuff, in terms of physics and chemistry. There had to be something else. There seemed to be an analogous hard problem of life. But of course, that didn't turn out to be right. We still don't understand every last detail of life, but there's no longer a sense of conceptual mystery that we need an élan vital, a spark of life, something beyond the laws of nature as they are. So I like to think that as we build bridges between explanations in terms of mechanisms and what experience is like, then maybe the hard problem won't be solved, but it might be dissolved.
You know, I tend to agree with you. In fact, I often make the same analogy between the fact that there was a hard problem of life and a hard problem of consciousness. We solved the former, we think. But is that too quick? Have we solved the hard problem of life? Are we convinced that, you know, we understand the mechanism, that it's just a matter of putting things together in the right way? I don't think it's a great analogy, to be honest. In the case of life, all of the things that we really wanted to explain were kind of these objective processes of reproduction, of adaptation, of metabolism, growth, and so on. And I think there was a certain point where we didn't see how it is that a physical mechanism could do those functional things. So some people thought we need maybe a vital spirit.
But the problem was always the problem of explaining these objective behaviors that living systems show. And eventually, we found how DNA and so on could extend into a story about how that could happen. Whereas in the case of consciousness, we've got analogs to all those things, but those are all the easy problems. Yes, if someone said, look, we can't even explain how it is that people are walking and talking and remembering and so on, then that would be analogous to the vitalist about life. But there's this further datum in the case of consciousness, which is first-person subjective experience, which doesn't really have an analog in the case of life, except for the case of consciousness itself. Some people have argued, well, actually, we can't explain everything about life, because consciousness is itself a crucial aspect of life that we're not explaining. But then we're just back to the same problem.
And so when it comes to the real problem, which I guess you would maybe characterize as the easy problem, how far along are we? I mean, in terms of even just having models of consciousness that can give us insight into the physical processes that allow this kind of experience to emerge. Some days, you know, when I wake up, I think we're nowhere. It still seems as mysterious as ever. But then other days, with a bit more of a sober look at things, progress has been made. And I think, strategically, that's one of the advantages of this easy problem, real problem approach.
I do think they're very similar. I call it the real problem mainly to emphasize that we can still talk about the nature of experience, rather than just what people do or say. We can try and explain why vision is the way it is, different from emotion, different from experiences of free will. And much more is now, I think, understood about why these experiences are the way they are, and why they are different from each other. And we are now at a stage in the neuroscience of consciousness, with the help of other disciplines as well, where we have a bunch of theories that target different aspects of consciousness and that are beginning to be compared and contrasted. And whether we will come up with a fully satisfactory solution to consciousness as a whole, I don't know. It's too early to say, but I don't think we can exclude that as a possibility.
So the analogy, I think, operates at a different level. It's not that life as a problem is the same as consciousness as a problem. It's just that something that seemed really mysterious with the tools and concepts available at one point was no longer so mysterious with a different set of tools and concepts. And we should all show some humility in the face of this problem. I mean, it's very early days. No one's philosophical or scientific pronouncements now are going to reflect how things are at the end of the day. So I think we should all be open to all kinds of amazing new insights which will make what we're saying now seem primitive.
But I actually like what Anil calls the real problem. I'm not wild about the name. I think there are a lot of problems here, but I think it is important that we can actually make progress in the science of consciousness without solving the hard problem. If we had to wait for a solution to the hard problem, we might be waiting a long time for the science. And one thing we've really seen over the last, say, three decades or so since the science of consciousness really, really got going is people studying things like the neural correlates of consciousness, those processes in the brain that correlate most directly with consciousness. You can study that scientifically without having an answer to the hard problem.
So I call this the mapping problem. I think of it as one of the easier problems. But I totally agree with Anil that this is a really important problem for the science. And it could well be that as we get better and better mappings, correlations from physical processes to consciousness, somewhere along the way we'll be struck by something, say some mathematical property of the processes and consciousness, that leads us to propose: here is a principle that might cross the gap. Yeah. And so you mentioned the number of theories that people put forward, and they are replete. Right.
We have integrated information theory, global workspace theory, attention schema theory. I mean, there are many. Do you have a favorite or one that guides your own thinking? Other than mine? Yeah. What's that? I prefer my theory. Let's hear it. So, the others, I think they all have good points. One of the problems is that they're all theories of slightly different things, which does make them difficult to compare. So the theory that I tend to favor, it's a collection of ideas, really; it's just that I put it in a particular way. It's the idea of the brain as a prediction machine. So arguably, it's not really a theory of consciousness at all, because it does not say, like, these are the sufficient conditions, and then, boom, consciousness happens.
The other theories tend to say something like this. The idea of the brain as a prediction machine goes way back. And it's really this idea that everything the brain does pretty much involves making predictions about the causes of sensory signals and then using sensory signals to calibrate, to update these predictions. And when it comes to consciousness, the idea is that everything that we're conscious of, whether it's an experience of the world, whether it's an experience of the self, whether it's an experience of free will or volition, is a kind of perception. It's the brain trying to make sense of the situation in some way. And in that framing, every kind of conscious experience can be understood, can be thought of as underpinned by this process of the brain making predictions and updating predictions but in different ways, in different contexts.
The slogan for this is that perceptual experience is a kind of controlled hallucination: we don't read the world out objectively; we create it, we actively construct it. But then the way I take it is that this doesn't just apply to the world around us. It applies to the experience of being a self within that world. It applies to emotion, it applies to free will. And ultimately, it's all about physiological regulation of the body. The reason brains do this prediction is because prediction allows control, and brains evolved, I think, fundamentally to control, regulate, keep the body alive. And if you pull on this thread long enough, you do get to this intimate connection between how consciousness seems to us and the fact that we are living, breathing, flesh-and-blood, energy-consuming creatures.
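As an aside for readers who like to see ideas made concrete: the prediction-update loop described here can be sketched in a few lines of code. This is a minimal toy, not anyone's actual model of the brain; the single hidden cause, the linear generative model, and the learning rate are all assumptions invented for illustration. The "brain" holds a guess about a hidden cause, predicts the sensory signal that guess implies, and nudges the guess to shrink the prediction error.

    import numpy as np

    rng = np.random.default_rng(0)

    true_cause = 2.0   # hidden state of the world (unknown to the "brain")
    w = 1.5            # assumed generative model: sensation = w * cause + noise
    lr = 0.1           # how strongly prediction errors revise the belief
    mu = 0.0           # the brain's current best guess about the cause

    for step in range(50):
        sensation = w * true_cause + rng.normal(scale=0.2)  # noisy sensory input
        prediction = w * mu                                 # top-down prediction
        error = sensation - prediction                      # prediction error
        mu += lr * w * error                                # revise belief to reduce error

    print(f"inferred cause: {mu:.2f} (true cause: {true_cause})")

The guess converges toward the true cause even though the system only ever receives noisy sensations, which is the sense in which perception here is an inference calibrated by sensory data, a "controlled hallucination."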
But at the end of the day, if I understand what you're saying correctly, you're not imagining that there's anything beyond the physical when it comes to consciousness. And you're not imagining that we need to modify our understanding of the fundamental ingredients that, at root, make up the physical. It's just a matter of putting it together and getting a deeper understanding of the processes, and somehow in there, an explanation for consciousness will emerge. Yeah, I think it really should be a last resort to invoke new fundamental principles of the universe. Right? I think matter is very complicated. It's not just neurons that turn on and off. It's not just atoms bouncing around in the void. The resources of this idea of materialism, that consciousness is a property of matter, suitably arranged: well, there's a lot that can be done with that. It seems short-sighted, I think, to say that, well, clearly we could never explain consciousness in terms of things happening in matter, because matter is really, really rich and interesting. So, David, when you think about, I mean, you didn't just coin the hard problem. You've been trying to solve the hard problem, you know, for decades. Can you imagine, even if you don't know the solution, the flavors of how the solutions might ultimately look? Sure, and I think there are a few different candidates. But the basic idea, which I tend to focus on, is finding some kind of mappings between physical processes and consciousness, and ultimately trying to boil that down to something really simple and fundamental. I mean, I like the predictive processing story that Anil was telling about the brain as a prediction machine, but I think in a way, that explains too much.
It applies just as well to unconscious processes as to conscious processes. It needs to be combined with some other, completely different bit of machinery to explain why some states of the brain are distinctively conscious and others are not. I like a number of the existing theories, like the global workspace theory, as giving you the beginnings of a physical basis for consciousness. But ultimately, what I would like is something like what you get in physics, where people sometimes say you're looking for laws so simple you can write them on the front of a t-shirt, like the fundamental laws of physics.
Well, if it turns out that we can't explain consciousness fully in terms of physical processing, then that doesn't mean it's beyond science. But it may mean we need something like another fundamental law or a fundamental principle to connect physical processes to consciousness. And then the question is, will it connect to something like biology? I'm skeptical. I think biology is somehow a little bit too high-level, in a way. I suspect it's going to connect to something like, if you look at the correlations between consciousness and the brain, it's really the informational properties of the brain that matter, and not ultimately the biological properties. If you ask me what I'm really looking for, it's some kind of beautiful mathematical equation that connects information and computation in the brain to consciousness. There is this integrated information theory that does some of that. I'm actually very skeptical about that for some other reasons, but it's at least trying to do the right kind of thing in coming up with a fundamental principle.
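As another concrete aside: the "whole versus parts" intuition behind integration-style proposals can be played with directly. To be clear, the sketch below is not integrated information theory's actual Φ, whose definition is far more involved; it is a made-up toy measure on a two-node system, with all names and dynamics invented for illustration. It asks how much the whole system's past tells you about its whole future, beyond what each node's own past tells you about that node's future.

    import itertools
    import math
    from collections import Counter

    def update(state):
        # Toy deterministic dynamics: each node copies the other node.
        a, b = state
        return (b, a)

    def mutual_information(pairs):
        # I(X; Y) in bits, from an empirical joint distribution over (x, y) pairs.
        n = len(pairs)
        joint = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in joint.items())

    states = list(itertools.product([0, 1], repeat=2))  # uniform over past states

    # How much the whole system's past constrains its whole future...
    whole = mutual_information([(s, update(s)) for s in states])

    # ...versus how much each node's past constrains its own future.
    part_a = mutual_information([(s[0], update(s)[0]) for s in states])
    part_b = mutual_information([(s[1], update(s)[1]) for s in states])

    print(f"whole: {whole:.1f} bits, parts: {part_a + part_b:.1f} bits, "
          f"integration: {whole - part_a - part_b:.1f} bits")

For the swap dynamics above, the whole carries 2 bits about its future while each node alone carries 0, so the toy "integration" is 2 bits; change update to return the state unchanged and it drops to 0. That gap between whole and parts is the flavor of quantity such theories try to formalize.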
So do you allow for, you know, what is normally called a dualist perspective, that there's consciousness and there's the physical, and what we are experiencing is some kind of interaction or blending between them, but it would simply be wrong to imagine that consciousness could be solely explained by understanding the physical? Is that a solution that you could imagine? I'm open to that kind of view. And in philosophy, we sometimes talk about property dualism, because when you say dualism, a lot of the time people think about a soul, some non-physical entity that got attached to our body and is hanging out with our brain in interaction, and then continues living after the body dies. That's not the kind of thing I have in mind. The idea is rather that there could be fundamental properties of the universe beyond space and time and mass and charge, or whatever the latest fundamental physical theory says. If it turns out that existing properties don't explain consciousness, then we should be open to the idea that, hey, consciousness is itself fundamental.
And importantly, that there might be fundamental laws, I call them psychophysical laws, connecting physical processes and consciousness. And that needn't be unscientific or spooky. It's just one way things could go. Another way things could go is it could turn out there's some element of consciousness at the very basis of matter. It's the view that you mentioned, the view people call panpsychism, and that's extremely speculative, but it's a view I take seriously. If someone comes up with a scientific form of panpsychism, then I think we should take that seriously. And so the particles themselves would have potentially some kind of seed of inner experience. And when you put enough of them together in the right way, the aggregate yields the conscious experience. Now, some people think, oh, come on, this view is loopy or crazy. But for me, the biggest problem for this view is precisely that aggregation. How do you take a bunch of conscious particles and put them together and get the kind of unified conscious experience that I'm having right now? That's called the combination problem, and nobody has a good solution to it. But if somebody solves that problem, then that's instantly a contender for a theory of consciousness.
Well, I mean, maybe, but I think there are other problems with it as well. I think all the versions of this idea of panpsychism that I've encountered face the problem that not only is it not testable in itself, but it doesn't lead to testable predictions. And I think that doesn't mean it's wrong; it just means that it's very hard as a scientist to know what to do with a view like that. I would say panpsychism is a philosophical thesis. It's not itself a testable theory. But a specific panpsychist theory that came up with, say, some mathematical principles saying that under these conditions, with this kind of physical system, you get this kind of consciousness: specific panpsychist theories like that could be tested. Yes, but I haven't seen any like that. It's very early days. We don't have any good theories of consciousness yet. That's the number one thing to keep in mind.
Can you imagine, then, that one day we may all converge on an answer that, at least in physics, we seem to be satisfied with? Maybe we shouldn't. If you ask me, what do you mean by the mass of a particle? I'd actually tell you, functionally, what the mass does, how it responds to gravity, how it responds to forces. If you said to me, what do you mean by the electric charge of a particle? I'd kind of play the same game. I'd say, well, in an electric field, it will do this or that based upon the charge it has. But I would be unable to tell you what mass is and what charge is. There are primitive fundamentals that exist in the universe, and I'm willing to say, okay, they exist by fiat. I know they're there, and go forward.
Could it be that one day we simply say, consciousness? It's just this fundamental quality of reality, and it doesn't have a deeper explanation, and you take it as a given and you go forward? This is great because the Norwegian philosopher Heder Hasselmerck has called this the hard problem of matter. You say we don't know what consciousness is. We actually don't know what mass is. Physics tells us what mass does and the equations it's involved in. But what actually is mass? What is the intrinsic nature of mass, or of charge, or maybe even of space and time? And, yeah, philosophers and scientists argue about this. Is the universe just mathematical? Is it structural?
I mean, a lot of people, I think, want to say there is no intrinsic nature of mass. That's just a chimera. You're looking for what mass does; that's all there is. And so somebody could take that view for consciousness, too: all there is to consciousness is what it does. The trouble is, in the case of consciousness, what it does, that's just the easy problems, and it leaves out the central datum of subjective experience. If somebody finds a way to take subjective experience, which seems intrinsic, and just turn that into a problem about what consciousness does, then that might be an avenue to a solution. But so far, anytime anyone does that, which happens a lot, it just looks like a bait and switch. You've moved from talking about consciousness to talking about behavior or something else.
So, Anil, can I just ask you one question on the side, because I do want to get to this issue of AI systems. We're now in a realm where there are computational systems that are mimicking certain aspects of behavior. They're able to respond to certain prompts in a way that ordinarily we would have thought only an intelligent human being could do. And of course, the question comes to the fore: are these systems conscious? It's pretty clear they're not yet, but could they be conscious? And of course, it's a deep question, an important one, but how could we ever possibly know? I mean, this is another very hard problem, how we test for consciousness in things that are not us. We face this even with other human beings. It's often said that I only know for sure that I'm conscious; it's just an inference that you are, that any of you are. But it's a reasonably safe inference.
I guess you were wondering. But you would say that. You would say that. It's a pretty safe inference. Dave? Me too. Not so sure. But because we have so much else in common, we can basically say it would be very strange if it was only me that was conscious, given everything else that we have in common. The further we get away from the benchmark of an intact human being, the harder it gets. Even with human patients suffering brain injuries, it's already very difficult to know whether they're conscious, because whether they are or not can be dissociated from their behavior, from their ability to tell you that they're conscious.
And then the further we get, we have huge debates about non-human animals. There was a recent New York Declaration on Animal Consciousness, trying to just put the idea in people's minds that many non-human animals might be conscious. Vegan. Just saying. But go ahead. When it comes to computers and AI, it's so much harder. And I think here we're misled by our psychological biases. Now, we as humans have got a pretty terrible track record of withholding moral consideration from things that are not us. And part of the reason we do this is because they don't seem sufficiently similar to us in ways that we think matter. And the ways that we think matter tend to be things that we think make us special, like language, intelligence.
Of course, it's questionable how intelligent we are as a species, but we tend to elevate ourselves and think, okay, no language, no consciousness. Descartes did something like this many, many centuries ago. So we might make false negatives a lot. With AI, I think we're in almost exactly the opposite situation. We have these language models that exercise our biases. They speak to us. They seem to be intelligent in some way, in a way that's still easy to catch out, but something interesting is going on there. So, because they're similar to us in the ways that we elevate and that we tend to prioritize, we project qualities into them that they probably don't have, like thinking, understanding, and, of course, consciousness, whereas they're very different from us in other ways, and it's those other ways in which they're very different that might actually matter for consciousness.
And if we were seeking to build a conscious, let's just call it AI, just as a blanket term for something computational that we build, should we base it on trying to mimic the architecture of the brain? Or, again, is that just such a limited way of thinking about how the process of thinking might be generated? This is the one. Sorry. This is the one case we know about. So, I mean, it's true, when it comes to AI, AI systems are a long way from human brains. Any non-human system is some distance from the one case we know for sure about. So there is something to be said for looking at human-like AI, simply because it's going to be as similar to us as possible, but in a different substrate. So one idea that I like is the idea of gradually replacing parts of your brain, say, replacing biological neurons. Can you take us through this? Because I think it's a curious way of thinking about it. Yeah. Here we go.
We've got a brain. Yeah. Brain uploading. So, this is a drawing of the philosopher Susan Schneider, who's written about this, by Tim Peacock, the illustrator for my book, Reality+. Let's say we gradually replace the neurons in our brains with silicon chips that are as similar as possible. First you replace 1% of the brain, then 10% of the brain. I guess right here we're seeing about 50% of the brain replaced, and she's still saying, I'm still here. It's like, yeah, the silicon chips are doing the job just as well. Then we go all the way. She's sounding more and more like Siri. Right. We get all the way to 100%, and she says, yeah, I'm still here. You could do that. I could do that. And then at that very moment, maybe we would then have the first-person datum that we are conscious, although we are made of silicon, and that would be a kind of...
So would that experiment, that thought experiment, if it were successful in the real world, would that convince you? But I don't think it could be successful in the real world. I think it's another example of these nice thought experiments that we can help ourselves to. But actually, if you unpack what it's asking, it's very, very difficult to imagine anything like this could happen. I think that matters because it matters for the conclusions that we draw. We have this very nice idea that you gradually replace every neuron, every connection with a wire and a little silicon chip. But brains aren't really the kinds of things where you can do that.
Everything that the brain does is incredibly entrenched, intertwined. There are chemicals swishing around. Isn't that just complexity? Doesn't it just make it difficult? It does make it difficult, but it makes it difficult in a way that I think undermines the utility of thinking about these simple thought experiments. It's just not something that you could do.
I mean, very basically, I think the brain is the kind of system where you can't cleanly separate what it does from what it is. Sometimes a neuron fires not to communicate with other neurons, but to get rid of metabolic waste products. So if you're going to get a model neuron that does that, you have to not only replace all the neurons and connections, but all the stuff, all the metabolism that's going on underneath as well. And before you know it, it's no longer possible to build it out of silicon, just as it's not possible to build a replica of the Brooklyn Bridge out of string cheese. You just can't do it. It doesn't work like that.
And so you made reference to human exceptionalism as perhaps misleading us, or perhaps giving us guidance. Right. It really depends on the problem and how you apply the idea. But can you imagine, and maybe I should put this to David first, because you've already said you don't even think it's possible, can you imagine that human consciousness is just one example of a huge spectrum of conscious-like experiences that can be instantiated in other systems, systems that could be artificial or organic, and that what we consider this, you know, wondrous quality of being a human being is actually just a pedestrian example of something that can take on so many other forms? I think that's very likely, right?
I mean, the history of science and philosophy over the centuries has repeatedly shown us we're not at the center of things, and we're not at the top of every mountain, we're not at the center of the universe. We're not separate from all other animals created by God. And human consciousness is one way of being conscious, and it's one little region in a vast space of possible ways of being conscious. Many non-human animals will be conscious in different ways, and I do think consciousness is very likely something that's a material phenomenon. So it's very plausible to me that it could be implemented in something else, but maybe not in computers.
Another example, and one where I worry a lot more than about GPT-5 suddenly really feeling things, is emerging neurotechnologies, things like brain organoids. These are collections of human brain cells grown from stem cells in dishes. And they don't exercise any of our biases, because they don't do anything. They just kind of sit there in a dish. But they're made out of the same stuff, and they self-organize and they display electrical activity. So immediately, a whole area of uncertainty goes away. Frankly, we just don't know whether it matters what we're made out of. And if you take that uncertainty out, for me it's much more plausible that in ten years we will have grown conscious systems in the lab than that they will have come out of the next generation of OpenAI's chatbots.
I see. And so, David, final question. Whether we grow new conscious systems in the lab in some glorified petri dish, or we are able to create them at OpenAI or wherever it happens, should we be thinking about the ethical, moral side of this? I mean, if there's this conscious being that maybe can't even communicate its conscious state, do we worry about that? I think, absolutely. I mean, consciousness, many people think, including me, is kind of the gateway to the circle of morality, the circle of beings that we care about. The moment you acknowledge that an animal, I mean, there's a debate about whether fish, let's say, are conscious, but the moment you acknowledge that a fish is conscious and can feel pain and suffer, then suddenly a fish is a being that we should at least take into consideration in our moral calculus. If you don't take conscious beings into consideration, there's the danger of moral catastrophe.
So I think this very much applies to AI, and even to the emerging systems, like the large language models of the GPT family. We don't know for sure whether they're conscious. There are various reasons for thinking there are potential obstacles to consciousness they haven't overcome yet. But I think it's entirely possible that in the next ten years or so, we will develop language models that overcome those obstacles and show every sign of being conscious. We've already got language models that are very close to passing the traditional Turing test, being indistinguishable from human beings in conversation. Some personas generated by GPT-4 have passed five-minute Turing tests. In the past, we would have said that's evidence of consciousness. Now, maybe it's not; maybe for various reasons we want to resist that. That question, though, is all-important, because if an AI system is conscious like a human, and we continue to treat it simply like a tool, such that we don't even have to take it into account in what we do, we are in danger of moral catastrophe.
Well, I'll simply say that all my prompts to ChatGPT are incredibly respectful. So, anyway, a great conversation. Thank you. Thanks, Brian. Thank you so much. Thank you very much. Cheers.
Consciousness, Artificial Intelligence, Philosophy, Neuroscience, Science, Technology, World Science Festival