ENSPIRING.ai: Code to Cure - AI and the Future of Health

The video explores how AI could revolutionize biology much as calculus transformed physics. By analyzing vast amounts of data, AI holds the potential to uncover hidden patterns and connections in biological systems, promising advances at every level, from molecular mechanisms to ecosystems. Notable developments, such as AI applied to genomics and protein folding, have already marked significant strides in biology.

The application of AI in biology could extend to areas like medicine, agriculture, and environmental science, providing better predictions and interventions. With AI as a potential game-changer for biological research, it could foster personalized medicine, anticipate therapeutic outcomes, and reshape our approach to human health challenges. However, its effectiveness depends on the availability of large data sets and cross-disciplinary collaboration among scientists.

Main takeaways from the video:

💡
AI is expected to be as transformative for biology as calculus was for physics.
💡
Machine learning algorithms enhance our understanding of genetics, protein folding, and potential drug development.
💡
Critical challenges include ensuring ethical use, improving data accessibility, and fostering collaboration between traditional and emerging scientific fields.

Key Vocabularies and Common Phrases:

1. revolutionized [ˌrɛvəˈluːʃənaɪzd] - (verb) - Caused a complete and dramatic change. - Synonyms: (transformed, altered, changed)

The invention and the development of calculus, really, I think, as many of you know who have studied physics, you know that it absolutely revolutionized physics.

2. analogous [əˈnæləgəs] - (adjective) - Comparable in certain respects, typically in a way that makes clearer the nature of the things compared. - Synonyms: (similar, comparable, parallel)

AI may well be an analogous game changer for biology.

3. foundational [faʊnˈdeɪʃənl] - (adjective) - Forming a necessary base or core; fundamental. - Synonyms: (basic, fundamental, underlying)

It provided, and, of course, it continues to provide an utterly foundational tool for understanding how physical systems evolve.

4. profound [prəˈfaʊnd] - (adjective) - Very great or intense. - Synonyms: (deep, intense, great)

AI has made remarkable progress in the longstanding and profoundly important puzzle of protein folding.

5. arcane [ɑːrˈkeɪn] - (adjective) - Understood by few; mysterious or secret. - Synonyms: (mysterious, secret, obscure)

And as you get into the more and more arcane and sophisticated, there will only be first cut approximations that give you only a certain slice of what's going on.

6. exponential [ˌɛkspəˈnɛnʃəl] - (adjective) - Rising or expanding at a rapid rate. - Synonyms: (rapid, expansive, escalating)

You look at the primary driver of that exponential curve, it's the availability of large amounts of high quality data that the machine is being trained on.

7. causality [kɔːˈzæləti] - (noun) - The relation between cause and effect. - Synonyms: (causation, cause and effect)

So that gives you the starting point of causality, because causality is fundamentally what you need to understand in order to make interventions in a person. Observational data is so misleading.

8. trajectory [trəˈdʒɛktəri] - (noun) - The path followed by a projectile flying or an object moving under force. - Synonyms: (path, route, course)

Just a moment on your own trajectory, right? Because you've had a number of interesting career zigs and zags.

9. perturbations [ˌpɜːrtərˈbeɪʃənz] - (noun) - Disturbances or deviations from a normal path. - Synonyms: (disturbances, disruptions, deviations)

And we can edit that cell. We can introduce perturbations and say, well, what if we make a Daphne neuron?

10. reductionist [rɪˈdʌkʃənɪst] - (adjective) - Relating to the practice of analyzing and describing a complex phenomenon in terms of its simple or fundamental constituents. - Synonyms: (simplistic, minimalistic, elementary)

A cell is a very reductionist thing.

Code to Cure - AI and the Future of Health

The invention and the development of calculus, really, I think, as many of you know who have studied physics, you know that it absolutely revolutionized physics because that particular mathematical framework it provided, and, of course, it continues to provide an utterly foundational tool for understanding how physical systems evolve. I mean, anybody who has studied any physics at all knows that it is virtually unfathomable to imagine that our understanding of anything about the laws of motion could have gotten off the ground if Leibniz and Newton had not given us the calculus. Now, in this conversation, we are going to discuss how AI may well be an analogous game changer for biology. I mean, with its unparalleled ability to analyze and interpret vast amounts of data, find hidden patterns and deeply obscured connections, AI offers the potential to revolutionize our understanding of life at every level. Right? From molecular mechanisms to ecosystem dynamics.

Now, we have already gotten a taste of AI's impact on biology through several groundbreaking advancements, some of which we will discuss. Because machine learning algorithms have revealed patterns in genomic data, leading to new discoveries in genetics and personalized medicine, it has accelerated the identification of potential therapeutic compounds by predicting how molecules will interact with biological targets. And, of course, as no doubt many of you know, AI has made remarkable progress in the longstanding and profoundly important puzzle of protein folding.

Now, one can imagine that as AI continues to evolve, it will reshape biological research with profound implications for the future of science and medicine. And our guest for this conversation has been right at the nexus of all of these developments. So I am so pleased to introduce Daphne Koller, who is CEO and founder of insitro, a machine learning driven drug discovery and development company. She was the co-founder, co-CEO and president of the online education platform Coursera and is also a MacArthur fellow. Thank you so much for joining us.

Thank you. So, if we just sort of start with the big picture, and then we'll get into some details, where do you see AI and biology going? I mean, this analogy, which, frankly, I think comes from you, actually. Eric Schmidt. I think Eric Schmidt. Okay, fantastic. So I knew it was in some conversation that was out there in the ether, but this notion of thinking about AI as sort of the calculus of physics, and taking that analogy to biology, is that a good analogy? No, I think it is an incredibly powerful analogy because I think, as you articulated in the introductory comment, we would not have been able to even start to put a framework around physics without that underpinning of mathematics. And in biology, we've struggled because biology is so complicated, so multifaceted.

There's so much going on at different levels of biological scales, interplay between different entities. I mean, it's very, very complex, far beyond what certainly the human mind can encompass or our existing mathematical tools. And the beauty of AI is that I think it will actually give us that sound framework that will allow us to make predictions. Because if you think what calculus does to physics, it basically says, if you set up the experiment this way, the following things will happen. And it is, by and large, the first cut approximation, a pretty reasonable prediction. We don't have any such predictive ability in biology. What we have are, in most cases, at best, like a qualitative kind of descriptive narrative about what's kind of happening. But in a new experiment, what is likely to happen in that new experiment? I mean, people just largely toss up their hands and you just got to do it to find out. And I think we will be in a position where the AI will, over time, be able to get to better and better predictions about what will happen in biological systems.

I think that will be hugely impactful. It's going to be impactful in human health. It's going to be impactful in the environment, in agriculture. I mean, so many aspects of our life depend on the interplay between, you know, between biological systems. And so we really need to have that ability to make better predictions. Now, when it comes to calculus and physics, the nice thing is you have this mathematical tool, and when you understand how to use that tool at one and the same time, you can also see how that tool is working. Right. There's nothing that's mysterious within the framework of calculus.

But if AI is sort of playing that role for biology, everybody sort of marvels at the fact that we don't really know what's happening in the innards of the AI system. Yet the output, of course, may be something deep and profound. Does that mean that our level of understanding may fundamentally have this vagueness to it because we can't see what's happening inside the system? So first I would question, what is the number of people on this earth who can truly understand the mathematical calculus models of advanced physical systems? So I would just start with that. It's a privileged few. Now, you could ask, will that level of understanding be even lower in the context of AI systems of biology? And the answer is probably yes, but you can get glimpses, and people have been working on explainable AI, the different ways of understanding what's going on.

But to my mind, that is less important than the ability to trust that the predictions that are being made are, in fact, likely to be relevant in the problem that we are applying the system to. And so when people talk to me about the path to AI safety being understandable AI, I think that is not useless, but probably not the most important thing. The most important thing is to actually do the experiment. With your analogy to the number of people, you know, any physicist who's been trained can see the workings of calculus in the basic laws of motion. But as I understand it, nobody trained in any way, shape or form will be able to gain a narrative for the output of an AI system, because what's happening in there is just too complex. Maybe for biology, that's not such a bad thing, because, as you say, it's the output that really is what draws our attention.

So I think traditional mechanics is something that is probably on the simplest end of the spectrum, even in physics. And then you can go up to more sophisticated quantum phenomena, and the number of people who understand that shrinks dramatically. In biology, I think it'll be the same thing. There will be pieces of it where you could kind of get a sense of what the AI is doing and how it's making certain predictions. And as you get into the more and more arcane and sophisticated, there will only be first cut approximations that give you only a certain slice of what's going on.

So I'd like to turn to a couple examples, concrete examples, a handful, actually, in a moment. But first, if you don't mind, just a moment on your own trajectory, right? Because you've had a number of interesting career zigs and zags. So is there a unifying principle that drives you, or how would you describe where you've been and where you're going? Well, you know, Steve Jobs said that life can only be interpreted when you look at it in reverse. And so I can try and look at mine in reverse, at least in terms of where I am. So I started my career as an academic. I thought I would retire as an academic. Both my parents were academics. I call myself now an OG AI person because I've been working in the space for long before it was considered to be a space.

When I got my PhD in AI in the early nineties, you couldn't. It just wasn't a respectable discipline. You couldn't say you were doing AI. You could say you were doing cognitive computing or statistical learning. You couldn't say you were doing AI. And so I came back to Stanford. I was the first new AI hire, I would say, into that department, the one who didn't do old style logic based systems but more of the modern probability and machine learning type stuff. And my career journey, if you had to describe the arc, it's an increasing focus on wanting to make an actual direct impact in the world.

So I was very conceptual when I started, and I moved into more and more applied sort of use cases, whether it was applied machine learning, first of all to robotics, then to biology, medicine. And Coursera was a digression. It was not related to my research agenda at all. It was a passion project that I always had about using technology to make education better. And when we put out, at the end of 2011, these three courses out there for anyone in the world to take for free, and we had 100,000 people in each one of those courses, it was like, oh, my God, that is more impact in a month than I could have by writing papers and publishing them, hoping someone does something. And so that led to the Coursera journey, and I spent five years there.

And if you look at the timeline, the AI revolution began in 2012, just as I left Stanford. And so no correlation though, right? No, I don't think so, but I kind of missed the beginning. And so in 2016, when I had been at Coursera for five years, I picked my head up over the trenches and said, oh, my goodness, AI is changing the world, and I believe that we are on this exponential curve. And that was apparent to me in 2016. And that's more or less when you went back? That's when I went back to doing AI for biology and healthcare. And the reason I did that is because I said, AI is going to transform everything. It already is starting to happen, but it's not having much of an impact in life sciences because there's not a lot of people who speak both languages, and I do.

And so let's look at some of those impacts. And I know there are a number that you find exciting and have been deeply involved with, one that has to do with collecting and generating data at scale, which, of course, is central to all these AI systems. So can you tell us a bit about that? So, first of all, let me come back for a moment to that exponential curve, because I think it's really important. Exponential curves are these deceptively misleading things, because at the beginning of the exponential curve, it seems awfully flat. It's going to be decades, centuries before anything happens. And then all of a sudden, the exponential curve starts to go up. It's like, oh, my God, we're on this exponential curve. But we've been on that for a while.

And when you look at the primary driver of that exponential curve, it's the availability of large amounts of high quality data that the machine is being trained on. In biology, we're, I would say, five to seven years behind where we see the general purpose, large language models today, because the amount of relevant biological data is relatively limited. However, one of the things that drew me back to this field is the realization that not only were we on a tidal wave of innovation on the AI side, we're also on a comparable tidal wave of innovation in generating biological data at scale. And that includes things like being able to take a skin cell from you or me and then return it back to stem cell status, where it could turn into any cell in our body, and all of a sudden, I can have a Daphne neuron, and we can edit that cell.

We can introduce perturbations and say, well, what if we make a Daphne neuron that's just like my genetics, but with a disease causing mutation? What happens to that cell? At that point, we can image the cell with microscopy at super resolution, so you can start to see individual proteins, molecules within the cell. All of these are like tools that life scientists and bioengineers have put together that allow us to generate unbelievable amounts of single cell biological data that, for the first time, you can pair with AI. Because if you give it to a person, their eyes, like, glaze over. You put the two together, and all of a sudden we can start to make causal inferences about, if you do this to a cell, the following things happen, and it leads to disease in the following way, because you can read it from the cell and you can start to interrogate it using the AI.

And when you say at scale, there are many axes on which that scale could happen. So do you imagine this kind of an approach involving a huge population, or is it the data that you can extract from even a single neuron at such depth and scale that it gives you an enormous amount of depth? Is it both? So the answer is, it has to be both, I think, because you can certainly have a lot more flexibility in generating cellular data. You can edit it to introduce mutations, you can put drugs on it to see what happens. But ultimately, what you care about when you're making medicines is not curing cells. You have to cure people.

And so to close what is called the translatability gap, which is taking what we can do in the lab and make sure that it applies when you do it to a person, you start to need population scale data around human biology and human clinical status. So what state is that in? I mean, we have this other visual here, which I think is giving one a sense of generating cells in the lab. This is cells in motion. It's one of my favorite things. You could actually see them and look at longitudinal progressions of the cells as they mature, as they are perturbed. And so that gives you the starting point of causality, because causality is fundamentally what you need to understand in order to make interventions in a person. Observational data is so misleading.

So many things have gone awry in human health by looking at observational correlations and saying, oh, these two kind of go together. Well, these two go together because, you know, there's some third thing that is completely unrelated that is driving both of them. And so you're not going to get to actual medicines that work. Here, by watching cells in action following perturbations, you can start to get a sense for what happens as cause and effect. And are there any concrete insights today that these approaches have already resulted in? Oh, there's many. I mean, certainly in the context of core cell biology, we now have a much better understanding. And this, of course, all started with the human genome project, so that you could actually have that as the beginning of what do I perturb so as to make a difference? But we now understand so many biologies, including ones that are relevant to human disease, in really exquisite ways for the first time.

Now, another great success story. I mentioned it at the outset, but I would love to just spend a moment or two diving into it. It is the protein folding problem, which I think is the one that many people have heard of. And if you could just give us a sense: why is that a vital problem? Why was it so hard? And how is it that AI has allowed us to make progress where decades of more traditional work just couldn't make any heads or tails of the situation? It's this beautiful example of an AI success story, and hats off to the people at DeepMind who put this in motion. I mean, it's a really hard problem because we have had a very simplistic, physics based model of how proteins fold. That was driven by our human understanding of the underlying physical system, which is, frankly, quite coarse grained.

And so those models were used, and they really plateaued. For about five years prior to the DeepMind launch of AlphaFold, there was basically no improvement in performance. The reason why this is such a beautiful AI success story is because protein folding is one of those few places where, at the time that it was launched, there was sufficient data to actually train the models. The nice thing about proteins is that there's a lot of them. There are a lot of them across species. They fold in the same way, regardless of whether they're in a yeast or in a bacteria or in a person.

And so there was a lot of training data of sequence to structure, and they were able to take that and introduce a little bit of sort of intuition from the physics and also from evolution into the model, what we call, in informal jargon, inductive bias. That allowed the model to take maximum advantage of the data, because while it's nice to have hundreds of thousands of examples, it's not hundreds of millions. And so that combination of some insight about how the process actually works alongside a lot of data was really the winning combination in that particular case. And by doing that, they were able to basically learn an energy function that is quite different than the energy functions that people had tried to hand construct.

And what is also really important is that because it is a learned model, it is much more expandable. And so that is what has driven all of the developments that have happened since, of co-folding with another protein or with a ligand. Really, because once you have a data driven approach, you just give it the right kind of data and it just knows how to suck that in and put that into the same framework. Whereas if you'd had to do the same thing with a person, they would have said, oh, wait, that's a whole different problem. I now have to go back and sit and think about how to design the model from scratch. For AI, it's very flexible.

And so has this had an impact on people's thinking about the role of the more traditional idea of modeling a system, as opposed to just using enormous amounts of data to try to extract patterns? I think it's showed us that you benefit tremendously from large amounts of data, and that we should be seeking places in biology where those data exist, because those can really push us beyond the boundaries in significant ways. But I think it also showed us on the other side, that the numbers that we currently have are not big enough to do it without any inductive bias whatsoever. And so that combination today, I think, is where the sweet spot is.

And what about this other example of RNA sequencing? That's also an arena where there's been significant progress. So that's a really exciting, again, bioengineering development, where you can take a single cell and basically sequence the RNA in the cell. So everybody now knows about RNA because of the vaccines. But RNA is this intermediate between DNA, which is kind of like a printout, if you will, of a program. And this is like the beginning of executing that program. And if you can look at the RNA in a single cell, it tells you what the cell is actually doing, which parts of the genome are active.

And so that starts to tell you which processes the cell is engaging in, and also which of those are potentially disrupted by having some, you know, some mistake in your genome or some cellular exposure that makes the cells sick. And so you have that ability to collect the activity profile of a cell at the single cell level, and you can do it for hundreds of millions of cells in a single experiment. And now you're talking real numbers. And especially because of some of the additional advancements I mentioned earlier, this notion of CRISPR, which some people have heard of as a therapeutic, which is you introduce CRISPR into the body, and it starts editing your cells to correct a genetic mistake that makes you sick, but it's actually at least as valuable as a research tool. So there are these things called pooled CRISPR screens, where you take these hundreds of millions of cells, and each one of them gets a different intervention.

Each one of them gets a different change in its genome. So now you have a beautifully controlled experiment, which you would never be able to do in people, because we're all so different in so many ways. These are cells that are identical, except for one thing. And so you can start to ask, how much of a difference does that thing make to the very complex cascade of events that happens? And now we can start to get at that causality at the single cell level with enough data to feed the machine learning beast.

And so does this sort of start to build, I don't know what the right word would be, but like a ChatGPT for cellular systems, per se? I mean, one that's sort of got enough data that it stands alone, but it's so specialized that it's going to give you the kinds of answers that are relevant in the cellular domain? I think it is, and we're really excited about that. I mean, we should understand that this is a journey, because even within our human body, there's thousands of different cell types. And to do this experiment, we need to do this again and again across at least multiple cell types, to the point that we might be able to extrapolate to ones that we haven't done the experiment in. No one has gotten to that stage yet, but we're at the cusp.

Remember, I told you that we're five to seven years behind in the data for biology. This is because we have that capability to start printing data at this massive scale. And I can do this for neurons, and I can do this for liver cells and for heart cells, and for fat cells and for muscle cells. And over time, we'll begin to have a notion of, if I do this thing to this type of cell, the following things will get disrupted, and then I can start making predictions about what disease does to a human.

And what do you think the time scale is for putting this into full practice? I mean, to deal with some of the most degenerative diseases that we face: Parkinson's, Alzheimer's, heart disease. Every part of the body is subject to entropic decay, to use the physicist's way of describing it. So are we heading in a place where in our lifetime, you may put in the right prompt, poetically speaking, and get out some kind of prediction for what we need to do? I think that within our lifetime, we should be able to make much better, I mean, pretty reasonable predictions about what an intervention will do, at least to a reasonable set of cell types.

Now, I think it's important, however, to remember, and this is coming up, I know, to a topic we're going to discuss, that this is at the level of a cell, and a cell is a very reductionist thing. I mean, even a single organ has multiple cells of multiple types. And so that's where you need to start thinking about it as coming from both sides. You need to build more complex cellular systems, and there's a lot of work going on on multicellular cultures and organoids and things like that, so that you can actually start to investigate in a dish what happens when you have a less reductionist system, and at the same time, to close the translatability gap, you need to understand, okay, fine, but we're not going to have whole humans in a dish.

So how are we, what does this do in an actual human? So that my predictions, from what I see in the dish, are actually relevant to us as a whole system. Exactly. And so what tools do you imagine using to head at the more aggregate type of approach? So, this is a place where I think we're seeing, similarly, a growth in the amount of data that is available about human individuals, things that used to be incredibly expensive and only available in a small number of patients, like MRIs. For example, as we know now, the new Alzheimer's drug requires an MRI to be done before the drug is even prescribed. And so all of a sudden, we have this wealth of MRI data that is now available to us, and if we start putting it together with other covariates like human genetics or human genomics, we can start to see the connection between things that we could intervene in.

Because you can't intervene in an MRI. An MRI is the final readout, but I can intervene in a molecule or in a gene. So if we can start to understand, to measure, when you have this genetic makeup, this is what happens to your MRI. You can start to sort of say, okay, this is the node, this is the hub at which an intervention might make you have slower neurodegeneration, say, if you have Alzheimer's. And I think that's really exciting.

And what's the profile of the person who sort of works in this space? I mean, we're talking about large data. We're talking about, obviously, computer science and AI, in terms of blending those two together. But then biological expertise, one would think, still has a role to play in really trying to understand how to put this all together. So is a new breed of scientific mind being trained to do this, or do you just take the traditional disciplines and bring them all together? So, first of all, let me say that even taking the traditional disciplines and bringing them together is an incredibly hard problem. Yes, of course.

People are trained in very different ways. They have different jargons, different expertise, and different ways of thinking about the world. So, for example, if you have a bunch of points in a graph and you're an engineer, you're looking for the simplest pattern that explains that bunch of points so that you can make good, reliable predictions. If you're a scientist, often you look for the outliers, because the outliers are like the things that don't make sense, and those might be the next new big idea. And so bringing those people together and making them communicate is a very complex social engineering problem.

The people that you just described, the ones who are what I call bilingual, who actually are able to think in both ways and talk both languages, are an invaluable component of that mix, because they can play the translator and they can bring those two points of view together in ways that, if you just take the two disciplines in a room, even if they're incredibly well meaning, often there's just this impedance mismatch that makes it hard. And I would say that has been one of my biggest tasks as I build insitro, which, by the way, for those of you who are aficionados of Latin, is the merger of in silico, which means in the computer, and in vitro, which means in the lab. And so it actually is part of the ethos of the company. And so bringing those groups of individuals together to the point that they work effectively has been incredibly important.

And one of the things that we've seen is that when you do that and they really do work together, they not only come up with better solutions, they come up with better problems that neither group would have thought of on their own. And as someone who spent five years in the education space, do you see the opportunity or the need for remaking education so that we can maximize our human capacity to leverage these tools? I think in the world of AI, which is the world that we're now emerging into, educating people differently becomes of paramount importance. Because I think the need to memorize stuff went the way of the dodo when we had Google, so we didn't need to memorize stuff. We needed to start putting patterns together. That too is something that the AI is now going to be able to do better than we do. Where I think we really need to train people is to think in a structured way about what are some really big, important, gnarly challenges that we need to deal with, and then break them down into pieces that the AI is able to help, in many cases, solve better than the person.

But the structuring of the problem, the envisioning of it, and the breaking it down into pieces, I don't think the AI is there yet. So there is still a place for us? Yes, I think there is, and I think there should remain such. But we need to make sure we're training our kids to think in the right way.

So if you were to look at the next ten years, I mean, obviously you're in a setting of a company, and you've also been in the setting of academia. So maybe even blending those two perspectives, ten years from now, where would you envision all of this going? I mean, is there going to be some encyclopedic database that will be the basis of the AI systems that we use to promote human health? Or what structure would you implement? You know, I think there is. If I had a magic wand, I would certainly want to create an encyclopedic database that allows AI algorithms to be trained on biomedical data so that it can achieve its full potential.

I think the barriers are more societal than they are technological: people are still in the mode of hoarding their data and don't want to share. But I think that the opportunities here are hugely significant. We oversimplify by giving things labels that are based on very coarse-grained symptomology that is frankly not relevant to the underlying biology. Alzheimer's is not one disease, diabetes is not one disease. And yet we give one drug to everybody, and then it works for 20% of patients and we say, okay, well, that's pretty good. Well, that's why drugs cost so much and why many people remain sick even in today's world.

So the ability to really disentangle that complexity, that heterogeneity, and find a fit-for-purpose, high-effect-size intervention for each subtype, for each individual, in the long run has got to rely on AI and large data. And I think that is the opportunity to transform human health. I think there is an equally large opportunity if you want to sort of look at the really big picture. I mean, I don't think it's lost on anyone that the problem of our day is climate change and its effect on human health, but also on the environment as a whole. I don't think we're going to be able to address that without the help of AI to help us make crops that are more sustainable, plants that help with carbon capture.

I don't think we're going to be able to do that without tools that operate on biological systems, and I think that we're not going to be able to build those tools without AI. So one final question, one that I've asked a number of people in conversations analogous to this, because I'd like to get a spectrum of thoughts: the question of AI and creativity. You don't have to be convinced, though some people do, about the role of creativity in science. Let's take that as a given. Can AI systems, as we currently formulate them, and things may be different five or ten years from now, where there's this big data set and there are these interesting new combinations of things that we put in.

Can that be deemed a creative process, or have you somehow turned it into some rote algorithmic thing that at the end of the day is just the mundane reassembling of things from the past? So I think that creativity, even in humans, is in most cases a process of just taking pieces that exist and putting them together in new ways that hadn't been done before. That's why I personally have felt, for example, that oftentimes some of the most creative things lie at the boundary between disciplines, because you take pieces from disciplines that haven't been put together, and you're able to create something that really didn't exist before. I think that the machines are getting to the point that they are able to put together these different pieces from disparate places and create something that didn't exist before.

Where I think there is still a gap is in the judgment that says, yes, this is a good idea. That's still us right now. I think that's still us. And so pointing this in the right direction, being the critic, and giving it a little bit of structure about where to go, I think, is still absolutely something that a person can do. But can it create something that didn't exist before? I think we're kind of there. We're very close to it, and I think that's a very exciting time, because it gives us as people the opportunity to think about where we channel that creativity and how we use that to make the world dramatically better than where it is today. So it's a very optimistic vision.

I don't want to end on a downer, but it would be remiss if I didn't ask you: where is there fear of things going off the rails in a way that we would find unpleasant? So, I mean, we're already seeing things going off the rails. People are really good at finding tools that could be used for good and similarly using them for evil. Right. We've seen that with every other tool that has been invented. And sometimes the bad use is inadvertent, and sometimes it is evil actors doing their thing.

I think it is something that we absolutely need to be watchful for. And in terms of the evil actors doing their thing, it is going to be an arms race. And we're already seeing that with misinformation. We're seeing that with evil at scale, in terms of reaching out to vulnerable populations and causing them to do things that they wouldn't otherwise do. I mean, there's a lot of bad uses for this technology, and it's somewhat similar, although at a different scale, to cybersecurity, where that's been an arms race all along. And we don't need to go back to the nuclear arms race as well. And so this is something that we do need to be careful of.

I will say that a lot of the doomsday scenarios that are going around are more along the lines of, you know, the Terminator scenario: that these machines will somehow develop consciousness and become self-willed. Well, I don't even think that we are self-willed, so that's a whole other discussion. So that, to me, would be maybe the less worrisome scenario, versus making sure that those tools are not deliberately or inadvertently used for evil purposes.

Well, this is fascinating. Best of luck. Thank you. Hugely important research. Thank you. Thank you.

Artificial Intelligence, Biology, Innovation, Medicine, Health, Data Science, World Science Festival