ENSPIRING.ai: AI Revolution: The Global Impact and Vision

In a compelling conversation between OpenAI CEO Sam Altman and Amandeep Gill, UN envoy on technology, the future trajectory of artificial intelligence (AI) is explored. Sam Altman discusses the importance of making AI technologies widely accessible and affordable to ensure global benefits, particularly for the developing world. Amandeep Gill emphasizes the need for inclusive Governance and collaboration among nations, highlighting the recently adopted global digital compact aimed at improving digital equity worldwide.

The discourse further delves into the advancements of AI models, illustrating tremendous strides in reducing costs while increasing capabilities. Both speakers agree on the need for democratizing AI applications across various fields to stimulate growth and innovation. The conversation also touches on narrow AI applications, particularly in healthcare, and the potential of AI to enhance scientific discoveries. Altman expresses optimism about the capability of AI to make significant contributions to knowledge, despite existing challenges and the need for ongoing Governance.

Main takeaways from the video:

💡 AI technologies are rapidly advancing, but equitable access and Governance are crucial for global benefit.
💡 Collaborative efforts are needed to enhance data sharing and model development, especially in the Global South.
💡 Continued research and creativity are essential to unlock AI's potential fully; successful integration with human work processes remains a challenge.

Key Vocabularies and Common Phrases:

1. Inequality [ˌɪnɪˈkwɑːləti] - (n.) - A situation where there is an unequal distribution of resources or opportunities.

You know, we were having this whole conversation around how do we make sure that this technology benefits more people, that it is global, that it reduces Inequality, not expands it.

2. Governance [ˈɡʌvərnəns] - (n.) - The action or manner of governing a state, organization, or people.

I think the fundamental step is to make the Governance of AI more inclusive.

3. Paradigm [ˈpærəˌdaɪm] - (n.) - A model or pattern for something that may be copied.

So when we switch from the connectivity Paradigm, the supply side Paradigm, to having a demand side push...

4. Democratization [dɪˌmɒkrətaɪˈzeɪʃən] - (n.) - The action of making something accessible to everyone.

What we need to see is Democratization of applications, Democratization of access, and also model development at scale...

5. Inductive [ɪnˈdʌktɪv] - (adj.) - Characterized by the inference of general laws from particular instances.

It's going to string together different pieces in a very Inductive fashion and come to something.

6. Frontier [frʌnˈtɪr] - (n.) - The extreme limit of understanding or achievement in a particular area.

The most important way that the world gets better over time is we push the scientific Frontier forward.

7. Anthropomorphize [ˌænˌθrɒpəˈmɔːrfaɪz] - (v.) - To attribute human characteristics to a non-human entity.

And it's always tempting to Anthropomorphize, but it has something like reasoning ability to do this...

8. Compute [kəmˈpjuːt] - (n.) - Computing power or processing resources, especially those used to train and run AI models.

The equivalent of a model that has, let's say, ten x, maybe more, effective Compute.

9. Mediate [ˈmiːdieɪt] - (v.) - To intervene in a dispute to bring about an agreement.

But the reasoning ability that comes from these general purpose models and the incredible ability of people to use this tool for very diverse use cases...

10. Public infrastructure [ˈpʌblɪk ˈɪnfrəˌstrʌktʃər] - (n.) - The public systems and services that are necessary for an economy to function.

We've seen this with digital Public infrastructure.

AI Revolution: The Global Impact and Vision

So we were just at the State Department event, maybe you were as well. And you know, we were having this whole conversation around how do we make sure that this technology benefits more people, that it is global, that it reduces Inequality, not expands it. And it was great to see so many leaders from companies out there making pledges. And I think it's a real challenge, in that my sense is that without intervention, that's not the way that technology would inherently spread its benefits; it can, but it's not predetermined that it will.

What needs to be done, from each of your perspectives, to make sure that all of the planet benefits, and not just certain countries, not just certain people, not just certain slices of the economy? I think you remember this: nothing about us without us. I think the fundamental step is to make the Governance of AI more inclusive. Bring the 118 countries that are not part of the seven leading initiatives today into those discussions. And by Governance, I don't mean regulation; that's the domain of governments. By Governance I mean steering and guidance. How can we shape the development and the deployment of technology so that its opportunities are spread, the benefits are shared, and its risks are not loaded onto certain vulnerable populations or geographies? I think that's fundamentally important.

And we took a great step yesterday with the adoption of the global digital compact, the first universal framework on international governance. It does strike me that making those decisions, and we'll get back to Governance, is a challenge, because the more people you have in the room, the better represented everyone is, but the harder it gets. So I want to get into some of that. I agree with all of that, and the thing I would add on top is: make it as cheap and abundant as possible.

I think, you know, intelligence too cheap to meter is a bit of a hyperbolic statement, but it's a nice aspiration. And if we can make this, of course with input and Governance from all the people that will be affected, but also just super widely accessible and, you know, abundant to anybody who wants to use it for anything, I think that will disproportionately benefit the developing world. And it doesn't feel like that's the moment we're in yet. It does feel like right now the technology is somewhat expensive. It comes down pretty quickly, but my sense is the benefits at the moment are not necessarily being spread equally. Or am I mistaken?

Two years ago, a little bit more than two years ago, the best AI in the world was GPT-3, and it was $15 per million tokens. Today, two years later, 26 months later, something like that, GPT-4o mini, a much better model, is fifteen cents per million tokens. So that's a reduction in cost by a factor of 100 for a much better model, the equivalent of a model that has, let's say, ten x, maybe more, effective Compute. We used to get excited about Moore's law doubling the number of transistors every 18 months. This is a rate far, far beyond that. And, you know, fifteen cents to generate a million reasonably helpful tokens, almost words. I don't know, and it keeps getting better.
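
To make the arithmetic in that comparison concrete, here is a minimal sketch in Python using only the figures quoted above (the "much better model" capability multiplier is left out, since the speakers only estimate it loosely):

```python
import math

# Figures quoted in the conversation, not independently verified here:
price_gpt3 = 15.00       # USD per million tokens, GPT-3, ~26 months earlier
price_gpt4o_mini = 0.15  # USD per million tokens, GPT-4o mini
months = 26              # elapsed time Altman cites

cost_drop = price_gpt3 / price_gpt4o_mini      # factor of 100
halving_time = months / math.log2(cost_drop)   # months per 2x price cut

print(f"Cost reduction: {cost_drop:.0f}x in {months} months")
print(f"Implied price halving every {halving_time:.1f} months "
      f"(Moore's law: one doubling every ~18 months)")
```

On these numbers the price halves roughly every four months, which is why Altman calls it a rate far beyond Moore's law.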

What do you think? Is cost the main barrier? You talked about the importance of Governance, and we'll come back to that. But from a pure adoption standpoint, is cost enough? Is it about skills? Is it about languages? I think cost is just one piece in terms of adoption and the spread and speed of adoption. We need to create the right use cases, and we are very far from the kind of use cases that will exponentially drive growth.

So we may be seeing exponential growth, or an exponential drop in cost, in certain aspects. And I'm speaking more as an engineer than as a diplomat. What we need to see is Democratization of applications, Democratization of access, and also model development at scale: not the scale where OpenAI or Anthropic is going, but the middle layer, which is more relevant to the sustainable development goals: agriculture, health, environment, education. We've seen this with digital Public infrastructure.

So when we switch from the connectivity Paradigm, the supply side Paradigm, to having a demand side push in terms of uptake of digital products and services, the market grows exponentially. I think this is what we need to shape through Governance, and we need to work actively for it, with the private sector and the public sector collaborating together.

I got to go on a trip around the world last year, everywhere except Antarctica. I was struck by how much people are excited about these tools and are using them all around the world for many, many things related to the sustainable development goals. It's early, but two years ago almost no one cared about AI; that was before ChatGPT. And I think a lot of the fastest adopters are not in America, but are all around the world doing really remarkable things for their communities.

And the adoption and integration of these tools is happening around the world at speed. I totally agree that we have a long way to go and that we need to figure out how to build this middle layer out. But I think what's happening is still pretty extraordinary. Yes, and we also need to focus on the other AI.

So this is not just large language models, but narrow AI, for example in the healthcare area. I worked for three years in digital health and AI for health before coming to the UN. It's very hard when you want to get into the flow, the diagnostics flow, and deal with some of the trust issues; there you need narrow AI. I think the large language models, companies like OpenAI and Anthropic, will have a huge role to play.

They'll become, in a way, the foundations of some of the ways in which we engage with humans, with customers, with citizens as governments, or with patients when they're walking into a hospital. But then you need other kinds of AI to solve some of the specific problems. Although my sense is, and I've heard you say this, that if you take something like health: the US model of doctor and patient, and we have our criticisms of the health system, really doesn't scale to the developing world.

I mean, my sense is that the ability to just have that type of diagnostic information broadly accessible, in people's own languages, will make a huge difference. Is that your sense on global health? Look, obviously I have a bias here in favor of large language models being awesome for a lot of things. That's the only softball you're getting.

So you may as well take it. But I think what's special about AI is the generalization that leads to the reasoning ability. So yes, these large models can memorize lots of facts and they can cover all these domains, but they're the world's worst, most expensive, slowest, least accurate databases. What they can do that's quite remarkable is process information in a useful way. And it's always tempting to Anthropomorphize, but it has something like reasoning ability to do this. When we hear stories of doctors using ChatGPT to help them come up with a difficult differential diagnosis, and then, as the human doctor, figuring out what the patient actually needs, we're very proud of that.

And the fact that it can do it now so well in so many languages, we're very proud of that, too. So narrow AI, I think, is important for lots of reasons. But the reasoning ability that comes from these general purpose models and the incredible ability of people to use this tool for very diverse use cases, all with the same underlying engine: we're certainly excited about that.

Since you went there, I kind of want to go there now; I was going to bring this up later. It strikes me that we got this new model a couple weeks ago that you released, Strawberry, or o1, as it's now called, and it adds this additional reasoning capability. My sense is it's hard for the outside world, for the rest of us, to appreciate what that means. Even as you presented it, it was like, oh, it's better at math and coding and science. My sense is it opens up a lot more doors. Could you talk for a minute about what is happening in o1, and what does it mean for it to be reasoning?

I know it's tempting to Anthropomorphize, and I'm not saying to do that, but without anthropomorphizing too much, what does it mean? Because, and I'll give you one more thing, I struggle with this, covering this every day. One moment I'm really excited about LLMs, and then I'll get to this point where I'm like, are they really going to get there? They're just kind of repeating back what they've been trained on. Are they really going to do what you talk about, developing new knowledge? Where does something like Strawberry get us in terms of that longer term goal?

So part of my subjective experience of the new model was that it was the first time I had a hard time coming up with things to test, because it was so good that I was asking: what can I do that I can quickly evaluate, that will tell me it can or can't do this? You can find a lot of toy examples, but in terms of the useful day-to-day things that you can test really quickly, it was surprisingly good, and this was a very early version; the next version is really, really good. And that was just sort of a strange experience to go through.

And then all of a sudden it was commonplace; I was used to it. I was like, yeah, why isn't it even better? But what I think this enables in the short term is agents. Well, the obvious thing that people use it for in the very short term is programming: it's so good at helping people with programming that there's a huge amount of economic value in that, and it unlocks people to build all sorts of amazing things on top of it.

In the slightly longer term, a year or two years, whatever, I think we will finally be able to see some really great agent-like experiences. Still early, still limited. But once you have a model that can reason, I think you can have a model that can go do longer horizon, more complex tasks for you and that's going to be tremendously useful to people. And we're hearing a lot about agents in the last week or two.

I know at Dreamforce, Salesforce's event, we heard a ton about agents. Are these the kinds of agents we're talking about? I sort of think of those as mini agents; the type of stuff you're thinking about isn't here today. Where are we with agents? Definitely not here today.

Let's say the kind of thing where you could give a senior colleague a task that would take them a day to do: they would have to ask you for some clarifications along the way, use multiple different tools, interact repeatedly with an environment, and accomplish some goal.
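
For readers who want a picture of what such an agent loop might look like, here is a hypothetical sketch in Python; every name in it (Action, plan_step, run_agent) is an illustrative stand-in, not OpenAI's or anyone's real API, and the planner is stubbed out where a real system would call a model:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                       # "tool", "ask_user", or "done"
    name: str = ""                  # which tool to call
    args: dict = field(default_factory=dict)
    question: str = ""              # clarification to ask the user
    result: str = ""                # final answer when kind == "done"

def plan_step(context: list) -> Action:
    # Stand-in for the model choosing its next move from the context.
    # A real agent would call an LLM here; this stub finishes immediately.
    return Action(kind="done", result=f"(stub) would work on: {context[0]}")

def run_agent(goal: str, tools: dict, max_steps: int = 50) -> str:
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = plan_step(context)
        if action.kind == "ask_user":       # clarify, like a colleague would
            context.append(input(action.question))
        elif action.kind == "tool":         # use one of several tools
            context.append(str(tools[action.name](**action.args)))
        elif action.kind == "done":         # goal accomplished
            return action.result
    raise TimeoutError("goal not reached within the step budget")

print(run_agent("draft a one-page market summary", tools={}))
```

The loop mirrors the description above: clarifying questions, repeated tool use against an environment, and a longer horizon bounded only by a step budget.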

You sounded a note of skepticism around the power of LLMs. Is what Sam's talking about going to eliminate narrow AI, or do you think otherwise? I think Sam is right that we are getting close to reasoning. But what kind of reasoning? As humans we do Inductive and deductive reasoning; we go back and forth. And the agents that are out there, and I saw some in San Francisco, the capital of AI, as the mayor called it, are getting very impressive. I think that already means a lot for the next twelve to 18 months in terms of how we communicate things like government services to people in their own language.

I mean, I'm talking about the public side of the application. There is cool stuff in the marketplace, and very cool stuff in terms of human-machine teaming: handing tasks over to humans when your internal rule sets don't allow you to go into pricing or return policies. But I think getting to true human reasoning will still take some time. At the moment, if the agent is accessing a pricing policy, it's just going to look at whether this thing is being returned within six days.

It's going to string together different pieces in a very Inductive fashion and come to something. But to get to true human-like capabilities, and things are moving very fast, I don't want to predict anything beyond six to twelve months. We need much more research, and we need to understand much more about our own capabilities, because we are benchmarking against a very basic level of human capability, the kind we use just sitting here, and we've not got there yet.

And sometimes we think with our whole body. Sam is a meditator, you know; creative things come out in the space between two thoughts. There are many, many things we don't know about human capabilities. We're latching on to one aspect of our intelligence and trying to approximate it. We are getting very good at that, but pretty soon we might discover that other things can be happening.

Sam, you talk about what you're most excited about being when AI systems are helping and doing scientific discovery on their own. It is a struggle for me to understand how we get from a system that maybe understands every word that's ever been written on the Internet to one that suddenly discovers something new, versus finding a pattern in existing data. Not on their own; as a tool for people. What I care about is that the scientific discovery happens.

I am a huge believer that the most important way that the world gets better over time is we push the scientific Frontier forward. We understand more about the world, we can cure diseases, we can solve climate change, we can do the long list of other things that we want to do. If we can make people more efficient, if we can help them find better insights, that is equally good to me, maybe better, than if the AI would just go off and do it on its own.

And so I think the bar of "we need the AI to autonomously go do this" is way too high. The whole history of human progress is that we make better tools and then we do better things with them. I'm totally fine with that being how it works; enthusiastic, even. Now, with o1, I think you can see the glimmers of how this is starting to work.

The thing that Terence Tao said that stuck with me is that before o1, AI was like a very incompetent grad student, and now it's like a mediocre grad student that you could give tasks to, and soon you can see it being a useful research partner. And if Terence Tao says that, I'm inclined to believe it. So I kind of think that's the path forward.

I think people get very hung up on the fact that, yeah, it's just being trained to predict the next token. The figure of merit to me, the thing that we're optimizing for, is: do the people who are advancing our collective knowledge as a society find the tool useful or not?

So, I mean, it sounds like what you're saying, and I don't disagree: yeah, maybe that's all it does, predict the next token, but that task is so valuable, and people find so much value in it, that it doesn't matter.

Once it can start to prove unproven mathematical theorems, do we really still want to debate, ah, but it's just predicting the next token? It's contributing to knowledge. So do you ever have the doubts that I do? Because it seems like you're always confident.

It's always like, it's going to do these things. I vacillate between getting really excited and having doubts, because I see a lot of not so exciting uses, and I see people running into stumbling blocks too. Are those stumbling blocks just a function of where we are? It will continue to have weird limitations; there will be things that we expect our AI to do that it really struggles with, and there will be some not so awesome use cases.

But the thing that, and I think "bothers" is too strong a word, well, it's also too weak a word in some other sense, the thing that I find strange is this: if you could time warp any of you in the room back a few years, and an oracle told you that there was going to be this model called o1, available to anybody on the Internet, and it could do all these things, and it was better than all but the top few hundred math students in the United States.

And it was in the very top echelon of all programmers, and it had the ability to do better than human PhDs on tests of expertise, and all these other things. And that was going to come in just a couple of years, or three years. Most of you would have said: definitely not.

And, you know, fine. I actually think that's great. I think it is wonderful that we go so quickly from saying this is impossible, to okay, but it's hung up in these ways, to okay, I said it was impossible, now I say it's too dangerous, to okay, I said it was impossible, now this other thing is impossible, to, finally, why is the thing not faster? I'm tired of waiting.

I think that says something great about what propels us forward, the human discontent that drives progress.

However, I think we do need to contend with very powerful systems coming soon. And that same reflex of, well, fine, it did a little better than I thought it was going to, but this is the ceiling, it's not going to get any smarter: even people who very confidently said three weeks ago, we're not going to have models that can do reasoning, now say, yeah, but we won't really have superintelligence.

I worry that that psychological urge is so tempting that we don't really contend with what's coming at us.

I want to get back to that, but at the same time this is happening, the demand for power and Compute is so large. Amandeep, I kind of want to get your thoughts. Are you at all worried, both of you, about the environmental impact? We hear a lot about how these systems are going to help us fight climate change, etcetera.

In the short term, they're driving a lot of demand for computing; we're restarting Three Mile Island. That's great, let's do that. Well, so I'm in favor of nuclear energy, but something happened, you know: nuclear weapons also came along the way, and getting to electricity too cheap to meter became very difficult.

So I know a little bit about the nuclear domain. I think we need to think about what we are doing with AI, at what price, and at what cost, and those are just three considerations; there may be more. The costs are not only energy, but also water and materials, copper in particular.

And then there is the talent that's going into this industry. If you talk to the CEOs of software companies who are not doing AI, or to other companies or academia, they can't pay those salaries. So there is an opportunity cost to all this. And running queries like "where is the nearest Starbucks?" through an LLM, I don't know if that's a good use case.

I mean, we have to think about at what cost we are doing it. And then, when it comes to the opportunities, we have to think about the enablers: what do you need to realize those opportunities? Five years back, my team and I said, you know, we will focus on AMR; antimicrobial resistance kills millions of people.

We would use AI to build the best decision support systems. It's like saying, let's solve for cancer, let's solve for climate. But excuse my French, it's bloody hard. Even across two hospitals in two countries, getting them to agree on the same data model, starting to put together the data set, then training your algorithm, then cross-training it on another data set; and you can multiply this.

I don't think we can get a shortcut to all of this through large language models. Large language models will play a very important role in training farmers in their own language, in triaging patients as they come in. But this will not be a shortcut. Getting to some kind of superintelligence will not be a shortcut around how we put together data sets, how we get nurses and doctors and patients to work together, the cultural dynamics of how data moves between nurses and doctors in an operation. It's amazing, you know. So it has to work in that context.

A model we had for Covid screening worked beautifully in silico; you took it to a hospital, with a very noisy signal, and it just broke down completely. So we are just at the beginning of the journey, and we need more humility as we go down this path using a very powerful technology.

History is littered with examples of the way we've assumed a technology's implications will only go in one direction when they go off in others, from the crossbow down to nuclear weapons. There are second order and third order implications that we don't understand, societal implications. That's why we need reflection on a continuous basis about what it means and what we need to do about it.

So I asked an environmental question, but Amandeep, I think, took it to a different place, and maybe we can go there and then come back to the environmental question. It seems like a lot of the challenge in absorbing the technology, for companies that are trying to use generative AI at work, isn't the limitations of the technology, though those are there oftentimes.

It's the processes and the human ability to integrate it, and so forth. Is that your sense from working with businesses and organizations too: that a lot of times it's not that the technology isn't ready for them, but that they would have to redefine the way they work? Or is that not what you're seeing?

You know, definitely somewhat. But again, we're 22 months into this whole era; there was no ChatGPT 22 months ago, and hundreds of millions of people use it. Counting the people that use it through the app or the API, I don't know, but I bet it's more than a billion. That's really quite rapid for 22 months.

And so I think people are adopting it and they're doing it because, like, they get value out of it. It's not perfect, and I disagree with a lot of that, but I do very much agree that it's not all going to be good. And we do have a lot of things in front of us to figure out, but people are finding places where it is good and is useful, and they're building on top of it from there.

What about on the environmental piece? I mean, is it clear to you that, you know, yes, we have all these demands upfront for increased Compute, and those are going to create power needs, but it's definitely going to pay off, you know, environmentally. How do you think about that? Having a little bit more purview into both what's needed and some of the benefits?

Well, I wish that every time people flew to a conference to complain about energy usage, they had to, like one time out of ten, skip it and put all of the money that would have gone to burning fuel into fusion research. Because I believe that the best thing we can do, you know. You're speaking during Climate Week at UNGA, where everyone has flown in from around the world to talk about the climate.

I do know that. But I think the best thing we can do on the energy front is figure out how to create abundant, safe, clean energy. And I think we have a few options there; the approach I believe in most is nuclear, particularly fusion at massive scale.

But I want everybody in the world to have a great life. And I think that if you study history, the availability and cost of energy has maybe been the single biggest determinant of that. And so I'd love a lot more of it, and I think we should fund it much more aggressively.

Now, I'm also hugely in favor of doing the other things, all the things we can do to reduce energy usage, and I think the world has made good strides there. I think a lot of people are out of touch between the things they say and the way they act. But that aside: let's make a lot of clean, safe, cheap energy.

Let's reduce usage where we can. On the AI front, in particular, we talked about this incredible decrease in cost that we've gone through. That's a direct reduction in energy usage too. The energy use of AI, I think, relative to the value it's creating is quite tiny today.

I don't want to minimize it too much because it's going to go up. We will use gigawatts over time out of terawatts on Earth, but I think it will more than pay off, not only in terms of benefits towards helping us do a better job with carbon capture and figure out, you know, better solutions to the generation side, but also in terms of what it replaces.

I agree it is a little bit silly that people are putting "where is the nearest Starbucks?" through ChatGPT, but it's using a really trivial amount of energy, a truly trivial amount. And we can push so much further to make it even more trivial. And given the things that it replaces, many of the other uses can, I think, be a net saving of energy. So, yeah, let's get the generation side fixed.
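
As a rough sense of scale for the "trivial amount of energy" claim, here is a hedged back-of-envelope sketch; the per-query figure is an outside estimate that circulates publicly, not a number from this conversation, and the comparison values are round approximations:

```python
# All three constants are assumptions for illustration only.
WH_PER_QUERY = 0.3    # often-cited rough estimate for one chatbot query
KETTLE_WH = 100.0     # boiling ~1 litre of water (~0.1 kWh)
EV_WH_PER_KM = 150.0  # efficient electric car (~0.15 kWh per km)

print(f"Queries per kettle boil:      {KETTLE_WH / WH_PER_QUERY:.0f}")
print(f"Queries per km of EV driving: {EV_WH_PER_KM / WH_PER_QUERY:.0f}")
```

On these assumed figures, one boiled kettle equals a few hundred queries, which is the spirit of Altman's point; aggregate usage, training, water, and materials are separate questions, as Gill notes.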

Let's get to a future with really abundant energy that we're all happy about. But I think there are so many reasonable things to beat on AI about, so many real problems with AI. And Amandeep, I appreciate you talking about things in addition to just the energy constraint, like water and some materials, because I think those are more real.

But on the energy side, I think that one has gotten blown quite out of proportion. On the energy side, and don't get me wrong, because I'm very excited about AI, I love AI and have been for a while, but just shifting to cleaner sources of energy: Lord knows we need lots of them.

600 million people are without access to reliable energy today. And we have the Jevons paradox: as coal use in the 19th century got more efficient, coal consumption rose. So we are going to have more use of energy, and therefore we also need to think about what we are turning that energy into.

So what is the solution that we're looking at? We also need more collaborative development of AI models. If everyone is training the same model a million times over, all around the world, that's a waste of resources. So I think the world needs to come together to collaborate on solving some of these large challenges.

And I agree with Sam: you know, cancer, climate change, plastics in the ocean. But I don't know if we should do it by concentrating it in one place, or whether it should be done through a more collaborative effort, because we need a massive effort on datasets also. I mean, we can scrape stuff off the Internet.

And we can use artificial data, synthetic data, but I think the really interesting work is still to be done in many of these areas. So: collaborative data commons, collaborative AI commons, and sharing Compute. This is one of the big outcomes from yesterday, a capacity development network. You know, even in countries in Europe like Finland and Liechtenstein, outside peak use, sometimes 40% of the HPC capacity is lying idle. Is there a way we can come together, link this up, and be more energy efficient?

Well, obviously there's a ton to talk about, and if this were a seminar on climate, I would keep going. But there are a few different things I want to hit, including some lightning round questions for Sam. But Amandeep, you have a chance first.

What do you want most from these companies? We just saw at the State Department event, and it was not just Sam, it was Google, Meta, Amazon, the leaders of all these companies, saying: hey, we want to help this be more equitable. We're putting money into training it in new languages. We're putting money into training people. You launched the OpenAI Academy today.

What are your other asks? What isn't happening that needs to happen? I think what was announced today is very welcome, and Sundar came over to the action day two days ago and announced a $120 million contribution to talent development.

What we fear at the UN is fragmentation of the capacity building effort, with different people pursuing their own initiatives. And in the Global South we hear of capacity building being parachuted in without any regard for what is actually needed. So we need to have a collaborative capacity development effort around Compute, around talent development, around curation of data sets.

We also need to shape the marketplaces for data in the future so that SMEs and startups can access data to train their own models. Getting to the right kind of data is becoming very difficult for companies in the Global South because the costs are rising, and so on. So what we need from the private sector is help in expanding the opportunity.

It's good in the long run for the leading AI companies and AI labs as well. The second thing we need is the help of these companies in maintaining an up to date scientific understanding of the capabilities. I'm not talking about safety, because capabilities imply both risks and opportunities. So join us in having a science based, evidence based assessment of opportunities and risks on a regular basis, so that policymaking doesn't happen in the dark.

People shouldn't swing from one wild extreme of regulation to another; they should do sensible things. Sam, I'm getting the warning, so I have to get in my lightning round questions, the ones that I know you're dying to answer. So, you've talked a lot about how what OpenAI is doing is going to be an expensive thing.

You've been raising some money. Are you all set with the next funding round? I don't think I can comment on that one. I would love to, but I think that's one of the big no-nos. All right, then we'll go on to another topic we can talk about. I'm curious.

You know, it's been out there for a while that you were working with Jony Ive's company to design some hardware. I think he finally told the New York Times. Yeah, it's true. I assume you're not designing a phone, but you have talked about imagining that for each new generation of technology there tends to be a device. There can be; not a phone, yeah.

I mean, phones are incredible. A phone is, I think, one of the triumphs of human technology; for all of its flaws, it's just an amazing thing and really great in its generality. I don't think you should try to do a better phone. I hope that there is something great and new to do, and I think the affordances of AI maybe get us there. But it's a long way away, and you've got to go through a process to see if something makes sense.

So not this year, maybe next year? Long, long. I mean, hardware is hard; I think it took OpenAI four and a half years to ship a product, and I thought that was fast.

All right, and then the last topic that I know you're dying to talk about: the next model. I mean, you guys have talked about that. As great as o1 and Strawberry are, in this direction, you're going to keep building a bigger and bigger model. We didn't say bigger, we said better.

Okay, better; maybe smaller. That'd be great. Is GPT-5 going to be smaller than GPT-4? I don't know how we're going to do naming from here on, genuinely. We're in this awkward tweener phase of naming, right? Because we're between paradigms, and we're discovering the science so fast that I feel pleased if I can predict our roadmap a few quarters out.

A few years out is hard right now. But what I can say with certainty is: don't worry about model size; model capability will go up. Oh, let me ask this a different way. You said you're going to keep developing the GPT series of models.

My understanding is you're training now, but it's probably not going to happen this year. Is that fair? Obviously, I'm not going to say, but I expect some more cool things from us this year. Let's put it that way. You know, here's one thing I will say: we don't always update the names, but GPT-4o, the model that was launched whenever that was, a while ago, compared to the current model available in the API, is unbelievably better.

And I think people should pay more attention to that and be like, hmm, I wonder what they're doing. Because that model, although still called GPT-4o, performs nothing like the thing that was launched that's called GPT-4o. We just keep making it better and better and better.

So, something cool this year; many people don't know what you're going to see. And you're glad that you closed that last funding round, right? Well, as much as I would love to keep going here.

Technology, Innovation, Global, Artificial Intelligence, Inclusion, AI Governance