ENSPIRING.ai: Inside OpenAI: Unveiling the AI Revolution
The video provides insight into OpenAI, a leading startup that developed groundbreaking technologies like ChatGPT and DALL-E, sparking an AI race among tech giants. It explores how OpenAI, initially a quieter player in the tech landscape, has come to revolutionize AI-powered technologies, pursuing a strategy of innovation coupled with safety and reliability through continuous feedback and system refinement.
The discussion features key personnel from OpenAI, including CEO Sam Altman and CTO Mira Murati, who elaborate on the challenges and strategies involved in AI development. They discuss the role of AI, its societal impact, Misinformation challenges, and new concepts such as prompt engineering. The video also highlights the changing dynamics of job markets influenced by AI and how OpenAI maintains ethical considerations in its operations.
Please remember to turn on the CC button to view the subtitles.
Key Vocabularies and Common Phrases:
1. Nondescript [ˌnɒndɪˈskrɪpt] - (adj.) lacking distinctive or interesting features or characteristics.
Inside a Nondescript building in the heart of San Francisco, one of the world's buzziest startups is making our AI-powered future feel more real than ever before.
2. Introspective [ˌɪntrəˈspɛktɪv] - (adj.) characterized by introspection, contemplating one's own thoughts, feelings, and sensations.
This is one of the most Introspective minds at OpenAI.
3. Hallucination [ˌhæl(j)uˌsɪˈneɪʃən] - (n.) perception of objects with no reality, or of events that do not occur.
One of the things that I'm most worried about is the ability of models like GPT-4 to make up things. We refer to this as hallucinations.
4. Turbocharged [ˈtɜːbəʊˌtʃɑːrdʒd] - (adj.) enhanced or accelerated in a very fast manner.
OpenAI has kind of Turbocharged this competitive frenzy.
5. Intuition [ˌɪntuˈɪʃən] - (n.) the ability to understand something instinctively, without the need for conscious reasoning.
It's this ability to really develop an Intuition for how to get the most out of the model.
6. Ventures [ˈvɛntʃərz] - (n.) a business enterprise or speculation in which something is risked in the hope of profit.
Venture capitalists are pouring money into anything AI, into startups hoping to find the next big thing.
7. Disruption [dɪsˈrʌpʃ(ə)n] - (n.) disturbance or problems that interrupt an event, activity, or process.
What it certainly does is it creates a wave of Disruption.
8. Profession [prəˈfɛʃən] - (n.) a paid occupation, especially one that involves prolonged training and a formal qualification.
There's going to be a copilot for every Profession.
9. Misinformation [ˌmɪsˌɪnfəˈmeɪʃən] - (n.) false or inaccurate information, especially that which is deliberately intended to deceive.
Isn't this going to accelerate the Misinformation problem?
10. Collaborate [kəˈlæbəˌreɪt] - (v.) work jointly on an activity or project.
But we are moving towards a world where we are collaborating with these machines more and more.
Inside OpenAI: Unveiling the AI Revolution
Inside a Nondescript building in the heart of San Francisco, one of the world's buzziest startups is making our AI-powered future feel more real than ever before. They're behind two monster hits, ChatGPT and DALL-E, and somehow beat the biggest tech giants to market, kicking off a competitive race that's forced them all to show us what they've got. But how did this under-the-radar startup pull it off? We're inside OpenAI and we're gonna get some answers. Is it magic? Is it just algorithms? Is it gonna save us or destroy us? Let's go find out.
I love the plants. It feels so alive, so amazing. I love it here. It's giving me very Westworld spa vibes. It's almost like suspended in space and time. A little bit. Yeah. It has a little bit of a futuristic feel. This is one of the most Introspective minds at OpenAI. We all know Sam Altman, the CEO, but Mira Murati is the chief architect behind OpenAI's strategy. This looks like the OpenAI logo. It is. Ilya actually painted this. Ilya, the chief scientist? Yes. What is the flower meant to symbolize? My guess is that it's AI that loves humanity. We're very focused on dealing with the challenges of Hallucination, truthfulness, reliability, alignment of these models.
Has anyone left? Because they're like, you know what? I disagree. There have been, over time, people that left to start other organizations because of disagreements on the strategy around deployment. And how do you find common ground when disagreements do arise? You know, you want to be able to have this constant dialogue and figure out how to systematize these concerns. What is the job of a CTO? It's a combination of guiding the teams on the ground, thinking about longer term strategy, figuring out our gaps, and making sure that the teams are well supported to succeed. Yeah, sounds like a big job. Solving impossible problems. Solving impossible problems, yeah.
When you were making the decision about releasing ChatGPT into the wild, I'm sure there was, like, a go or no-go moment. Take me back to that day. We had ChatGPT for a while, and we sort of hit a point where we could really benefit from having more feedback from how people are using it. What are the risks, what are the limitations? And learn more about this technology that we have created and start bringing it into the public consciousness. It became the fastest-growing tech product in history. It did. Did that surprise you? I mean, what was your reaction to the world's reaction? We were surprised by how much it captured the imaginations of the general public and how much people just loved spending time talking to this AI system and interacting with it.
ChatGPT can now mimic a human. It can write, it can code at the most basic level. How does this all happen? So ChatGPT is a neural network that has been trained on a huge amount of data, on a massive supercomputer. And the goal during this training process was to predict the next word in a sentence. And it turns out that as you train larger and larger models and add more and more data, the capabilities of these models also increase. They become more powerful, more helpful, and as you invest more in alignment and safety, they become more reliable and safe over time.
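To make the training objective Murati describes a little more concrete, here is a minimal, hypothetical Python sketch of next-word prediction: a toy bigram model that counts which word follows which in a tiny corpus and predicts the most frequent successor. Real models like GPT-4 learn far richer statistics with neural networks trained on vast datasets; the corpus and function names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count how often each word follows each
# other word, then predict the most frequent successor. A stand-in
# for the "predict the next word" objective, not how GPT models are
# actually implemented.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', seen twice after 'the'
```

Scaling that same idea up, with a neural network in place of a count table and a huge slice of the Internet in place of one sentence, is roughly where the emergent capabilities Murati mentions come from.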
OpenAI has kind of Turbocharged this competitive frenzy. Do you think you can beat Google at its own game? Do you think you can take significant market share in search? We didn't set out to dominate search. What ChatGPT offers is a different way to understand information, and you could be, you know, searching, but you're searching in a much more intuitive way versus keyword-based. I think the whole world is sort of now moving in this direction. The air of confidence, obviously, that ChatGPT sometimes delivers an answer with, why not just sometimes say, I don't know? The goal during training is to predict the next word, not to be reliable or safe.
When you have such general capabilities, it's very difficult to handle some of the limitations, such as what is correct. Some of these texts and some of the data is biased, some of it may be incorrect. Isn't this going to accelerate the Misinformation problem? I mean, we haven't been able to crack it on social media for, like, a couple of decades. Misinformation is a really complex, hard problem right now. One of the things that I'm most worried about is the ability of models like GPT-4 to make up things. We refer to this as hallucinations, so they will convincingly make up things, and it requires, you know, being aware and just really knowing that you cannot fully, blindly rely on what the technology is providing as an output.
I want to talk about this term Hallucination, because it's a very human term. Why use such a human term for basically an AI that's just making mistakes? A lot of these general capabilities are actually quite human like. Sometimes when we don't know the answer to something, we will just make up an answer. We will rarely say, I don't know. And so there is a lot of human Hallucination in a conversation, and sometimes we don't do it on purpose. Should we be worried about AI, though? That feels more and more human like. Should AI have to identify itself as artificial when it's interacting with us?
I think it's a different kind of intelligence. It is important to distinguish output that's been provided by a machine versus another human. But we are moving towards a world where we are collaborating with these machines more and more. And so the output will be a hybridization. All of the data that you're training this AI on, it's coming from writers, it's coming from artists. How do you think about giving value back to those people when these are also people who are worried about their jobs going away? I don't know exactly how it would work in practice that you can sort of account for information created by everyone on the Internet.
I think there are definitely going to be jobs that will be lost and jobs that will be changed as AI continues to advance and integrate into the workforce. Prompt engineering is a job today. That's not something that we could have predicted. Think of prompt engineers like AI whisperers. They're highly skilled at selecting the right words to coax AI tools into generating the most accurate and illuminating responses. It's a new job born from AI that's fetching hundreds of thousands of dollars a year. What are some tips to being an ace prompt engineer? You know, it's this ability to really develop an Intuition for how to get the most out of the model, how to prompt it in the right ways, give it enough context for what you're looking for.
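As a rough illustration of that "give it enough context" advice, here is a small, hypothetical Python sketch. The `ask_model` helper is an invented placeholder, not a real OpenAI API, and both prompts are made-up examples of the kind of context a prompt engineer adds.

```python
# Hypothetical stand-in for a chat-model API call; invented for this
# example, not part of any real SDK.
def ask_model(prompt: str) -> str:
    return f"[model response to: {prompt!r}]"

# A vague prompt leaves the model guessing at audience, length, and format.
vague_prompt = "Tell me about electric cars."

# An engineered prompt supplies role, audience, format, and constraints,
# the kind of context Murati says you develop an intuition for.
engineered_prompt = (
    "You are an automotive journalist writing for first-time buyers. "
    "In about 150 words, explain the main trade-offs of owning an "
    "electric car, covering cost, charging, and range. "
    "If you are unsure of a fact, say so rather than guessing."
)

print(ask_model(vague_prompt))
print(ask_model(engineered_prompt))
```

The last constraint in the engineered prompt also nudges the model away from the confident guessing discussed in the hallucination passages above.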
One of the things that we talked about earlier was hallucinations and these large language models not having the ability to always be highly accurate. So I'm asking the model with a browsing plugin to fact check this information, and it's now browsing the web. So there's this report that these workers in Kenya were getting paid $2 an hour to do the work on the back end to make answers less toxic. And my understanding is this work can be difficult, right, because you're reading texts that might be disturbing and trying to clean them up. So we need to use contractors sometimes to scale. We chose that particular contractor because of their known safety standards, and since then, we've stopped working with them. But as you said, this is difficult work, and we recognize that. And we have mental health standards and wellness standards that we share with contractors.
I think a lot about my kids and them having relationships with AI someday. How do you think about what the limits should be and what the possibilities should be when you're thinking about a child? I think we should be very careful in general with putting very powerful systems in front of more vulnerable populations. There are certainly checks and balances in place because it's still early and we still don't understand all the ways in which this could affect people. There's all this talk about, you know, relationships and AI. Like, could you see yourself developing a relationship with an AI? I'd say yes. As a reliable tool that enhances my life, makes my life better.
As we ponder the existential idea that we might all have relationships with AI someday, there's an AI gold rush happening in Silicon Valley. Venture capitalists are pouring money into anything AI, into startups hoping to find the next big thing. Reid Hoffman, the co-founder of LinkedIn and an early investor in Facebook, knows a thing or two about striking gold. He was an early OpenAI backer and is, in a way, trying to take society's hand and guide us all through the age of AI.
I mean, gosh, twelve years we've been talking. Maybe longer. That's awesome. A long time. Yes. You have been on the ground floor of some of the biggest tech platform shifts in history, the beginnings of the Internet, mobile. Do you think AI is going to be even bigger? I think so. It builds on the Internet, mobile, cloud, data. All of these things come together to make AI work. And so that causes it to be the crescendo, the addition to all of this.
I mean, one of the problems with the current discourse is that it's too much fear-based versus hope-based. Imagine a tutor on every smartphone for every child in the world. That's possible. That's line of sight from what we see with current AI models today. You coined this term blitzscaling. Blitzscaling, in its precise definition, is prioritizing speed over efficiency in an environment of uncertainty. How do you go as fast as possible in order to be the first to scale?
Does AI blitzscale? Well, it certainly seems like it today, doesn't it? And I think the speed at which we will integrate it into our lives will be faster than we integrated the iPhone into our lives. There's going to be a copilot for every Profession. And if you think about that, that's huge. And not just professional activities. 'Cause it's gonna write my kids' papers, right? My kids' high school papers. Yes. Although the hope is that in the interaction with it, they'll learn to create much more interesting papers.
You and Elon Musk go way back. He co-founded OpenAI with Sam Altman, the CEO of OpenAI. You and I have talked a lot over the years about how you have been sort of this node in the PayPal mafia. You can talk to everyone, and maybe you disagree, but you are all still friends. What did Elon say that got you interested so early? Part of the reason I got back into AI, and was part of sitting around the table and the crafting of OpenAI, was that Elon came to me and said, look, this AI thing is coming.
Once I started digging into it, I realized this pattern: that we're going to see the next generation of amazing capabilities coming from these computational devices. And then one of the things I had been arguing with Elon at the time about was that Elon was constantly using the word robocalypse. You know, we as human beings tend to be more easily and quickly motivated by fear than by hope. So you're using the term robocalypse, and everyone imagines the Terminator and all the rest. It sounds pretty scary. It sounds very scary. Robocalypse doesn't sound like something we want. Yeah, stop saying that. 'Cause actually, in fact, the chance that I could see anything like a robocalypse happening is so de minimis relative to everything else.
So you did come together on OpenAI. How did that happen? I think it started with Elon and Sam having a bunch of conversations. And then since I know both of them quite well, I got called in and I was like, look, I think this could really make sense. Something should be the counterweight to all of the natural work that's going to happen within commercial realms. How do we make sure that one company doesn't dominate the industry, but the tools are provided across the industry, so innovation can benefit from startups and all the rest? I was like, great, and let's do this thing.
I did ask ChatGPT what questions I should ask you. I thought its questions were pretty boring. Yes, your answers were pretty boring, too. So we're not getting replaced anytime soon. But clearly this has really struck a nerve. There are people out there who are gonna fall for it. Yes. Shouldn't we be worried about that? Okay, so everyone's encountered a crazy person who's drunk off their ass at a cocktail party who says really odd things, or at least every adult has. And, you know, the world didn't end, right? We do have to pay attention to areas that are harmful.
Like, for example, someone's depressed, they're thinking about self-harm. You want all channels by which they could get into self-harm to be limited. That isn't just chatbots. That could be communities of human beings, that could be search engines. You have to pay attention to all the dimensions of it. How are we overestimating AI? It still doesn't really do something that I would say is original, to an expert. So, for example, one of the questions I asked was, how would Reid Hoffman make money by investing in artificial intelligence? And the answer it gave me was a very smart, very well-written answer that would have been written by a professor at a business school who didn't understand venture capital, right?
So it seems smart: it would study large markets, would realize what products would be substituted in the large markets, would find teams to go do that, and invest in them. Very credible and completely wrong. The newest edge of the information is still beyond these systems. Billions of dollars are going into AI. My inbox is filled with AI pitches. Last year it was crypto and web3. How do we know this isn't just the next bubble? I do think that generative AI is the thing that has the broadest touch on everything now. Which places are the right places to invest? I think those are still things we're working out now.
Obviously, as venture capitalists, part of what we do is we try to figure that out in advance, years before other people see it coming. But I think that there will be massive new companies built. It does seem in some ways like a lot of AI is being developed by an elite group of companies and people. Is that something that you see happening? In some ideal universe, you'd say, for a technology that would impact billions of people, somehow billions of people should directly be involved in creating it.
But that's not how any technology anywhere in history gets built, and there's reasons you have to build it at speed. But the question is, how do you get the right conversations and the right issues on the table? So do you see an AI mafia? For me, I definitely think that there is. 'Cause you're referring to the PayPal mafia, of course. I definitely think that there's a network of folks who have been deeply involved over the last few years and will have a lot of influence on how the technology happens.
Do you think AI will shake up the big tech hierarchy significantly? What it certainly does is it creates a wave of Disruption. For example, with these large language models in search: what do you want? Do you want ten blue links, or do you want an answer? In a lot of search cases, you want an answer, and a generated answer that's like a mini Wikipedia page is awesome. That's a shift. So I think we'll see a profusion of startups doing interesting things in this. But can the next Google or Facebook really emerge, if Google and Facebook, or Meta, and Apple and Amazon are running the playbook, and Microsoft?
Do I think there will be another one to three companies that will be the size of the five big tech giants emerging, possibly from AI? Absolutely, yes. Now, does that mean that one of them is going to collapse? No, not necessarily, and it doesn't need to. The more that we have, the better. So what are the next big five? Well, that's what we're trying to invest in. You're on the board of Microsoft. Obviously Microsoft is making a big AI push. Did you bring Satya and Sam or have any role in bringing Satya and Sam closer together? Because Microsoft obviously has $10 billion now in OpenAI.
Well, I think I probably have. You know, both of them are close to me and know me and trust me well. So I think I have helped facilitate understanding and communications. Elon left OpenAI years ago and pointed out that it's not as open as it used to be. He said he wanted it to be a nonprofit counterweight to Google. Now it's a closed-source, maximum-profit company effectively controlled by Microsoft. Does he have a point? Well, he's wrong on a number of levels there. So one is, it's run by a 501(c)(3). It is a nonprofit, but it does have a for-profit part.
The commercial system, which is all carefully done, is there to bring in capital to support the nonprofit mission. Now, to get to the question of, for example, "open": DALL-E was ready for four months before it was released. Why was it delayed for four months? Because it was doing safety training. We said, well, we don't want to have this being used to create child sexual material. We don't want to have this being used for assaulting individuals or doing deepfakes. So we're not going to open-source it. We're going to release it through an API so we can see what the results are and make sure it doesn't do any of these harms.
So it's open because it has open access to APIs, but it's not open in the sense that it's open source. There are folks out there who are angry, actually, about OpenAI's branching out from nonprofit to for-profit. Is there a bit of a bait and switch there? The cleverness that Sam and everyone else figured out is they could say, look, we can do a market commercial deal where we say, we'll give you commercial licenses to parts of our technology in various ways, and then we can continue our mission of beneficial AI.
The AI graveyard is filled with algorithms that got into trouble. How can we trust OpenAI, or Microsoft, or Google, or anyone, to do the right thing? Well, we need to be more transparent. But on the other hand, of course, our problem, exactly as you're alluding to, is people say, well, the AI should say that or shouldn't say that. We can't even really agree on that ourselves. So we don't want that to be litigated by other people. We want that to be a social decision.
So how does this shake out globally? We should be trying to build the industries of the future. That's the most important thing. And it's one of the reasons why I tend to very much speak against the people who say, oh, we should be slowing down. Do you have any intention of slowing down? We've been very vocal about these risks for many, many years. One of them is acceleration, and I think that's a significant risk that we as a society need to grapple with.
Building safe AI systems that are general is very complex. It's incredibly hard. So what does responsible innovation look like to you? You know, like, would you support, for example, a federal agency like the FDA that vets technology like it vets drugs? Having some sort of trusted authority that can audit these systems based on some agreed-upon principles would be very helpful. I've heard AI experts talk about the potential for the good future versus the bad future, and in the bad future, there's talk about this leading to human extinction. Are those people wrong? There is certainly a risk that when we have these AI systems that are able to set their own goals, they decide that their goals are not aligned with ours and that they do not benefit from having us around, and that could lead to human extinction.
That is a risk. I don't think this risk has gone up or down from the things that have been happening in the past few months. I think it's certainly been quite hyped, and there is a lot of anxiety around it. If we're talking about the risk for human extinction, have you had a moment where you're just like, wow, this is big? I think a lot of us at OpenAI joined because we thought that this would be the most important technology that humanity would ever create. Of course, the risks, on the other hand, are also pretty significant, and this is why we're here.
Do OpenAI employees still vote on AGI and when it will happen? I actually don't know. What is your prediction about AGI now and how far away it really is? We're still quite far away from being at a point where, you know, these systems can make decisions autonomously and discover new knowledge. But I think I have more certainty around the advent of having powerful systems in our future. Should we even be driving towards AGI? And do humans really want it? Advancements in society come from pushing human knowledge.
Now, that doesn't mean that we should do so in careless and reckless ways. I think there are ways to guide this development versus bring it to a screeching halt because of our potential fears. So the train has left the station and we should stay on it? That's one way to put it.
Technology, Innovation, Entrepreneurship, OpenAI, Artificial Intelligence, AI Ethics