ENSPIRING.ai: OpenAI Structured Outputs, Character.AI acquisition, and is it an AI bubble?
The video discussion revolves around the hype and skepticism surrounding AI and its impact on the economy, specifically addressing whether AI companies could potentially bring down the American economy. Panelists, including technologists and AI governance experts, express skepticism about AI being the sole contributor to economic downturns, emphasizing the breadth of the economy beyond AI. They delve into perceptions of AI, especially in the financial sector, and distinguish between the hype and the actual, sustainable applications of AI in business practices.
The conversation transitions to a technical discussion on OpenAI's recent announcement of 'structured outputs', highlighting the shift back to integrating structured data with AI systems. This new feature indicates a practical step towards making AI tools usable in enterprise environments by providing more control over how AI integrates with existing data-driven systems. The panelists explore the significance of this development in making AI feasible for complex business operations, dispelling previous notions that AI alone can handle entire processes end-to-end without structured or traditional support systems.
Key Vocabularies and Common Phrases:
1. spooks [spʊks] - (verb) - To frighten or scare someone, especially in a sudden or unexpected way. - Synonyms: (frightens, alarms, scares)
Wall Street spooks pretty easily and hypes pretty easily, and they're also on a cycle that research certainly is not.
2. moat [moʊt] - (noun) - A deep, wide ditch surrounding a castle, fort, or town, typically filled with water and intended as a defense against attack; metaphorically, any barrier that prevents competitors from advancing easily. - Synonyms: (barrier, obstacle, defense)
You have to know what's your value add and how much of that is a differentiator with high moat so others can't just come in and do what you do.
3. skepticism [ˈskɛptɪsɪzəm] - (noun) - An attitude of doubt or disbelief. - Synonyms: (doubt, disbelief, distrust)
We have uniform skepticism at that position, and I think that's actually what I wanted to get into.
4. macroeffects [ˈmækroʊ ɪˌfɛkts] - (noun) - Broad, overall impact or influence, often used in economics or social sciences to denote large-scale consequences or influences. - Synonyms: (global effects, large-scale impacts, overarching impacts)
And is kind of a popping or at least kind of increasing skepticism around AI having these big macro effects.
5. seismic [ˈsaɪzmɪk] - (adjective) - Of enormous scale or impact; relating to earthquakes or other vibrations of the earth. - Synonyms: (enormous, colossal, significant)
So there's absolutely no confusion about the fact that AI is going to have, is having a seismic impact on the businesses going forward.
6. mundane [mʌnˈdeɪn] - (adjective) - Lacking interest or excitement; dull; related to the ordinary world, rather than spiritual. - Synonyms: (ordinary, banal, routine)
Yes, I think it's a mundane task.
7. heterogeneous [ˌhɛtərəˈdʒiːniəs] - (adjective) - Diverse in character or content; composed of different kinds, mixed. - Synonyms: (diverse, varied, miscellaneous)
Let's go back to the fact that especially if you're trying to mix and match a heterogeneous system, you do need structured output because these things don't know how to talk to each other
8. constrain [kənˈstreɪn] - (verb) - To limit or restrict someone or something, often to prevent them from doing something. - Synonyms: (restrict, limit, curb)
Effectively, what they're offering is for the very first time, model developers are allowed to basically work with their systems to constrain their outputs to match specific schemas that are defined by engineers.
9. schema [ˈskiːmə] - (noun) - A structured framework or plan; in computing, a diagrammatic representation of the structure of a database. - Synonyms: (outline, framework, structure)
Effectively, what they're offering is for the very first time, model developers are allowed to basically work with their systems to constrain their outputs to match specific schemas that are defined by engineers.
10. contradiction [ˌkɒntrəˈdɪkʃən] - (noun) - A combination of statements, ideas, or features which are opposed to one another. - Synonyms: (inconsistency, clash, discrepancy)
So, Tim, hot take on this: this is the first time OpenAI is now appreciating and admitting that the whole workflow, end to end, won't be done by an LLM.
OpenAI Structured Outputs, Character.AI acquisition, and is it an AI bubble?
Wall Street spooks pretty easily and hypes pretty easily, and they're also on a cycle that research certainly is not. Structured outputs is probably the most sexy release of this summer. You're kind of breaking this bucking bronco that just came out of the blue. Does the acquisition of Character.AI make any sense at all? You have to know what's your value add and how much of that is a differentiator with a high moat so others can't just come in and do what you do.
All that and more on today's episode of Mixture of Experts. I'm Tim Hwang, and I'm joined today, as I am every Friday, by a genius panel of technologists, engineers, and more to help make sense of another hectic week in AI land. On the panel today, we've got three guests: Marina Danilevsky, a senior research scientist; Kush Varshney, an IBM Fellow working on issues surrounding AI governance; and Shobhit Varshini, a senior partner consulting on AI for the US, Canada, and Latin America.
All right, so let's just get into it. First story of the week is a big one, but I want to start with kind of a round-the-horn question. Let's just start with a quick yes or no, and it's a very simple question to kick off the discussion, which is: are AI companies going to bring down the American economy? Kush, yes or no? What do you think? No. Shobhit? No. And Marina? No. Okay. We have uniform skepticism at that position, and I think that's actually what I wanted to get into.
So if you've been keeping your eyes on the financial news this week, markets were massively down across the board internationally, and there was a lot of speculation as to why this was the case. People were proposing the unwinding of exotic financial positions, concerns about the Fed not cutting rates. But one thing that a number of people argued was: should we blame AI, like, the hype around AI, for this? And part of this claim was based around the idea that the companies really leading the downturn, and arguably a big drag on indexes like the S&P 500, were tech companies that have made big bets on AI in the last 12 to 24 months, and so I wanted to get the panel's opinion.
And Kush, maybe we'll toss it over to you first: do we buy this as a theory? Like, why should we or shouldn't we believe that AI is a contributor to this downturn, and to a popping, or at least increasing skepticism, around AI having these big macro effects? I'm curious why you said no on the first question there. Yeah, I mean, there are clearly hype cycles with everything, but I think the economy has a lot more to offer. It's a very broad-based sort of thing; AI is kind of the cherry on top or the icing on the cake. Yes, it affects perception, but my view is that it is really about the fundamentals at this point. I think that will change over time, but not right now.
Well, I think part of this is also following on the tails of, and we've been talking about it for the last few episodes, these reports coming out of banks and other financial firms raising some skepticism around the excitement around AI. So there's the Goldman Sachs one that we talked about a few weeks back, and also the Sequoia report that some people might have seen. It is true, though, that the tech companies have made genuinely a really big bet on the market for AI. And I guess I'm curious, maybe, Shobhit, I'll throw it to you: are you seeing clients following those jitters? Are they reading these reports and saying, well, maybe AI is not providing what we thought it would? Should we be a little bit more cautious about how we make these investments?
So I don't think the clients have to. And I'm talking about the Fortune 100 and Fortune 500 companies. They don't have to read these reports to realize that in certain areas AI has been overpromised, and in certain areas it is underutilized. Right? So there's absolutely no confusion about the fact that AI is going to have, is having, a seismic impact on businesses going forward. No CEO can say that the next five years are not going to be massively impacted by what AI can do. It's a question of how do you apply AI surgically in the processes, and how do you think about a strategy for data that then leads to an AI strategy that then delivers value for you?
The conversation has changed more into: all right, after experimenting for two years, we have a good sense of where AI and GenAI are working well. We now need to make sure that we have a good mechanism to figure out the high-value unlocks in the business. Appreciate that it's a combination of AI, automation, and generative AI. It's not all GenAI handling the entire process end to end. And we need to make sure that our data estate and the people and the processes are aligned to unlock that value. So I think there's a significant appreciation of the value it can bring, but also of the fact that it's a journey, and you need to make steps along the way to make sure that you're getting that value unlocked. That's very clear to all my Fortune 100 and Fortune 500 clients.
Yeah. And I think that's maybe one thing our listeners would benefit a lot from your expertise on, Shobhit: you kind of put out the idea that there are underhyped areas of AI. Right. And I'm curious, when you say that, whether you've got particular areas in mind where you're like, this is where businesses aren't looking. There's a lot of hype in the space, but this seems to be where some of the hidden gems are. I'm curious if you can speak to that a little bit.
Yes, I think it's the mundane tasks. It's stuff like: how do you make sure that every employee across the organization can experiment in their day-to-day workflows with AI, with generative AI, in a very secure and governed way? Within IBM Consulting, for example, we have 160,000 consultants who wake up in the morning and do all kinds of varied tasks.
There's a small subset of people who are AI gurus, right? They feel that if it's been 20 minutes since Llama 3.1 landed and we have not had it running locally, you are an embarrassment to society. There's a small portion of those, but the other 85% of consulting, they're doing things like: I'm going to do code creation; I'm a tester; for the last eleven years I've been doing marketing campaigns; I'm going to do finance workflows. So I'm going to get an invoice, I'll marry it against the contract and the purchase order, and I'm going to approve it or disapprove it.
So those kinds of mundane workflows have a human in the loop, and you need to figure out, just as Excel got embedded in those workflows, you're now at the point where AI and generative AI get embedded. Everybody figured out how to use Excel to improve their day-to-day workflows. We're at that same point today. So you need to get to a point where every IBM consultant has one; we call them Consulting Assistants, as an example. It could be a Copilot from Azure, could be Amazon Q, the Googles of the world. But you need to democratize people actually messing with the day to day. Figure out that, oh, this email that I write 1,800 times a month can be automated. And that's the value unlock. Get your end employees to start experimenting in a governed way, so that Kush doesn't have a heart attack. Just make sure we're doing this in a way that we don't get ourselves into trouble.
Yeah, I think what you're describing, Shobhit, has in some ways ended up being the 800-pound gorilla of the AI world. You know, I love the joke that you start OpenAI because you really want to create AGI, but slowly but surely the gravitational well of being B2B SaaS and offering that as a service is really where the gigantic amount of money is. Marina, I did have a question for you based on what Shobhit just talked about here. I know, Shobhit, you made the distinction from people saying, okay, if you can't implement Llama 3 on day one and revolutionize all your business processes in the first day, you're a waste of society. I'm kind of curious.
So there's one standout company here, which is Nvidia, which is hardware, and that is a company that has been hit in the stock market rather hard. And I think was one of the examples that people said, see, this is why AI is hyped. Do you buy that? I mean, is Nvidia indeed the most valuable company in the whole world? And how should we think about sort of hardware in this picture? Like, will hardware continue to be sort of the most valuable kind of piece of this AI pie, at least as far as the stock market is concerned.
I mean, it's a dependency. So just talking in pure engineering terms, you are pretty much tied to it because it's very much a dependency. I will say, as far as Nvidia going up in value, crashing in value: Wall Street spooks pretty easily and hypes pretty easily, and they're also on a cycle that research certainly is not. They want to know: all right, Q1, what do you got? Q2, what do you got? Q3, what do you got? It's not the rate at which research actually happens.
So when you have preliminary results, Wall Street will get overexcited, and then the results next time are not as good, and then they get overdepressed. And we actually have the same thing in research, where I'm like, I can't guarantee you that the research breakthroughs are going to happen in three months on the dot. It's not "you've got to deliver your Q2 breakthrough, Marina." Right? I can't promise my Q2 breakthrough. So I would also say that this is, to some extent, a mismatch between the schedule of Wall Street and the schedule of research in a new area, in an area that we don't yet understand very well. And that's, I think, a lot of what we're actually seeing here.
Yeah, that's fascinating. It's almost like you're saying we should not be looking to the stock market to judge the value of the AI space, in part because the market doesn't know how to value it at the moment. I don't think it's very clear yet. I don't know, Kush, maybe you disagree, but I actually don't think we know very well yet how to value AI properly.
I'm with Marina on this, and I don't think that the common stock investor understands the impact, especially in the enterprise space, and what it can do. We have been just dunking on AI stocks, saying, hey, you are leading to the downfall of the economy. But look at the positive it has done. It's also contributing insanely to the overall economy. You should give AI enough credit for lifting the entire stock market up as well, not just look at the trailing week and say, hey, the market is down X points because of the large Nvidia swing.
Just look at the world we live in. In the last few months, Nvidia has swung a trillion dollars in market cap. A trillion. Just pause and realize how much of an impact that's having on people. Right. So it is people reacting to, oh my God, I don't want to miss out, but also not knowing at what point you are investing in the fundamentals, or whether you are pulling out of the stock too early. Even massive firms like ARK Invest ended up missing the boat on Nvidia, losing a billion dollars of opportunity there. Right. So you need to understand the fundamentals and stay long in the market, versus reacting to these quarterly ups and downs.
Yeah, I don't know. You're not arguing this, but I think you could almost make the argument that the biggest meme stock in the whole world is Nvidia. It's not GameStop. It's not anything like that. I was just going to agree with Marina. The fact is, and this is what Shobhit was saying as well, this is a long game. We don't really know how to value things yet. It's not like some commodity where you can grab it and hold on to it and see what it's doing.
So I think we'll get better. Just like we've had trouble valuing data as well, valuing the models and what we can do with them is going to be part of this too. So I'm going to move us on to our second segment of the day. OpenAI this week announced a new feature they call structured outputs. And this is huge, although it might not seem like it on the surface for people who are not in the day-to-day work of AI. Effectively, what they're offering is that, for the very first time, developers are able to work with their systems to constrain model outputs to match specific schemas defined by engineers. And this is a little bit nerdy, but I think it's actually worth walking through the technical points here, because it's one of those areas where, if you dive a little into the technical details and understand what's going on, you may recognize why, out of a summer of lots and lots of AI announcements, this may actually end up being the biggest announcement of the summer in some ways.
So I'm going to try to explain this, and then, Marina, you'll keep me honest. You should be like: that's completely wrong, Tim, you've completely misunderstood what they're trying to do. The way I understand it is that language models are, of course, very powerful. They can do all sorts of remarkable things, but the problem is that they output in nondeterministic ways. They produce outputs that are difficult to constrain and standardize.
And this has been a really tough problem because you have to take AI and then you have to connect it to all these other traditional systems that are expecting structured data. Right? Like there's a computer just being like, oh, well, I'm expecting a table that has the following elements within it. And it's been very hard to kind of like integrate language models with that. And is what OpenAI is saying here that you can finally, for the first time, do that reliably? Correct me if I'm wrong, I'm just kind of thinking through this.
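For readers who want to see what this looks like in practice, here is a minimal sketch of the kind of request body involved, assuming the Chat Completions API with a `json_schema` response format. The model name, schema, and field names are illustrative assumptions (borrowing the healthcare-coverage example discussed later in the episode), not details from the announcement itself.

```python
def build_structured_request(user_text: str) -> dict:
    """Build a chat request whose response is constrained to a JSON schema."""
    # Hypothetical schema for the coverage-extraction use case; fields that
    # the model cannot determine are allowed to come back as null.
    coverage_schema = {
        "type": "object",
        "properties": {
            "in_network_amount": {"type": ["number", "null"]},
            "out_of_network_amount": {"type": ["number", "null"]},
        },
        "required": ["in_network_amount", "out_of_network_amount"],
        "additionalProperties": False,
    }
    return {
        "model": "gpt-4o-2024-08-06",  # illustrative model choice
        "messages": [
            {"role": "system", "content": "Extract coverage amounts in dollars."},
            {"role": "user", "content": user_text},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "coverage_extraction",
                "strict": True,  # constrain decoding to exactly this schema
                "schema": coverage_schema,
            },
        },
    }

request = build_structured_request("In-network deductible: $1,500 ...")
print(request["response_format"]["type"])  # json_schema
```

The key line is `"strict": True`: instead of coaching the model in the prompt and hoping, the engineer declares the shape of the output up front.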
The thing I'm actually going to push back on is this whole "finally, for the first time" thing. This is not for the first time. Before, we were all about structured outputs, semi-structured outputs. What we used to say you do with unstructured data, and this is work that I've done for years, is you try to turn it into something more structured so that it's features, and you can feed it into a classifier, feed it into ML, and go from there. Then everybody said: oh, foundation models. All right, now it doesn't matter, no more structure is needed.
No more data is needed. Nothing is needed. We're just going to have unstructured data do everything. You go and work with that for a while, and you go: no, guess not.
All right, we're going to walk it back a little. Let's go back to the fact that especially if you're trying to mix and match a heterogeneous system, you do need structured output, because these things don't know how to talk to each other. So I'm going to pretty strongly push back on the "for the first time" and go back to: now that we're trying to be practical about it, you need to impose a bit of structure.
I would also say this ties in with the success of code models, where we see that there already is a lot more structure imposed on what kinds of things can go in and come out. There are some lessons being learned there, again going: oh, maybe we don't do just generally unstructured text, and we're going to go back to having a bit of a mix. Kush, would you agree with that? That we're kind of back? Yeah, no, I mean, I think that's exactly right.
I mean, one way to look at it is, you're kind of breaking this bucking bronco that just came out of the blue in the last couple of years and bringing it back to where it should be. Right? The control, the governance, all of that is part of making these things practical. And another way to look at it is: one good thing about these language models is that they're very creative. They're coming up with all sorts of different things, but it's really a trade-off, safety versus creativity. And the control, the constraint, brings us back to that safety aspect.
And if you're inspiring a poet, go ride that bronco, it's all good. But for all of the enterprise use cases that we care about, the ones that are going to make the productivity differences and all that sort of stuff, that extra control is where it's at. Yeah, for sure.
So, Shobhit, am I just being an OpenAI shill here, really hyping this feature, when I guess Marina is telling us this has all been said and done before? They're just selling something that everybody has known how to do for a long time.
So, Tim, hot take on this. This is the first time OpenAI is appreciating and admitting that the whole workflow, end to end, won't be done by an LLM. They have admitted, by releasing this, that at step number three somebody is going to call an LLM and expect it to behave in a structured manner, so it can be a part of a team that does an end-to-end flow. Other aspects will be automation, RPA; there'll be some regular AI; there'll be just plain old API calls. But now the LLM, they have admitted by releasing this, is down at a subtask level, versus being the LLM that's going to do the entire process end to end, right? So I think it's a really hot take on what they're doing for practical deployments.
For me in the field, we are the launch partners of OpenAI and whatnot. We do a ton of OpenAI with clients in our workflows. Last week, on Monday, actually, we were working with a large healthcare client where we are reading reams of different documents and stuff, and we're extracting things from those documents. So if I'm talking about my healthcare coverage, I need to know what's in network, what's out of network, what's family coverage, what's single coverage, and so on, so forth.
So we're using an LLM to extract things out from it. Every time we run this against our rubric for checking accuracy, quite often it responds with a blurb instead of giving me the in-network and out-of-network amounts. The way we used to solve this historically, we would phrase the questions carefully and then provide some coaching, saying: just respond with the actual dollar amount. The problem there used to be that it responds back with, say, 14.9, and in three out of ten cases it'll forget to put "million" with it. There are practical issues you run into when leveraging these large language models. And then we're like, okay, fine, just give me the entire thing, and then, to Marina's point, I'll just use a small regex somewhere to extract what I need from it and plug it back in.
That was a horrible way of doing things in production. Yeah, that's awful. Having a commitment now saying this is the JSON I'm going to get, and if you can't fill that number, if you don't know what the single coverage is for out-of-network, it'll be null, it'll be blank. Then I can do something in a structured manner, raise some alerts, and have a workflow.
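The workflow Shobhit describes might be sketched like this: with a strict schema, a value the model cannot find comes back as null, which downstream code can detect and route to an alert instead of fishing through a free-text blurb with a regex. The field names and JSON shape here are illustrative assumptions.

```python
import json

def check_coverage(raw_response: str) -> list[str]:
    """Parse a schema-constrained response and flag fields the model left null."""
    record = json.loads(raw_response)
    alerts = []
    for field in ("in_network_amount", "out_of_network_amount"):
        if record.get(field) is None:
            # The schema contract guarantees the key exists; null means "unknown",
            # so we raise a structured alert rather than failing on a parse error.
            alerts.append(f"review needed: {field}")
    return alerts

# A complete extraction passes cleanly...
print(check_coverage('{"in_network_amount": 1500, "out_of_network_amount": 4000}'))  # []
# ...while an unknown value surfaces as an alert, not a parsing failure.
print(check_coverage('{"in_network_amount": 1500, "out_of_network_amount": null}'))
# ['review needed: out_of_network_amount']
```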
I think it's brilliant that they're allowing you to do this. This, combined with the price drop we got, a 50% decrease on inputs and 33% on outputs, makes it very, very easy for us to plug it in. The GPT-4o mini price is just rock bottom. It's low; it's very inexpensive to deploy mini, even the fine-tuned versions of mini. Now they're allowing you to fine-tune these models very, very easily and have a structured output around them. So they've understood that instead of a generic, top-down "I'll take care of the entire thing," it goes all the way down to a subtask level. It has to be fine-tuned for that task, it has to be super inexpensive, and there has to be a good contract on the input and output structure coming out. In other words, a good tool to be used in the enterprise.
So, super interesting takes on this. It definitely went in a direction I wasn't expecting, but I think it's very helpful in thinking through why OpenAI did this. The final aspect of this I want to touch on: it was very funny, as someone who is a software engineer kind of turned into a lawyer, I read this very long blog post about structured outputs, and then at the very end it's like: oh, by the way, it's not eligible for zero data retention. Which I think was a very interesting part of the announcement. Normally the promise is that OpenAI will not train on any data that you send in through the API on the enterprise basis, but in this one case, if you send in a schema, they're going to train on that.
And I guess for our listeners, I think it'd be useful to hear some intuitions for why OpenAI sees this data as so uniquely valuable that they're going to say: we've got this general policy of zero data retention, but for this tiny little segment we're going to cut out a hole, and if you send us your schemas, we definitely want to train on that. Kush, I see you nodding, but I don't know if you want to speak to why they would do something like this.
Yeah, I mean, I was reading the announcement as well, and I think they're taking two different technical approaches to make this work. One is just training on more and more of these schemas. The second is constrained decoding, using a context-free grammar to really make sure that what comes out matches the schema.
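As a rough intuition for the constrained-decoding half: at each generation step, every candidate token the grammar does not allow is masked out before the model picks. The toy below fakes this with hand-written score tables and per-step masks; real systems compile a context-free grammar derived from the schema into such token masks.

```python
def constrained_decode(scores_per_step, allowed_per_step):
    """Greedy decode, but only over tokens the grammar permits at each step."""
    output = []
    for scores, allowed in zip(scores_per_step, allowed_per_step):
        # Keep only grammar-legal tokens, then take the highest-scoring one.
        legal = {tok: s for tok, s in scores.items() if tok in allowed}
        output.append(max(legal, key=legal.get))
    return output

# Made-up model scores: left unconstrained, it would happily emit chatty prose.
steps = [
    {"{": 0.1, "Sure": 0.9},
    {'"amount"': 0.7, "!": 0.3},
    {":": 0.6, "?": 0.4},
]
# The grammar only allows JSON-shaped continuations at each step.
masks = [{"{"}, {'"amount"'}, {":"}]
print(constrained_decode(steps, masks))  # ['{', '"amount"', ':']
```

The point of the sketch: the schema guarantee comes from the mask, not from the model's preferences, which is why the output can be contractually reliable.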
So on the first of the two, it's really hard to get this sort of variety of what kind of schemas are going to be out there. This is not something you can just download from the web. I mean, in some of our work, we also, I mean, look at very unique enterprise sort of policy documents or other stuff like that, and it's just not easy.
Like, I was talking with one of my group members yesterday. We were trying to figure out what are like different policies for guidelines for different professions. And I was looking like, can I get the New York State barber license guidelines? Like, what does a barber need to do to do their job? And like, there's tons of stuff like that that is really not out there.
So, I mean, just the uniqueness of it is the key, I think. I think that's absolutely right. And that will become the increasing battle, it seems like, right? As all of the easy-to-get data is now accessible, the question now is: who's got access to the very hard-to-get data?
And it's kind of, these schemas are, they're valuable tokens, right? They're unique tokens in a lot of ways. So this has been a big struggle for us with our clients in enterprise settings. We go through enterprise security governance when we take a new product and we have to make sure that it's being used in a particular way, everybody signs off on it and so on, so forth. Right? So we're struggling with this, with our enterprises.
When you outsource your API calls to a third party, then every time the API calls change or they do something differently, or now in this case, there's the retention issue with the schemas. Right. You need to go back through the whole process. And I don't think enterprises have a good mechanism to understand, capture and then act on each one of these incremental updates that happen.
So it scares me a little bit that enterprises will end up approving a product in a particular state, but it so rapidly evolves with features and stuff that you won't be able to go back in time and say, I have to, this small incremental thing has to be done differently. The data scientists will start getting super excited about these function calls and about these structured outputs and start using it. And then that's where Kush and team are going to come in and say, guys, time out.
There has to be a good discipline around how you govern incremental updates that are happening to these so you don't get yourself into trouble. So I think that's a very unaddressed issue with at least my enterprise clients.
So I'm going to move us on to our final story of the day. It was announced last week that Noam Shazeer, the CEO of Character.AI, was going to rejoin Google along with a core team from his company, and also that Google was going to acquire a license to all Character.AI IP. This is widely seen, though it's disputed, as ultimately an acquisition of Character.AI, which had raised something like $150 million and was basically building personalized companion AIs. And so I really want to go into this story, because it's very interesting and part of a trend of acquisitions in the space, if you will, that I think gets us thinking a little bit about how this market is going to evolve and what we really anticipate from AI startups over the next 12 to 24 months.
Kush, I wanted to turn to you first, is why is a company like Google interested in a company like character AI at all? It feels like Google's got all the resources in the world to do all the AI. Why are they acquiring companies at all at great cost? It feels like couldn't they just build kind of a character product on their own? And we'd love to get your thoughts on what do you think is motivating this in the first place.
Yeah, I think that's a similar question to: why does IBM Research exist versus why don't we just... Tell me a little more about that. Yeah, we just keep acquiring a lot of startups. I think there's always going to be a balance between organic growth and the acquisition sort of thing.
There's always a spark of some idea. You can't assume that you're going to have all of them. And I mean, in these cases there is something unique, there's something where there's a market that they've touched on and something that I think only a startup can maybe tap into because they have a different pulse of the scene. So I think it makes sense for a company like Google to have a mix of ways that they grow. Yeah, for sure.
To push you a little bit further on that: do you think there's some kind of complement here? What's the angle you think Google is trying to chase after? Because, I mean, Google is a search company, right? Ultimately, this feels very consumer-y in some ways, what they're trying to do.
Yeah. I mean, maybe they don't think they are a search company going forward. I don't know; they're edging into other things. But I think once you get something interesting, something exciting that draws customers to you, draws consumers to you, then you can keep them and get them into other stuff. So, yeah, as part of a pivot, for sure.
So maybe we could take the other angle at this story, I think, which is you can see it from the perspective of the acquirer. Why would Google do something like this? But I think it's also worth investigating it from the perspective of the startup. Marina, there was a bunch of commentary online where people were saying, look, you've seen adept go through a similar transaction. There's another company called inflection that went through a similar transaction. These are companies that have raised an enormous amount of money and by all accounts would be very successful, maybe some of the most successful startups in the AI space, but as yet, the founders are choosing to sell effectively. They're choosing to go and join the big tech companies.
Do you have a theory for that? I mean, if I'm sitting there, I'm Noam Shazeer, I've raised $150 million. That's certainly more money than I've ever raised. What is motivating these kinds of founders to say, okay, actually I want to throw in with the big companies rather than trying to make it on my own? And does this suggest problems in the startup market, do you think?
I mean, even $150 million can be burned through pretty quickly if you're doing a whole bunch of your own training. What is $150 million anyway? Yeah, exactly. Or there might be a case here where you want a pre-baked user base, or access to a whole bunch of resources, and a company like Google or a company like Meta is going to be really quite good for that. And, again, potentially other people to collaborate with.
I'll also second what Kush said, which is: you've had one or two or three good ideas; it doesn't mean that you're going to have 40. And there really are a ton of extremely interesting, smart people working in these companies. So it may be that there's a desire to do that as well, and to have that partnership be a lot closer in order to make it happen.
Yeah, zooming out to the macro level: Shobhit, what do you think this presages for startups in the AI space in general? Are you seeing more AI startups over time? Because there's almost one way of reading this, which is: if even these companies that have raised so much money can't make it independently, no one can, and we're about to see a lot of consolidation in the AI startup space. Tim, I think the core values, the fundamentals, haven't changed. You can't have a thin wrapper around an OpenAI API call and expect it to keep growing more and more. You do realize that the intellectual property you've built is what people are going to pay for. And the talent you have assembled, that particular team, that's what is golden.
Big companies will also get very creative to work around any of the antitrust rules and things of that nature, right? So in this case they're not acquiring the company; they're hiring some people, or licensing some technology, and so on and so forth. So you can see there's some motivation not to just outright acquire it. But on the flip side, just like in any startup environment, you'll also see companies like Wiz, which Google was trying to acquire, and Wiz walked away from a $23 billion offer. I'm just laughing because that's a literally hilarious amount of money, right? That is insane.
And Assaf, who is the co-founder of Wiz, wrote a very humble letter to all the employees explaining why they weren't getting rich that day. Essentially, he explained why he wasn't taking the offer: it's a very flattering offer, but here are the reasons we believe going IPO is a better value add, and so on and so forth. Historically, we have seen a lot of hits and misses: Yahoo trying to sell itself to Google, or Netflix to Blockbuster.
All of these have been reminders that you have to know what your value add is, and how much of that is a differentiator with a high moat, so others can't just come in and do what you're doing. It takes a while to understand the rhythm of where you lie in the competitive landscape when you're trying to forecast. I think we put undue pressure on the founders, who are just passionate about building a product, but now, all of a sudden, we surround them with venture capitalists who have different objectives than what...
You mean, I need to build a business? Yeah, I think they need to bring back Silicon Valley episodes in today's world with LLMs. That's right. Yeah, for sure. I saw this great Twitter thread on what a modernized Silicon Valley would look like, and everybody's in AI, basically.
It goes to a point that Marina raised earlier in our first segment, though: it almost feels like this is the micro version of the market not being able to price these startups properly. In a lot of these cases, big companies like Google are ultimately acquiring the talent rather than the product. Character you can maybe debate, because it actually had a big install base, but at the core of it is simply: here's a team of people who seem to be able to get what they want out of the AI, and that ends up being huge value that's almost separate from whether you had a blockbuster AI product release.
Yeah, it goes to these interesting questions I'm thinking about now, like: how do you actually value these companies? It's just so unclear in such a fluid environment. Any final thoughts on this? Super interesting. And to argue against myself, this is also the same week we saw a bunch of top leadership leave OpenAI. It's not necessarily all consolidation; it's possible that people are moving between big companies and also creating new startups of their own.
Any final thoughts to round this out for today? Just one conversation I was having with my brother-in-law last week, not related to this, but about the difference between running your own business versus doing a job at a big company, and the lifestyle issues there. To the point you were making before, Tim, about wanting to make one product versus building a business: maybe a lot of the folks getting into this right now are not in it for that lifestyle, or for that business-building way of going about it.
So maybe it's just a way for them to return to their natural state. That could be driving it as well. A lifestyle issue. Yeah, I believe that for sure. It's, I mean, personally crazy to do a startup. I had a friend who was a founder who said it's literally an irrational act to do a startup.
Well, great. On that note, no shade to anyone else who has been on Mixture of Experts as a panelist, but I have to say this is my favorite panel. The Marina, Kush, and Shobhit power trio basically gets us the best conversations every time. So I appreciate all three of you coming on the show. And for all you listeners, thanks for joining us this week. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. And we will see you same time next week.
Artificial Intelligence, Business, Technology, AI Hype, AI Economy, AI Governance, IBM Technology