ENSPIRING.ai: Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses

The video features an in-depth discussion with Sridhar Ramaswamy, the CEO of Snowflake, exploring the current state and future of AI, particularly in enterprise applications. Ramaswamy shares his insights on Snowflake's efforts to integrate AI into its data cloud services, emphasizing the importance of reliability and ease of use in deploying AI solutions like Document AI and Cortex AI. He highlights the significance of making data easily accessible and usable for enterprise customers without the need for heavy software engineering projects.

Ramaswamy discusses Snowflake’s right to win in the AI arena by focusing on integrating AI functionalities seamlessly into its platform, claiming that Snowflake simplifies AI for its customers, making it a cost-effective and secure option for data management and application development. He shares his observations on broader AI trends, emphasizing the wide range of applications being implemented, from unstructured data processing to advanced sentiment detection. The conversation also touches on AI breakthroughs and the potential impact of future advancements like GPT-5.

Main takeaways from the video:

💡
Snowflake aims to democratize AI usage by embedding it into its platform, offering seamless access to AI features for analysts.
💡
Cortex AI and Document AI are significant innovations by Snowflake, designed to transform data interactions and simplify AI applications.
💡
Ramaswamy emphasizes speed and quality in software development, encouraging a focus on practical application of AI rather than mere technological advances.

Key Vocabularies and Common Phrases:

1. perk up [pɜrk ʌp] - (verb) - To become more cheerful or lively. - Synonyms: (brighten, enliven, cheer up)

But that's the thing that makes every Snowflake customer perk up and go like, I want that.

2. interoperable [ˌɪntərˈɒpərəbl] - (adjective) - Able to function with other systems or products without any restricted access or functionality. - Synonyms: (compatible, cooperative, functional)

...which is basically an interoperable storage format for cloud storage.

3. purview [ˈpɜːrvjuː] - (noun) - The range of interest, activity, or authority. - Synonyms: (scope, range, field)

Then the early applications that we have built, like Document AI, are a very natural next step in the progression of what people want to do, which is, hey, I want to act on the data that is within Snowflake's purview.

4. leverage [ˈlɛvərɪdʒ] - (verb) - To use something to maximum advantage. - Synonyms: (utilize, exploit, apply)

We are leveraging our strengths in data to make AI products better.

5. omniscience [ɒmˈnɪsɪəns] - (noun) - The state of knowing everything. - Synonyms: (all-knowingness, total knowledge, infinite wisdom)

Part of what I feel we have implicitly accepted with ChatGPT is it's sort of like, is it omniscience?

6. cognitively [ˈkɒɡ.nɪ.tɪv.li] - (adverb) - In a manner relating to the mental processes of perception, memory, judgment, and reasoning. - Synonyms: (mentally, intellectually, thoughtfully)

The analysis that they do is constrained. It's pretty much, if a metric is wrong, go slice it by ten different dimensions.

7. ecosystem [ˈiːkoʊˌsɪstəm] - (noun) - A complex network or interconnected system. - Synonyms: (system, network, environment)

And our long term bet, Pat and Sonia, is that ecosystems move upstream.

8. paradigm [ˈpærəˌdaɪm] - (noun) - A standard, perspective, or set of ideas. - Synonyms: (model, pattern, example)

I tell people something as ridiculous as copying, I don't know, an address like from your calendar or a piece of email over to Uber.

9. facade [fəˈsɑːd] - (noun) - A deceptive outward appearance. - Synonyms: (front, veneer, mask)

Consumer choice is fiction in a whole bunch of things that we do. We eat what's put in front of us and we will search with the default search engine.

10. virtuosity [ˌvɜːr.tʃuˈɒs.ə.ti] - (noun) - Great skill or ability in something. - Synonyms: (expertise, skillfulness, proficiency)

And my firm belief all through my life is that virtuosity trumps strategy all day long.

Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses

The product that makes even the people that go, I have GPT-4, I have an army of software engineers. The thing that even they struggle with is things like a reliable talk-to-your-data application, because even with GPT-4 out of the box, you end up getting 45-odd percent reliability, meaning it gets half the questions wrong when it tries to answer. We are well in the nineties, and we are racing to get to 99% reliability on talk-to-your-data applications. Obviously we restrict the domain and turn this into more of a software engineering problem than just a pure AI model problem. But that's the thing that makes every Snowflake customer perk up and go, I want that, because even the people with the money and the resources to spend on software engineering teams very quickly realize that this is a wall that they are likely not going to break through.

Today we're excited to welcome Sridhar Ramaswamy, CEO of Snowflake. Snowflake is one of the most important enterprise companies in the public markets. It's the default cloud data platform. But today the question of what role Snowflake has to play in the world of AI looms large. Sridhar is somebody we've known for a couple decades. He actually started on the very same day as our partner Bill Coughran at Google, back in April of 2003. We backed Sridhar and his own startup, Neeva, which was an AI-driven search engine. Snowflake acquired Neeva, which is how Sridhar became the successor to Frank Slootman. Rarely have we encountered somebody who is as in the weeds on the technology but also as commercially savvy as Sridhar. He will join us today to talk about what AI means for Snowflake, the importance of safety nets, the open source community, the competitive landscape, and the practical applications of AI that he's seeing in the enterprise through his lens as CEO of Snowflake. We hope you enjoy.

All right, Sridhar, we're excited to have you here with us today. You're a technologist by trade. You've spent a lot of time in the consumer world, and you are now at the helm of one of the most important enterprise companies of our generation. We have a lot that we want to know about enterprise AI, what Snowflake is up to, some of your predictions on the world of AI. Before we jump in, though, just to level set, can you give us a couple words on your personal background? And then, just for people who aren't familiar, which is probably not a lot, but just for fun: who's Sridhar, and what's Snowflake? Let's start there.

That's great, Pat, Sonia. Super excited to be here at iconic Sequoia, home to many, many legends I admire. Yeah, I'm a computer scientist by training. Early career as an academic. I joke to people that I'm a reformed academic, because I wanted to do things with more impact. Super lucky to be an early part of Google, where I joined one of the greatest businesses ever invented by humanity, which is the search ads business. I ran that for close to a decade, all of ads and commerce at Google for five years, and helped grow that business from a billion and a half to over $120 billion in revenue. And then, funded by Sequoia, did an ambitious startup called Neeva, which wanted to modestly rethink what search meant, before getting acquired by Snowflake and becoming its CEO. Snowflake is the AI data cloud. Our core thesis is that a cloud computing platform that puts data at its center is going to be way better for enterprise customers to act on data than a generic cloud. And AI, of course, we think of as a transformational technology that is going to change every aspect of how data is stored, how it gets moved around, and of course, how it's accessed. We have over 10,000 customers, made $2.6 billion last year. But at the center of everything: enterprise and data. That's a super quick blurb.

Perfect, thank you. You have 10,000 or so customers. I know you've met at least 100, probably hundreds of them, since you took over. I've met hundreds of them by now. There you go. I'm guessing you have a pretty decent read on what's going on in the world of enterprise AI. So maybe we'll just start there: what's going on in the world of enterprise AI? What are you seeing at your customers? First of all, people get that this is going to be transformational. You know, lots of technologies have skeptics. I'm sure you've run into folks who are like, ah, mobile, it's not going to be a thing; this browser, like, so lame. It takes a while for people to absorb.

I think what's different about AI, first and foremost, is people are like, I get what this can do. I think some of the power is just, honestly, looking at the magic that ChatGPT is. Anyone that has interacted with it, asked it to write a poem, asked it to create an image, knows, like, wow, this is something that's very special. So the level of awareness is incredibly high. And we have thousands of customers that are in various stages of implementing AI solutions. They span the gamut from people like Bayer that are very excited by the idea of giving business users access to business data without going through an elaborate process: you need an analyst, you need a BI tool, you need blah, blah, blah, you need a week before a change can be made. They're like, I just want to put data into the hands of people that need it right now.

But we also have dozens of people that are using AI as a transformation engine. So, for example, if you have unstructured data, whether it's an image or, let's say, a transcript, previously you had to run a software engineering project to figure out what's this image about. Now you feed it into a model, ask it a question, and you get the answer. People are super excited by things like that. We have a product called Document AI, which extracts structured information from documents, say, contracts. All of us have contracts sitting around in our company folders that have all kinds of magic numbers that ideally you want to do analysis on. So there's a wide variety of cases that people are implementing and sending into production.
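Document AI itself is a trained model whose internals aren't public; as a toy illustration of the task it automates (turning free-text contract language into a structured record you can analyze), a regex-based sketch might look like the following. The field names and patterns are invented for this example:

```python
import re

def extract_contract_fields(text):
    """Toy extractor: pull a few 'magic numbers' out of contract-like text.

    This only illustrates the task (unstructured document -> structured
    record); the real product uses a trained model, not regexes.
    """
    patterns = {
        "party": r"between\s+(.+?)\s+and",       # first counterparty
        "value_usd": r"\$([\d,]+)",              # first dollar amount
        "term_months": r"term of\s+(\d+)\s+months",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        record[field] = match.group(1) if match else None
    # Normalize the dollar amount to an integer for downstream analysis.
    if record["value_usd"]:
        record["value_usd"] = int(record["value_usd"].replace(",", ""))
    return record

sample = ("This agreement between Acme Corp and Widget LLC has a term of "
          "24 months and a total value of $150,000.")
print(extract_contract_fields(sample))
```

Once every contract is a row like this, the "analysis on magic numbers" he mentions becomes an ordinary query over a table.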

But I would say stuff at the bottom, which is how do you transform data more effectively, more flexibly, and stuff at the top, which is how do you make data easily available to all kinds of business users in new ways, in interactive ways. I would say that's the sandwich in terms of what people are wanting to do with data. Can you say a couple words on Snowflake's right to win? Some of the things you mentioned, like data transformation, for example, feel like they are very close to the core business of Snowflake. But then there are some things that are maybe a bit further afield. If somebody wants to deploy an enterprise agent of some sort, they can use Snowflake to do it.

But what's Snowflake's right to win in that situation? Can you just say a couple words about how Snowflake fits into this overall landscape, and sort of the right to win? So, first and foremost, the basic approach that we took to AI, sort of enabling or infusing AI into Snowflake, is that it should be an accelerant for everything that you do with Snowflake. That's what Cortex AI is. It's a model garden. But it's more than that. Snowflake prides itself on super tight integration of its various product features. This is not another service that's part of Snowflake; it's built into Snowflake. This means that any analyst that has access to SQL has access to AI. It's a massive democratizing mechanism.

Then the early applications that we have built, like Document AI, are a very natural next step in the progression of what people want to do, which is, hey, I want to act on the data that is within Snowflake's purview. By both expanding the data that Snowflake has access to, via things like Iceberg, which is basically an interoperable storage format for cloud storage, and then providing things like Document AI, we just make a whole bunch of AI applications, that previously used to be software engineering projects, into two commands that an analyst can issue. And so our first lens very much is that AI should become easy, trivial, for data that is sitting in Snowflake. 100%, there are going to be applications that are cutting edge, that are going to involve many, many different services.

But the angle that we bring to all of those customers is: we make reliable AI. And there's a topic that we can get into. So for example, I tell people, you have no business believing the raw output of a language model for anything. You can't actually do any business with that, because it's ungrounded. It doesn't understand truth from falsehood, doesn't understand authority. So we make things like creating a grounded chatbot, again, as I said, two commands, not a software engineering project. Similarly with Cortex Analyst, which is our talk-to-your-data API: we bring the full power of Snowflake. We know everything about the schema, all the queries that have been run on the schema, the semantic context on the schema. We can produce a reliable application that others are going to struggle to create. So we are leveraging our strengths in data to make AI products better.
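Cortex Analyst's actual architecture is not public, but the idea described here, grounding a text-to-SQL model in the schema, past queries, and the business's own semantic definitions, can be sketched as assembling all of that into the model's context. Every function and field name below is invented for illustration:

```python
def build_grounding_context(schema, recent_queries, semantics, question):
    """Assemble the context a text-to-SQL model would be grounded on.

    schema:         {table_name: [column, ...]}
    recent_queries: representative SQL previously run on these tables
    semantics:      business definitions (e.g. what 'revenue' means here)
    """
    lines = ["-- Tables and columns:"]
    for table, columns in schema.items():
        lines.append(f"--   {table}({', '.join(columns)})")
    lines.append("-- Definitions the business actually uses:")
    for term, definition in semantics.items():
        lines.append(f"--   {term}: {definition}")
    lines.append("-- Representative past queries:")
    lines.extend(f"--   {q}" for q in recent_queries)
    lines.append(f"-- Question: {question}")
    return "\n".join(lines)

context = build_grounding_context(
    schema={"orders": ["id", "amount_usd", "ts"]},
    recent_queries=["SELECT SUM(amount_usd) FROM orders"],
    semantics={"revenue": "SUM(orders.amount_usd), net of refunds"},
    question="What was revenue last week?",
)
print(context)
```

The point of the sketch: a model asked to write SQL with this context no longer has to guess which of many "revenue" columns the company actually means.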

Are there going to be specialist applications that can only be done with GPT-4o and a custom integration with a bunch of other stuff? Absolutely. But that's not what we are after. The bulk of our customers want to get work done. They're not in the business of doing research with AI. And are you seeing customers bring net new data, that maybe didn't sit inside Snowflake historically, into Snowflake because of your AI services? And how do you think about your right to win as it comes to the data that's not in Snowflake yet? This is a broader question. I think one of the things that I've actually been a big part of is expanding the lens of data that Snowflake should play in.

Snowflake, as you know, is, first of all, closed source software. For the most part, the core engine is closed source, just like search. But we also had a proprietary storage format, where data was ingested into Snowflake and kept in this format. But what we consistently heard from customers, and I'm sure you hear this all the time, is that there is 100 or 1,000 times more data sitting in cloud storage than there is inside a specialized player like Snowflake. And more and more, industry trends have been toward interoperable data. People want their data to be accessible from multiple places. So, for example, if they want to write their own bespoke applications: most people don't want to do that, but the biggest ones do. They want the data to sit in cloud storage, where, yes, Snowflake perhaps can write it and read it, but other applications should also be able to read it.

So we made a big push around Iceberg, which is the interoperable format. We also announced a cloud catalog recently. The idea is that in ten years, data is going to be sitting mostly in the cloud, mostly in cloud storage, which is very cheap, mostly in interoperable formats, accessible via open catalogs. And this is the place where we see there being so much more access to data from Snowflake. So everything from data engineering to AI now comes into our purview. We have customers that, for example, are doing things like, oh, let's run a video model using Snowflake's container services on data that is sitting in S3, extract transcripts, and stick them into Snowflake.

So it's just a very different world we are playing in. Makes sense. Let's say, for the data that's currently sitting in one of the hyperscalers: you started the conversation by saying the core tenet of the company is that when you build your infrastructure all around the processing of data, you can do better things. What are some of the ways that you're able to offer better AI services around the data that doesn't currently sit in Snowflake, but that you're hoping customers will bring in, versus what the hyperscalers are doing already? Yeah, can I add onto that real quick? Because one of the things that we have heard from customers is, at either end of the spectrum: at one end, you work directly with OpenAI, send your data into their cloud, and maybe have some nervousness around whether that data is going to leak into the model, or whether they have the right security and privacy governance around it.

At the other end of the spectrum, you can just do everything yourself. Grab a model off Hugging Face, build it internally: super safe, super secure, but pretty painful to do all that. And then in the middle ground, you've got Amazon Bedrock, or you've got Snowflake, and they both kind of have a value prop of best of both worlds: we're going to make it easy for you, but it's also safe and trusted and secure and all that good stuff. And so I think my angle on Sonia's question is, for somebody who's making a practical decision about what should I build in Snowflake versus what should I build on Bedrock or a comparable cloud service, what leans people in the direction of Snowflake? It's the fact that everything that you want, whether it is data security, data governance, ease of use, all comes out of the box, along with the incredible power that comes with Snowflake's core platform, including things like collaboration and other third-party applications.

We make AI simple. 100%, there are those people that will say, I want to take data that's sitting in cloud storage, or even in another application; I want to bring it into cloud storage; I want to recreate ACLs, access control lists; and then I want to create a vector index using a bespoke vector indexing solution. And then I will stitch it together, I'll figure out which model I want to use, whether it's an API or something that I host myself, and then I will use LangChain and write custom routing logic for my application. I can assure you that 99.9% of our customers want no part of this. That's just the reality. All those poor people wanted was a chatbot to run on the 100,000 docs that they have, so that they can replace the annoying search box for FAQs on their site with a solution that just works. Our take is: yes, whatever governance you've had before works out of the box, and your data does not go anywhere else. You have the same rock solid guarantee that Snowflake will never use your data to train any cross-customer model.

And we will be very efficient and cost effective in the overall cost of running the solution. But Snowflake's magic, honestly, is we make the hard simple, and it's things like total cost of ownership. Many of our customers are banks, they are healthcare institutions, they are in finance or other such industries; we play a lot in the media space as well. Most of our customers want to solve problems, not solve technology for the sake of technology. We have a foundation model team. They're very focused on things like: how do we get models that have better grounded generation? How do we get them to follow directions well? How do we get them to say no to questions that they should not be answering when it comes to, let's say, talk-to-your-data? So we focus on specialized areas like that.

But the biggest reason to use Snowflake for a lot of our customers is that a big software engineering project, with a whole lot of risk about data and security and what else can happen, turns into six hours of work for an analyst. We are good at that. We are proud of that. It sounds like the one-liner might be that it's the level, or the layer, at which you're intersecting these products. If you're working with one of the public clouds, you're still very much at the infrastructure layer, building a lot yourself; with Snowflake, you're at the platform layer, and a lot of the hard work's been done for you. And our long term bet, Pat and Sonia, is that ecosystems move upstream.

There was a time not so long ago where, I don't know, our parents, our grandparents knew every part of a car. They're like, oh, so manly to change a carburetor and get oil in between your nails. I gotta be honest with you, I'm still impressed every time my dad knows exactly what is wrong with the car. Yes. But while I'm willing to go to strength training every day, getting oil in between my fingers with my car does not sound so attractive anymore. And so, 100%, you can work with CSPs and you can be like, I have a model garden here, I have a caching service there, I have a database here, I will stitch all of this together. As I said, everything turns into a software engineering project. For us, you're like, no, that's just a little data pipeline that you set up. Here is a beautiful UI that you get if you want a chatbot.

Obviously you can do more, but you don't have to. Yeah. What are your customers building on Snowflake, and are there certain types of AI applications that are better suited to be built on Snowflake than others? As I said, the categories of AI applications come naturally from the kind of data that is already there. I would say the broadest use case is really using Cortex AI via SQL, in either interactive queries and dashboards or in jobs that people are running. And so these span the gamut from, oh, let's do sentiment detection with a small model. It doesn't really have to even be that expensive.

So that's just, literally, one function call. Or, let's do other kinds of data extraction, where, as I said, you have things like a transcript, or maybe clinician notes. You take that out and you get structured data from it. Or the other thing that I talked about, Document AI, which is you extract structured data from things like receipts, from contracts, so on and so forth. That's kind of our sweet spot. But I have to say, the product that makes even the people that go, I have GPT-4, I have an army of software engineers. The thing that even they struggle with is things like a reliable talk-to-your-data application, because even with GPT-4 out of the box, you end up getting 45-odd percent reliability, meaning it gets half the questions wrong when it tries to answer. We are well in the nineties, and we are racing to get to 99% reliability on talk-to-your-data applications.

Obviously, we restrict the domain and turn this into more of a software engineering problem than just a pure AI model problem. But that's the thing that makes every Snowflake customer perk up and go, I want that, because even the people with the money and the resources to spend on software engineering teams very quickly realize that this is a wall that they are likely not going to break through. And how do you accomplish that? Maybe peel back for us how you're able to get into the nineties. Are you training your own models? Just tell us how it all becomes possible. It's systems design. Just like the magic of how you make a coding agent (or, less a coding agent, more an effective copilot) work in practice, it's not always the giant models. It is carefully breaking problems down so that you present the right context to the model.

It's in deciding things like, oh, I see, the problem of whether to answer a question is different from how to answer the question. So you can specialize and have different models for these different sub-tasks. And also, basically, what I call a problem definition, a product structure question. We structure the product, Cortex Analyst, so that it is more restricted than a free-form domain. What I mean by that is, schemas are weird things. People do random stuff. They have horrible column names that mean completely the opposite. Every company has its own definition for revenue. And if you take the best model on the planet and let it loose on an arbitrary schema, the likelihood that it's actually going to understand the nuance of what's in there?

Close to zero. In our big deployments, for example, our customers have 200,000 tables, and you can bet that there are several tens of thousands of tables with the word revenue in them; they just don't have the same meaning. So it's really problem definition to me. By the way, this goes back to the magic of product. I think of any amazing founder, any amazing product manager, as someone that can visualize the right trade-off to make in order to create something that has broad applicability. And that's the thing that we have done here. We constrain the problem. But as I said, we also explicitly train for things like when to refuse questions, as opposed to trying to pretend that you can answer every question. But obviously, there's a precision-recall trade-off there.
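The precision-recall trade-off around refusals can be made concrete with a small sketch. The confidence gate and the numbers below are synthetic; how Cortex Analyst actually decides when to refuse is not public:

```python
def evaluate_with_refusals(examples, threshold):
    """Measure a system that may refuse to answer.

    Each example is (model_confidence, answer_is_correct). The gate
    refuses anything below `threshold`; precision is then computed
    over attempted answers only, coverage over all questions.
    """
    attempted = [correct for conf, correct in examples if conf >= threshold]
    coverage = len(attempted) / len(examples)
    precision = sum(attempted) / len(attempted) if attempted else 1.0
    return precision, coverage

# Synthetic eval set: confident answers tend to be right.
examples = [(0.95, True), (0.9, True), (0.85, True),
            (0.6, False), (0.5, True), (0.3, False)]

# A permissive gate answers everything but gets more wrong;
# a strict gate answers fewer questions but is nearly always right.
print(evaluate_with_refusals(examples, threshold=0.0))
print(evaluate_with_refusals(examples, threshold=0.8))
```

Refusing everything drives precision to 100% at zero usefulness, which is exactly the degenerate end of the trade-off he calls out next.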

You can get 100% precision by answering no questions. That's not the goal. You want to be useful, but still be precise. But it's a lot of software engineering. I want to go in a slightly different direction. Sure. Okay. That reminded me of this, and I don't know why, but the product velocity at Snowflake seems to have inflected to the positive. Yeah. Even in the last six months or so. And we've worked with a lot of founders where, you know, the bigger the company gets, the slower and slower the velocity becomes. And so I guess I'm curious, what have you guys done to positively inflect product velocity? Because that's hard to do when you're dealing with an organization.

At the scale of Snowflake. I've done this many times before, and the formula is always roughly the same: first and foremost, you make sure that you have a safety net that you believe in, which is you have regression tests, so you don't blow up big functionality. But if you're pushing hard enough, you will make mistakes. And so you have to distinguish between different kinds of mistakes. For a database company, there are catastrophic mistakes: if you write data badly, it's going to take you months to get out of that. So you need to understand what the risk is. And then you build a safety net, as I said, to detect problems before they happen, but also, in case you do have problems, to get out quickly. At Google, for example, we built an auto-experiment scaling framework. Basically, you would come up with a new experiment.

All changes went through this experiment framework, and this thing would automatically say, I'm going to run this on a machine, watch it for 15 minutes, make sure that the machine doesn't crash, and then roll it out to 0.1%, 1%, 10%, with measurement all along the way. All of a sudden, you have velocity, because people can design a whole bunch of experiments that are now pushed out automatically. So, as I said, the first part is the safety net, and we spend a lot of time on that. The second part is the inner-loop productivity, which is how quickly you can get a single change in, because ultimately it ends up being the decider for how many changes you are going to get through. Another is system design.
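The staged-rollout pattern he describes (canary on one machine, then widen exposure in steps with a health check at each stage) can be sketched as a small loop. The stage fractions mirror the 0.1% / 1% / 10% figures in the anecdote; the health-check callback is a stand-in for 15 minutes of watching crash rates and metrics:

```python
ROLLOUT_STAGES = [0.001, 0.01, 0.10, 1.0]  # 0.1% -> 1% -> 10% -> full

def staged_rollout(change_is_healthy, stages=ROLLOUT_STAGES):
    """Advance a change through exposure stages, halting when unhealthy.

    `change_is_healthy(fraction)` stands in for the automated watch
    period at that exposure level. Returns the fractions the change
    actually reached before any rollback.
    """
    reached = []
    for fraction in stages:
        if not change_is_healthy(fraction):
            break  # stop widening exposure; only `reached` ever saw it
        reached.append(fraction)
    return reached

# A healthy change sails through every stage;
# a change that regresses at 1% exposure never reaches 10%.
print(staged_rollout(lambda f: True))
print(staged_rollout(lambda f: f < 0.01))
```

The velocity comes from the automation: engineers queue up experiments, and the framework, not a human, decides how far each one safely goes.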

Snowflake actually went through a process that predates me, starting about two years ago, of figuring out how to make the system extensible. As I said, at Snowflake we are very proud of the single unified product, but that can become something that gets in the way of speed. You have to design carefully for how you make things extensible. Things like AI basically took advantage of that framework. And then to a certain extent, to be honest with you, it is also the focus that leadership needs to bring on what is important. How do you drive clarity at all times, with all teams? There is an infinity of work to be done. Yeah. And driving that clarity, driving a sense of accountability.

With the AI team, for example, I force every team to make promises for, yes, over three months, but also, what are you going to do in the next two weeks, and calibrate yourself on: did you deliver on the things that you said you were going to be doing? It's pretty much, in my mind, if you want to get better and better, life boils down to: say what you're going to do, and do what you said you would do. And examine, and make things better. And so it's a bunch of things that I've been building up at Snowflake, but certainly I bring this sense that quality and speed are both requirements in what we do. It's a change, but people like the idea of just getting more things done. You and I have never met a software engineer that says, yep, I want to release that day after tomorrow. It's like, no, you want to get it done today. And so that itself builds momentum.

When you release a bunch of products and you have a lot of customers that are using them, that becomes positive energy for the team to build on the good behavior that got you there. I would say the team has responded very, very well. I tell them, hey, listen, this is the world of AI; stuff changes every week, and you need to build with that speed. I'm very happy with how the team has responded. Is there anything in particular that you're most proud of, in terms of what you guys have done in AI thus far? I'll say Cortex Analyst is probably the hardest product that we have designed and launched. Things like Cortex AI, which is our platform layer, I am proud of, but it is predictable infrastructure work, even though there's a lot underneath in terms of, hey, should you use vLLM or something else? How do you optimize for inference? How do you get capacity in this annoyingly crazy world where it's very hard to get your hands on GPUs? There's a bunch of stuff, but to me, that is not the unique part.

Things like that, things like Document AI, are a unique combination of our strengths being applied to new areas in ways that can make a big difference to our customers. But you also know, Pat, that there's a little bit of who's-your-favorite-child here, so I can't really do that. And so there's a bunch of stuff: even if you take Polaris, which is our cloud catalog, done in a matter of three months. And so I think there's a lot of energy within the team, because it's a slow message, but it's getting through, that you can have speed and quality. They're just different aspects of the same problem. And my firm belief all through my life is that virtuosity trumps strategy all day long. What does that mean?

Your speed of execution, your speed of reacting to situations, is going to trump strategy very, very quickly. Yes, you need strategy, but life is never about fixed strategy. Because we live in a very, very dynamic world, it's hard to predict which product is going to be wildly successful, or what your competitor is going to do. Like, we're going to talk about GPT-5. It's a big unknown whether it's going to come out and what impact it's going to have. So I place a huge amount of emphasis on this: you just need to be really, really quick at what you do. And I would say that's the message that I'm trying to convey to the team.

I see nice continuity from the Slootman era into the Sridhar era, because I've heard Frank say at least a few times the General Patton quote: a good plan executed violently today is better than a perfect plan tomorrow. 100%. 100%. And it's that adaptability. Napoleon has a famous quote, which roughly (I mean, it's not his, it predates him) translates into: I commit and I adapt. You go into an important area knowing that you're not going to know everything, and then you adapt to the situation that actually presents itself. Yeah. Are there any misconceptions about Snowflake and AI that you want to debunk? We are a real player. It used to be that Snowflake was thought of as somebody that didn't really get AI.

But early on, we relied on more of a partnership-oriented strategy for AI. My big observation, my realization, is that AI is a platform change, in the sense that it is a new way in which you and I and everybody else in the world is going to get to software, is going to get to applications. Once we had that realization, out came a bunch of product consequences: AI needs to be central to Snowflake. We need to make it super easy to build applications, but also build the most important applications ourselves. Cortex Analyst, for example, is a direct-to-business-user application. We have never really done things like that before. It is driven by a strong belief that AI is going to disrupt how information is consumed, very, very broadly. And I am proud of having a world-class team, from bottom to top: from foundation models to inference experts, to product engineers that integrate the AI, plus the product engineers that are creating applications on top of AI.

That, combined with things like broad data access, which is Polaris and Iceberg, I think puts us in a very, very good position. Can we zoom out and ask a little bit about your hypotheses and your hot takes on the future of AI? Absolutely. I just think you are so well positioned. You probably built one of the first, if not the first, LLM-native consumer applications at Neeva. And now, obviously, from your seat at Snowflake, you see so much. Maybe first on the LLM race to scale: what do you think about all that? Are we reaching the limits of scale? What's next for those guys? I mean, obviously this can go in a couple of different directions. I talk to a lot of experts, and there is a collective belief that there is a GPT-5 on the horizon. What I don't think anyone has a clear bar for is what that's going to represent. Yeah, GPT-4o was very cool, much faster. It also integrated multimodality natively in a way that's pretty amazing.

But when you think about reasoning capabilities, the ability to come up with plans for how to execute stuff, it didn't feel like it represented a step change. And while agents are very hot, it's similar to Cortex: until Cortex Analyst came along, people didn't really believe that you could build a reliable talk-to-your-data application. They were always kind of hit and miss. And remember, the bar is very high. If you're giving data to a business user, 75% accuracy is one out of four wrong. I think the big unknown is whether these models are going to represent a big step forward in things like multi-step reasoning. And if they can, they're going to unleash a whole new class of applications that you and I just cannot imagine right now.

On the other hand, I think when it comes to driving broad adoption, there is a lot that can be done with existing models. So many things are useful for you and me every single day, whether it's a piece of mail that we are looking at or looking through a PDF. Just think about all the tedium that all of us have to go through. I think there is huge impact to be had simply in AI technology permeating software as we know it, especially the user-input part of software. So unlike other technologies, I think there is enough that AI has already delivered that is going to have a meaningfully large impact on society; it's just going to take a while to play out. I sincerely hope we don't get to a phase where you need a billion dollars to train a great new model. I actually think that while what that model can do is cool, it also reduces the number of people that can have models like that to a very small number. And I think competition is just overall healthy.

But it's very hard to make a call. You mentioned this a little bit, but I'm curious to get your take on it a bit more. If GPT-5 is delayed or not a big step up, or whatever the case might be, or if you just imagine a world in which the current capabilities of the foundation models are what we've got, it comes down to: how do we implement those? How do we optimize those? How do we tune those? One of the things that we hear from a lot of people building in AI: the first couple of weeks are like magic. Everything is amazing. This is great. Then the next few months are pretty painful. Oh shoot, it can't do this corner case; it can't do that corner case. It's not quite accurate enough, and people get really frustrated, and sometimes they can engineer their way out of it, sometimes they can't.

But sometimes it leaves people feeling kind of disillusioned. This stuff's not as good as I thought it was. Maybe the time's not right. And so I'd love to get your take. If we froze the capabilities of the foundation models today, what sort of changes will we see in the enterprise landscape over the next handful of years? What sort of stuff will we not see, because we're just not ready for it yet? To me, this is honestly the magic of software engineering. Part of what I feel we have implicitly accepted with ChatGPT is this sort of aura of omniscience. You're like, it can do everything. They don't say it. In fact, they take pains to not say it. But just like Google search never tells you, "that's a dumb query." Think about it, right? Kind of fun if it did, right? But there are lots of dumb queries that people type into it.

Oh, I've typed lots of dumb queries. Yeah. And Google's like, oh, here are 100 million pages on the web, and here are the best pages for you, Pat, for your dumb query. And so some of it is good old-fashioned AI enthusiasm, it can do everything, but some of it is just also plain dumb: you should not be doing that. To me, this is where things like, okay, let's actually make grounded chatbots the norm for interacting with information. The model is: this application should tell you where it got the information from. It should be very easy for you to verify said piece of information and feel good that you're actually getting something. Similarly, you need a test framework, like Harrison talked about, an observability framework, to do this on an ongoing basis.

But I think sometimes, when it comes to things like chatbots, people forget: wait, there is such a thing as a set of regression tests. There is such a thing as acceptance criteria for software. Everything that we have. If somebody were to build a new application, like one of your founders, your expectation is that they got their act together and are actually testing stuff before they give it to customers. Somewhere in the world of AI, we're like, no, no, no, it doesn't matter. And these models react pretty violently to the addition of a period in a prompt. And so I think there needs to be this idea that you need good old-fashioned software engineering, and you need to measure the performance of these things. And so I think this is where it goes from "these are hobby projects that can be hit or miss" to "here is somebody that can actually software-engineer this for you." And we think of that as a core strength of what we bring to the table, which is: you should be able to have a predictable way to say, this chatbot is going to work, or this agent-like application.

This is the success rate that it's going to have, or this is what Cortex Analyst is going to do for you in your domain, so that you're like, okay, I feel good about deploying it. So even if GPT-5 did not happen, I think there is a lot of magic to be done, but it's also just work. Yeah, yeah. Well put. Well, I forget who said it, but there's a quote that we use every now and then: people miss most great opportunities because they tend to be wearing coveralls and they look like work. You know, I think this is one of those where, like anything else, if you want it to be great, you gotta work pretty hard on it. You got to sweat it out.
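The regression-tests-and-acceptance-criteria idea above can be made concrete with a small sketch. Everything here is invented for illustration: `answer` is a stub standing in for a real model call, and the questions, answers, and source names are made up.

```python
# Minimal regression-test sketch for a "talk to your data" app.
# `answer` is a stand-in for the real model call; in practice it
# would hit an LLM and return both an answer and its sources.

def answer(question: str) -> dict:
    # Stubbed retrieval-grounded responses, invented for illustration.
    canned = {
        "What was Q2 revenue?": {"text": "$2.1M", "sources": ["finance_2024.pdf"]},
        "Who is the CFO?": {"text": "Jane Doe", "sources": ["org_chart.csv"]},
    }
    return canned.get(question, {"text": "I don't know", "sources": []})

# Acceptance criteria: every answer must cite at least one source,
# and known cases must keep passing release after release.
REGRESSION_CASES = [
    ("What was Q2 revenue?", "$2.1M"),
    ("Who is the CFO?", "Jane Doe"),
]

def run_suite() -> float:
    """Return the fraction of regression cases that pass."""
    passed = 0
    for question, expected in REGRESSION_CASES:
        result = answer(question)
        if result["text"] == expected and result["sources"]:
            passed += 1
    return passed / len(REGRESSION_CASES)
```

The point is not the harness itself but the discipline: a tracked pass rate turns "it seems to work" into a number you can gate deployments on, exactly as with ordinary software.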

And to me, this is also the place where thinking of recall as something that you should tune becomes an important part of how you think about these applications. Any ML engineer worth their salt will promptly come and tell you: okay, I have an AUC curve for you. What are they trying to say? They're basically trying to say there is a trade-off between how much you squeeze the model to do and how good it is. There's no perfect answer. That's really what the AUC curve represents. And the more we think of AI applications as also having this AUC curve, with trade-offs to be made between reliability and ability to respond, and that as a very conscious factor in how you think about things, the better off we are going to be in terms of where they can deliver value.
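One way to picture the trade-off described above: sweep a confidence threshold and watch coverage fall as accuracy rises. The scores and labels below are invented for illustration; this is a sketch of the idea, not a full AUC computation.

```python
# Sketch of the trade-off an AUC-style curve captures: sweeping a
# confidence threshold trades answer coverage against reliability.

def coverage_vs_accuracy(scores, correct, threshold):
    """Answer only when the model's confidence clears the threshold."""
    answered = [(s, c) for s, c in zip(scores, correct) if s >= threshold]
    coverage = len(answered) / len(scores)
    accuracy = sum(c for _, c in answered) / len(answered) if answered else 1.0
    return coverage, accuracy

# Made-up model confidences and whether each answer was correct.
scores  = [0.95, 0.9, 0.8, 0.6, 0.4, 0.3]
correct = [1,    1,   1,   0,   1,   0]

# Low threshold: answer everything, make more mistakes.
# High threshold: answer less often, but more reliably.
for t in (0.0, 0.7):
    cov, acc = coverage_vs_accuracy(scores, correct, t)
    print(f"threshold={t}: coverage={cov:.2f}, accuracy={acc:.2f}")
```

Choosing where to sit on this curve is the conscious design decision being described: a business-facing app may accept lower coverage to keep the error rate well under that one-in-four mark.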

Yeah. I'm going to go back to the point you made a little bit earlier about reasoning, and that delivering the next big leap, hopefully, for GPT-5 and Claude, et cetera. It seems like the approach that most folks are taking is bringing in search at inference time, so there's more inference-time compute and kind of this AlphaGo-style search stuff. I'm curious, given you are one of the best people in the world at search: do you think that is the path to the promised land on the research side for bringing reasoning into these general models? Give me a little bit more context. I can certainly see how search plays a role in how these models operate, but can you tell me a little bit more? Yeah. If you take AlphaGo, and you're trying to decide what move to do next, you can create a branching tree of all the possible moves from here and do a search over that to pick the next move. I think people are trying to bring that logic out of the gaming world and into other domains. I don't know if you saw Cognition's Devin, where they're effectively searching over different things that you can do in your coding as well.

And so, at inference time, you're just giving the model the ability to search possible paths to decide what to do. Yeah, there have been a number of papers on this. I think even NeurIPS had a bunch of papers about searching over domains as you come up with a plan. To me, it's important to understand, and I'm forgetting the name of the paper, but it had the same problem when they were doing tree search: these approaches fundamentally rely on a model, typically a neural network, being able to do things like grade a particular point in a state space.
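A toy version of that dependence on a grader: the search below only works because `score` can grade any candidate plan. The step names and the scoring function are invented for illustration; in open-ended domains no such reliable evaluator exists, which is exactly the problem being raised.

```python
# Sketch of why inference-time tree search needs a state evaluator.
# Without a way to grade candidate plans, the branching explodes
# and there is nothing to guide the search.

import itertools

STEPS = ["chop", "saute", "season", "simmer"]

def score(plan):
    # Stand-in evaluator: in AlphaGo this is a learned value network
    # trained on a structured game; for open-ended plans no such
    # reliable grader is available.
    ideal = ("chop", "saute", "season", "simmer")
    return sum(a == b for a, b in zip(plan, ideal))

def best_plan(steps, length):
    # Exhaustive search over len(steps) ** length candidates: the
    # combinatorial explosion, tamed here only because the space
    # is tiny and the evaluator is perfect by construction.
    return max(itertools.product(steps, repeat=length), key=score)
```

With four steps and four slots this is only 256 candidates; with ten ingredients and twenty variable-length steps, as in the cooking example that follows, exhaustive enumeration is hopeless and only a trustworthy evaluator could prune the tree.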

AlphaGo, for example, has pretty solid ideas about what is an advantageous position versus what is not, and the search is guided by that. What isn't clear, in very open-ended questions, is this: as you come up with alternatives for the search space, can you actually grade them effectively if it's an open-ended plan? Certainly a number of these techniques work well for games that have structure, in which you can actually learn what optimal means and begin to optimize towards it. What I don't have as good a feel for is, let's take something as simple as cooking. You would think it's simple, but take, I don't know, ten ingredients and 20 steps along the way, and various things that you can do in each of these 20 steps. And the steps themselves can be short or long.

You quickly end up with this crazy combinatorial explosion of different ways of doing things, and yet there is just one perfect recipe, or two or three. That's the part, honestly, I don't have a good feel for: how do you even begin to measure the jump in terms of cognitive ability? It's easy in structured environments, but out in the real world, where you're trying to do some pretty complex things, I think it becomes trickier. We've built prototypes for, basically, agent analysts, but it's again a structured space. So, what we do... one thing: I've done numbers pretty much all my life. I used to do the household finances for my dad when I was ten; we did it in a notebook. And over the past 20 years, every day I get this email that tells me how my company did the previous day. It used to be called bean counters at Google. Every day you got a report card; every few weeks something would go wrong.

Like, you made less money somewhere. And we would start this predictable exercise: some poor analyst would go drill down into a bunch of different things, slice stuff, and then come back with, oh, Sridhar, it was Easter in Germany and Ascension Day in Brazil, and that's why our numbers were off. And it took like a decade to model all of these complex things in the world into a prediction model, so you're like, okay, I can begin to predict. But if you think about it, the analysis that they do is constrained. It's pretty much: if a metric is wrong, go slice it by ten different dimensions, look at the results, see where the problem likely is.
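The constrained "slice the metric" exercise described above is mechanical enough to sketch. The dimensions, rows, and deltas below are invented for illustration; a real agent analyst would pull them from the warehouse rather than a hard-coded list.

```python
# Sketch of the constrained diagnosis an AI analyst can automate:
# when a topline metric drops, group the change by each dimension
# and surface the slice that explains the largest decline.

from collections import defaultdict

# Invented day-over-day revenue deltas, broken out by dimension values.
rows = [
    {"country": "DE", "device": "mobile",  "revenue_delta": -120},
    {"country": "DE", "device": "desktop", "revenue_delta": -80},
    {"country": "BR", "device": "mobile",  "revenue_delta": -40},
    {"country": "US", "device": "mobile",  "revenue_delta": +10},
]

def worst_slice(rows, dimensions):
    """Sum the metric delta per (dimension, value) and return the worst one."""
    totals = defaultdict(float)
    for row in rows:
        for dim in dimensions:
            totals[(dim, row[dim])] += row["revenue_delta"]
    return min(totals.items(), key=lambda kv: kv[1])

# With this invented data, Germany explains most of the drop
# (it was Easter in Germany, as the anecdote goes).
print(worst_slice(rows, ["country", "device"]))
```

Looping this: slice, inspect, re-slice on the worst dimension, is the structured search an agent can run on its own, which is roughly the 60 to 70 percent of diagnostic work mentioned next.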

Certainly we have built prototypes of this AI analyst that can remove 60, 70% of the work that is needed in actually diagnosing problems. It's pretty free-form, but you can tell a language model: these are my attributes; go call Cortex Analyst with all of these parameters, get the output, take a look at it, and then tell me what I should do next. So you can begin to automate some of it, so that this is actually useful. But for a much more open-ended problem, here are 100 different, incomparable things you can do, and how do you judge and how do you prune? That's the part I honestly don't have good intuition for. Totally. I want to ask about search in a different sense, if that's okay. You obviously have an incredible point of view on search, given your time at Google and at Neeva. And it seems like right now the consumer world is watching, excitedly and nervously: is there going to be a new kind of search king crowned?

I'm curious about your take on the whole AI search space right now. How about a hot take on Perplexity? Do you have a hot take on Perplexity? Look, I'm happy for Perplexity. And it reminds you again that right time, right place matters a lot. At Neeva, we converged on a view of what search should be that was very similar to Perplexity. We were just two, three years early, and timing ends up being everything. You can think of Perplexity as a consumer manifestation of how we want to deal with information. Let's face it: "I want to look through an eight-page doc to find the two lines that I really care about," said no one. But that's search.

In that sense, it's absolutely the right place. I think the more important question is whether the business of search, which is carefully preserved with business contracts, not with consumer choice, can be broken into. Consumer choice is fiction in a whole bunch of things that we do. We eat what's put in front of us, and we will search with the default search engine that came in our browsers. We might resist it, but in aggregate, across humanity, that's the reality of the world. I would say that that is the bigger challenge, because search is mostly locked up by a few players that control the entry points. I think that's the fundamental problem: it is very difficult to break into the business of search. Consumers don't like doing stuff.

And this also gets to one of the broader questions in the world of AI right now, which is incumbents versus startups. Historically the battle is: can the incumbents with distribution build cool products before the startups with cool products build distribution? And I think search is a great example of that. You might have the coolest product in the world; it's awfully hard to change consumer behavior. That's right. AI is an interesting test case for this, because so much of the coolness of the products is available through the open-source world or through third-party models. And so it feels like it might be a scenario in which incumbents are advantaged versus the startups. But do you have a point of view on that? I would take two different lenses to this.

One is what you said about models: open-source models, plus players like Meta that basically have infinite budgets and are willing to open-source models. I think the world of creating models from scratch, unless you have an attached hyperscaler or an attached business, looks very, very hard. Yeah. And so, as I said, I hope this doesn't end up with only two or three GPT-5-class models in the world, because I think that's a bad ending for the world. So I would definitely say that foundation model companies need a strong business to accompany them. It can be a product; I think OpenAI has created a pretty solid product. It's not just a foundation model. Yeah, I think that's one thing to keep in mind.

I'd answer your second question, on disruption slash innovation, from a historical lens. I think of every generation of Silicon Valley companies as learning from the previous ones. They are smarter, they know the ways in which things can be disrupted, and they lean in pretty heavily. We all know, for example, the IBM-to-mid-range-computer disruption, and then the DECs and SGIs of the world getting disrupted by the Microsofts of the world, and then the web coming along, leading to the rise of companies like Google, and then mobile. I would say that in each and every one of these transitions, powerful incumbents with very large pockets have shown an ability to lean in sooner, lean in faster.

At Google, for example, when I was there, we leaned in very heavily into home assistants, because Alexa was going to take over the world. That was going to be the way in which you and I and everybody else searched. We were terrified, and we put a pile of money into it, and nothing came of it, and it didn't matter. Why? Because the cost of a disruption is way higher than the amount of investment that you have to make. I would say now this is generation five, or something to that effect. All the incumbents are very aware of what can be disrupted, and they lean into it. There's a bunch of strategic thinkers. As I told you, I think of AI as basically shuffling the tiles on enterprise software, and a part of me goes, like: no way; Snowflake is going to be leading the charge when it comes to AI, not waiting for it to develop.

But I think you see every enterprise AI company lean in the same way. To me, the question would be how much disruption AI is going to drive in consumer software. Certainly there'll be new categories. If I were a startup, I'd feel a lot more comfortable creating a new category. Image creation done at mass scale is clearly amazing. The same goes for video, the same goes for voice. There's a bunch of specialization that you can do here, adapting them to marketing. New things feel like a much safer bet in the AI world than, take your pick, "I can do XYZ faster because I am AI-enabled." I don't think of that as having a whole lot of legs.

Yeah. Do you think ChatGPT has a chance of becoming the next Google? And to your point on consumer choice being a mirage and business deals being where this stuff gets locked down, I'm curious what you think of the Apple-ChatGPT deal. I think ChatGPT... I mean, the phone is a pretty interesting place to me. The phone, because it's a controlled environment, actually offers enormous potential for consumers.

I tell people: something as ridiculous as copying, I don't know, an address from your calendar or a piece of email over to Uber. So dumb, so hard. You would think Siri would do this: copy the address from this email from Pat and stick it into Uber so I can get an Uber. So to me, there's a huge amount of potential, again, in mundane applications. And because the mobile ecosystem is a pretty closed one, where Apple can mandate things like "you must have APIs that make it possible to access your functionality using language models, or else you might not get any traffic," that sounds like a pretty good incentive for everybody to get in line. So I think there's a huge amount of potential there. I honestly wish there was more innovation in this space, because, again, all of this is super doable technology. You and I can argue about whether this should be done in the cloud or on the phone, but as a consumer, do you care? We have great connections. I'm kind of like, if this thing works only when I'm connected to the Internet, I'll take it. And so to me, those are details.

I actually think ChatGPT is an amazing product. There's the underlying technology, but in so many different ways, they've actually created a stunningly beautiful product experience that spans the gamut. You know, they've turned pretty much visually illiterate people like me into budding artists. I tell people: I'm good with words. I can talk all day long, I can write all day long. And the magic that I can do with ChatGPT is truly amazing. Or even things like, for example: I'm on this language kick, I'm learning Hindi. And at some point I was like, oh, I'm struggling with these numbers. Off goes a prompt that says, hey, I want a CSV that translates numbers, just a string of numbers, to Hindi. Can you do that? Can you just give me a CSV file that I can import into Quizlet? That is literally faster for me to type than to describe to you. I type it in, out comes a CSV file. In 10 seconds, I download it into Quizlet. I have a quiz.

Pretty much everything that I used to do with Python scripts on structured data, I just do with English. You upload the CSV file and you're like, oh, add these two columns, do this other thing, format it into this nice table and get it out. For me, it's magic. So I think there's absolutely a there there in terms of: is it a great product and a great business? But, you know, being the king of search is a few more zeros, and those don't come easy. Yeah. All right, should we close with a couple of quick-fire questions? Rapid-fire questions. Yeah, let's do it. Okay.

Who do you admire most in the world of AI? Who do I admire most in the world of AI? I admire the people that are working on things like foundation models and are able to do it on the cheap, without the infinity of resources. So, for example, people like Arthur, or Dani Yogatama from Reka. I think they've gotten a remarkable amount of things done. Or, from our own team, folks like Samyam and Yuxiong. To me, they represent so much creativity, because I go and tell them: limited budget; what can you do? I think there is a set of just amazing, earnest people driving research under tight constraints. So there's obviously lots and lots of people, but it's the doers that are doing the work, imagining our future, that I'm a huge fan of.

What's your favorite AI application? ChatGPT, by far. Easy one, easy one. Just the utility that I get from it day in and day out is truly remarkable. Okay, follow-up then: what's an AI app that you wish existed? Like an actual talk-to-your-phone assistant that can mediate between apps. That would be super cool, because remember, as I said, just flipping between applications, doing very little things, is such a pain.

All right, we're going to end on an optimistic question. What is the best thing that can happen in the world of AI over the next five or ten years? What would you be most excited to see coming out of the world of AI? Software, which you can think of as encoding our thinking, capturing our ability to think and act in real-world situations, clearly has been transformational over the past 50-plus years. To me, AI as an enabler of access, both to the act of creating software and to using software, for all of the people in the world, would be a significant step up. And as I said, I don't think it's lots of fancy new technology that you need; the newer technology can certainly help newer classes of applications. I was very proud of the fact that we put Google search, thanks to things like Android, into the hands of pretty much every human being on the planet. You can be cynical about technology, but it's a genuine step forward for humanity.

To me, AI models as the new layer between humans and software, and between software and software, is actually a significant step forward, just in having this functionality be vastly more accessible to lots more people, as I said, both in the creation aspect and in the consumption aspect. I think that's a pretty cool thing to look forward to. Awesome. Thank you, Sridhar. Thanks for doing this. Thank you, Pat. Thank you, Sonia.

Artificial Intelligence, Technology, Innovation, Data Cloud, Enterprise AI, AI Integration, Sequoia Capital