ENSPIRING.ai: The AI opportunity - Sequoia Capital's AI Ascent 2024 opening remarks

The video features a discussion led by members of Team Sequoia, focusing on the evolving landscape of AI and its transformative capabilities in various sectors. Over the last year, AI has advanced from being an emerging technology to having a tangible impact across different industries, such as customer service and legal services. This transformation is likened to previous technological shifts like the cloud transition, showcasing AI's potential to create significant business model innovations. The presenters emphasize AI's growth potential and its ability to replace services with software, opening up opportunities for value creation on an unprecedented scale.

The video suggests that we are entering a new phase where AI goes beyond mere utility to become integral to daily operations in numerous fields. Examples of real-world applications and advancements in AI, such as companies using AI for customer service roles and legal service automation, highlight the profound changes AI is bringing to the market. This shift marks AI's movement from theoretical applications to tangible product-market fit, positioning it as a considerable force in the future technology landscape.

Main takeaways from the video:

💡 AI's development is at an early stage, similar to the early days of mobile apps; there is significant room for growth and innovation.
💡 Generative AI is leading to new business models by transforming how software can interact with and replace traditional services.
💡 Despite substantial investments in AI, there is significant potential for AI applications to evolve and disrupt existing market structures.

Key Vocabularies and Common Phrases:

1. hype cycle [haɪp ˈsaɪkəl] - (noun) - A graphical representation of the maturity, adoption, and social application of specific technologies. - Synonyms: (technology curve, innovation cycle)

over the last twelve months we've sort of been through this contracted form of the hype cycle.

2. generative ai [ˈʤɛnərətɪv eˌaɪ] - (noun) - A type of artificial intelligence technology that can produce content such as text, images, and more. - Synonyms: (AI creation, content-generating AI)

The first is the ability to create, hence the name, generative ai.

3. tectonic shift [tɛkˈtɑːnɪk ʃɪft] - (noun) - A significant or dramatic change or development, often used in the context of societal or technological progress. - Synonyms: (seismic change, radical transformation)

that was a major tectonic shift in the technology landscape.

4. durable business [ˈdʊrəbəl ˈbɪznəs] - (noun) - A stable and sustainable business that is likely to survive and thrive in the long term. - Synonyms: (stable enterprise, sustaining company)

to actually solve real world problems in a unique and compelling way that you can build a durable business around.

5. platonic form [pləˈtɑːnɪk fɔrm] - (noun) - An abstract, perfect concept or idea from which all manifestations are derived, based on Plato's philosophy. - Synonyms: (ideal form, conceptual model)

Here's my fellow Greek, Plato, 2,500 years ago, who said this idea of a platonic form is what we all aspire to, what we're all striving for.

6. value creation [ˈvæljuː kriˈeɪʃən] - (noun) - The process by which businesses increase the worth of products or services by improving them to meet consumer preferences. - Synonyms: (wealth generation, worth development)

we were standing at the precipice of the single greatest value creation opportunity mankind has ever known.

7. inference [ˈɪnfərəns] - (noun) - In machine learning, the process of running a trained model to produce outputs, as distinct from training it. - Synonyms: (model serving, prediction)

it means we expect the balance of compute to begin shifting from pre-training over to inference.

8. cognitive tasks [ˈkɑːɡnɪtɪv tɑːsks] - (noun) - Tasks that involve the mental processes of perception, memory, judgment, and reasoning. - Synonyms: (mental tasks, intellectual activities)

more capable of higher level cognitive tasks like planning and reasoning over the next year.

9. deflationary [dɪˈfleɪʃənɛri] - (adjective) - Relating to a reduction in the general level of prices in an economy, often associated with an increase in the value of money. - Synonyms: (cost-reducing, price-lowering)

all the areas where we've had this type of progress in the past have been deflationary.

10. neural network [ˈnʊrəl ˈnɛtwɜrk] - (noun) - A computer system modeled on the human brain's interconnected network of neurons, capable of learning and pattern recognition. - Synonyms: (AI network, artificial neuron system)

eventually the entire company might start working like a neural network.

The AI opportunity - Sequoia Capital's AI Ascent 2024 opening remarks

My name is Pat Grady. I'm one of the members of Team Sequoia. I'm here with my partners, Sonia and Constantine, who will be your MCs for the day. And along with all of our partners at Sequoia, we would like to welcome you to AI Ascent. There's a lot going on in the world of AI. We have an objective to learn a few things while we're here today. We have an objective to meet a few people who can be helpful in our journey while we're here today. And hopefully we'll have a little bit of fun. So just to frame the opportunity: what is it? Well, a year ago it felt like this magic box that could do wonderful, amazing things. I think over the last twelve months we've sort of been through this contracted form of the hype cycle. We had the peak of inflated expectations, we had the trough of disillusionment, and we're crawling back out into the plateau of productivity. And I think we've realized that what LLMs, what AI, really brings to us today are three distinct capabilities that can be woven into a wide variety of magical applications. The first is the ability to create, hence the name generative AI. You can create images, you can create text, you can create video, you can create audio, you can create all sorts of things. Not something software has been able to do before. So that's pretty cool. The second is the ability to reason. It could be one-shot, it could be multi-step agentic-type reasoning, but again, not something software has been able to do before. Because it can create and because it can reason, we've sort of got the right brain and the left brain covered, which means that software can also, for the first time, interact in a human-like capacity. And this is huge, because this has profound business model implications that we're going to mention on the next slide.

So what? A lot of times we try to reason by analogy when we see something new. And in this case, the best analogy that we can come up with, which is imperfect for a million reasons but still useful, is the cloud transition. Over the last 20 years or so, that was a major tectonic shift in the technology landscape that led to new business models, new applications, and new ways for people to interact with technology. And if we go back to the early days of that cloud transition, circa 2010: the entire pie, the entire global TAM for software, is about $350 billion, of which this tiny slice, just $6 billion, is cloud software. Fast forward to last year. The TAM has grown from about $350 billion to $650 billion, but that slice has become $400 billion of revenue. That's roughly a 40% CAGR over 15 years. That's massive growth. Now, if we're going to reason by analogy, cloud was replacing software with software. Because of what I mentioned about the ability to interact in a human-like capacity, one of the big opportunities for AI is to replace services with software. And if that's the TAM that we're going after, the starting point is not hundreds of billions. The starting point is possibly tens of trillions. And so you can really dream about what this has a chance to become. And we would posit, and this is a hypothesis, as everything we say today will be, we would posit that we were standing at the precipice of the single greatest value creation opportunity mankind has ever known.
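The growth figure cited here is a compound annual growth rate (CAGR) calculation. A minimal sketch of that arithmetic, assuming the endpoints the talk implies ($6B circa 2010 growing to $400B by 2023; the exact years are an assumption from the framing):

```python
# Rough sanity check of the cloud-revenue growth arithmetic from the talk.
# Endpoint years (2010 and 2023) are assumptions based on its framing.
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

print(f"{cagr(6, 400, 2023 - 2010):.1%}")  # about 38% per year over 13 years
```

The exact percentage depends on which endpoint years you assume, which is why the speaker's round "40% CAGR" is best read as an order-of-magnitude claim rather than a precise figure.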

Why now? One of the benefits of being part of Sequoia is that we have this long history, and we've gotten to study the different waves of technology, understand how they interact, and understand how they lead us to the present moment. We're going to take a quick trip down memory lane. So, 1960s: our partner Don Valentine, who founded Sequoia, was actually the guy who ran the go-to-market for Fairchild Semiconductor, which gave Silicon Valley its name with silicon-based transistors. We got to see that happen. We got to see the 1970s, when systems were built on top of those chips. We got to see the 1980s, when they were connected up by networks, with PCs as the endpoint and the advent of packaged software. We got to see the 1990s, when those networks went public-facing in the form of the Internet, changed the way we communicate, changed the way we consume. We got to see the 2000s, when the Internet matured to the point where it could support sophisticated applications, which became known as the cloud. And we got to see the 2010s, when all those apps showed up in our pockets in the form of mobile devices and changed the way we work. And so why do we bother going through this little build? Well, the point here is that each one of these waves is additive with what came before. And the idea of AI is nothing new. It dates back to the 1940s; I think neural nets first became an idea in the 1940s. But the ingredients required to take AI from idea, from dream, into production, into reality, to actually solve real-world problems in a unique and compelling way that you can build a durable business around, did not exist until the past couple of years. We finally have compute that is cheap and plentiful. We have networks that are fast and efficient and reliable. Seven of the 8 billion people on the planet have a supercomputer in their pockets.
And thanks in part to Covid, everything has been forced online, and the data required to fuel all of these delightful experiences is readily available. And so now is the moment for AI to become the theme of the next ten, probably 20, years. And so we have as strong a conviction as you could possibly have in a hypothesis that has not yet been proven: that the next couple of decades are going to be the time of AI.

What shape will that opportunity take? Again, we're going to analogize to the cloud transition and the mobile transition. The logos on the left side of the page are most of the companies born as a result of those transitions that got to a billion dollars plus of revenue. The list is not exhaustive, but this is probably 80% or so of the companies formed in those transitions that got to a billion-plus of revenue. Not valuation; revenue. The most interesting thing about this slide is the right side, and it's not what's there, it's what isn't there. The landscape is wide open. The opportunity set is massive. We think if we were standing here ten or 15 years from today, that right side would have 40 or 50 logos in it. Chances are it's going to be a bunch of the logos of companies that are in this room. This is the opportunity. This is why we're excited. And with that, I will hand it off to Sonia.

Thanks, Pat. Wow, what a year. ChatGPT came out a year and a half ago. I think it's been a whirlwind for everybody here. It probably feels like just about all of us have been going nonstop, with the ground shifting under our feet constantly. So let's take a pause, zoom out, and take stock of what's happened so far. Last year we were talking about how AI was going to revolutionize all these different fields and provide amazing productivity gains. A year later, it's starting to come into focus. Who here has seen this tweet from Sebastian at Klarna? Show of hands? It's pretty incredible. Klarna is now using OpenAI to handle two thirds of customer service inquiries. They've automated the equivalent of 700 full-time agents' jobs. We think there are tens of millions of call center agents globally, and one of the most exciting areas where we've already seen AI find product-market fit is this customer support market. Legal services: a year ago, the law was considered one of the least tech-forward industries, one of the least likely to take risks. Now we have companies like Harvey that are automating away a lot of the work that lawyers do from day to day, from grunt work and drudgery all the way to more advanced analysis. Or software engineering: I'm sure a bunch of people in this room have seen some of the demos floating around on Twitter recently. It's remarkable that we've gone from, a year ago, AI theoretically writing our code to entirely self-contained AI software engineers. And I think it's really exciting: the future is going to have a lot more software. And AI isn't all about revolutionizing work; it's already increasing our quality of life. Now, the other day I was in a Zoom with Pat, and I noticed that he looked a little bit suspicious. He didn't speak the entire time. And having reflected on it more, I'm pretty sure that he actually sent in his virtual AI avatar and was actually hitting the gym, which would explain a lot.

Hi, this is Pat Grady. This is definitely me. I'm definitely here and not at the gym right now. It even gets the facial scrunches right. This is courtesy of HeyGen. It's pretty amazing; this is how far technology has come in a year. It's scary and exciting to think about how this all plays out in the coming decade. All kidding aside, two years ago, when we thought that generative AI might usher in the next great technology shift, we didn't know what to expect. Would real companies come out of it? Would real revenue materialize? I think the sheer scale of user pull and revenue momentum has surprised just about everybody. Generative AI, we think, is now clocking in around $3 billion of revenue in aggregate, and that's before you count all the incremental revenue generated by the FAANG companies and the cloud providers in AI. To put $3 billion in context: it took the SaaS market nearly a decade to reach that level of revenue. Generative AI got there in its first year out of the gate. So the rate and the magnitude of the sea change make it very clear to us that generative AI is here to stay. And the customer pull in AI isn't restricted to one or two apps; it's everywhere. I'm sure everyone's aware of how many users ChatGPT has. But when you look at the revenue and the usage numbers for a lot of AI apps, both consumer companies and enterprise companies, startups and incumbents, many AI products are actually striking a chord with customers and starting to find product-market fit across industries. And so we find the diversity of use cases that are starting to hit really exciting.

The number one thing that has surprised me, at least, about the funding environment over the last year has been how uneven the share of funding has been. Think of generative AI as a layer cake, where you have foundation models on the bottom, developer tools and infra above, and then applications on top. A year ago, we had expected that there would be a Cambrian explosion in the application layer due to the new enabling technology in the foundation layer. Instead, we've actually found that new company formation and capital have formed in an inverse pattern. More and more foundation models are popping up and raising very large funding rounds, while the application layer feels like it is just getting going. Our partner David is right here and posed a thought-provoking question last year with his article, AI's $200 billion question. Look at the amount of money that companies are pouring into GPUs right now: we spent about $50 billion on Nvidia GPUs just last year, and everybody's assuming that if you build it, they will come. AI is a field of dreams. But so far, remember, on the previous slide we identified about $3 billion or so of AI revenue, plus change from the cloud providers. We've put $50 billion into the ground, plus energy, plus data center costs and more, and we've gotten three out. And to me, that means the math isn't mathing yet. The amount of money it takes to build this stuff has vastly exceeded the amount of money coming out so far. So we've got some very real problems to fix still. And even though the revenue and the user numbers in AI look incredible, the usage data says that we're still really early. If you look at, for example, the ratio of daily to monthly active users, or if you look at one-month retention, generative AI apps are still falling far short of their mobile peers. To me, that is both a problem and an opportunity.
It's an opportunity because AI right now is, for the most part, a once-a-week or once-a-month tinkering phenomenon for people. But we have the opportunity to use AI to create apps that people want to use every single day of their lives. When we interview users, one of the biggest reasons they don't stick with AI apps is the gap between expectations and reality. That magical Twitter demo becomes a disappointment when you see that the model just isn't smart enough to reliably do the thing that you asked it to do. The good thing is, with that $50 billion plus of GPU spend last year, we now have smarter and smarter base models to build on. And just in the last month, we've seen Sora, we've seen Claude 3, we saw Grok over the weekend. And so as the level of intelligence of the baseline rises, we should expect AI's product-market fit to accelerate. So, unlike in some markets where the future of the market is very unclear, the good thing about AI is that you can draw a very clear line to how those apps will get predictably better and better.
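The stickiness metric mentioned above, the ratio of daily to monthly active users, can be computed directly from an activity log. A minimal sketch with invented toy data (the user IDs and dates are illustrative only):

```python
from datetime import date

# Toy activity log: user id -> set of days the user was active in March 2024.
activity = {
    "u1": {date(2024, 3, d) for d in range(1, 31)},   # daily user
    "u2": {date(2024, 3, 1), date(2024, 3, 15)},      # occasional tinkerer
    "u3": {date(2024, 3, 10)},                        # one-off trial
}

days_in_month = [date(2024, 3, d) for d in range(1, 32)]

# MAU: users active at least once in the month.
mau = sum(1 for days in activity.values() if days)

# Average DAU: mean number of active users per day across the month.
avg_dau = sum(
    sum(1 for days in activity.values() if day in days)
    for day in days_in_month
) / len(days_in_month)

print(f"DAU/MAU stickiness: {avg_dau / mau:.2f}")  # -> DAU/MAU stickiness: 0.35
```

A daily-habit product pushes this ratio toward 1.0; the talk's point is that most generative AI apps today sit far below the levels mobile apps achieved.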

Let's remember that success takes time. We said this at last year's AI Ascent, and we'll say it again. If you look at the iPhone, some of the first apps in v1 of the App Store were the beer-drinking app, or the lightsaber app, or the flip cup app, or the flashlights: kind of the fun, lightweight demonstrations of new technology. Those eventually became either native apps (the flashlight, et cetera) or utilities and gimmicks. The iPhone came out in 2007. The App Store came out in 2008. It wasn't until 2010 that you saw Instagram, and 2013 that you saw DoorDash. So it took time for companies to discover and harness the net new capabilities of the iPhone in creative ways that we couldn't yet imagine. We think the same thing is playing out in AI. We think we're already seeing a peek into what some of those next legendary companies might be. Here are a few of the ones that captured our attention recently, but I think it's much broader than the set of use cases on this page. As I mentioned, we think customer support is one of the first handful of use cases that's really hitting product-market fit in the enterprise. As with the Klarna story, I don't think that's an exception to the rule. I think that is the rule. AI friendship has been one of the most surprising applications for many of us. It took a few months of thinking for us to wrap our heads around, but I think the user and usage metrics in this category imply very strong user love. And then horizontal enterprise knowledge: we'll hear more from Glean and Dust later today. We think that enterprise knowledge is finally starting to become unlocked. So here are some predictions for what we'll see over the coming year.

Prediction number one: 2024 is the year that we see real applications take us from copilots, which are kind of helpers on the side that suggest things to you and help you, to agents that can actually take the human out of the loop entirely. AI that feels more like a coworker than a tool. We're seeing this start to work in domains like software engineering and customer service, and we'll hear more about this topic today; I think both Andrew Ng and Harrison Chase are planning to speak on it. Prediction number two: one of the biggest knocks against LLMs is that they seem to be parroting the statistical patterns in text and aren't actually taking the time to reason and plan through the tasks at hand. That's starting to change with a lot of new research, like inference-time compute and gameplay-style value iteration: what happens when you give the model the time to actually think through what to do? We think this is a major research thrust for many of the foundation model companies, and we expect it to result in AI that's more capable of higher-level cognitive tasks like planning and reasoning over the next year. And we'll hear more about this later today from Noam Brown of OpenAI.

Prediction number three: we are seeing an evolution from fun consumer or prosumer apps, where you don't really care if the AI says something wrong or crazy occasionally, to real enterprise applications where the stakes are really high, like hospitals and defense. The good thing is that there are different tools and techniques emerging to help bring these LLMs into the five-nines reliability range, from RLHF to prompt training to vector databases, and I'm sure that's something you can compare notes on later today. I think a lot of folks in this room are doing really interesting things to make LLMs more reliable in production. And finally, 2024 is the year that we expect to see a lot of AI prototypes and experiments go into production. And what happens when you do that? It means latency matters, it means cost matters, it means you care about model ownership and data ownership, and it means we expect the balance of compute to begin shifting from pre-training over to inference. So 2024 is a big year. There's a lot of pressure and expectation built into some of these applications as they transition into production, and it's really important that we get it right.

With that, I'll transition to Constantine, who will help us dream about AI over at an even longer time horizon. Thank you.

Thank you, Sonia, and thank you, everyone, for being here today. Pat just set up the "so what": why is this so important, why are we all in the room? And Sonia just walked us through the "what now": where are we in the state of AI? This section is going to be about what's next. We're going to take a step back and think through what this means in the broader context of technology and society at large. There are many types of technology revolutions. There are communication revolutions, like telephony. There are transportation revolutions, like the locomotive. There are productivity revolutions, like the mechanization of the food harvest. We believe that AI is primarily a productivity revolution, and these revolutions follow a pattern. It starts with a human with a tool; that transitions into a human with a machine assistant; and eventually that moves into a human with a machine network. The two predictions that we're going to talk about in this section both relate to this concept of humans working with machine networks.

Let's look at a historical example. The sickle has been around as a tool for the human for over 10,000 years. The mechanical reaper, a human with a machine assistant, was invented in 1831: a single machine system being used by a human. Today, we live in an era of the combine harvester. A combine harvester is tens of thousands of machine systems working together as a complex network. We're starting to use language from AI to describe this: individual machine participants in the system might be called agents, and we're talking about this quite a bit today. The topology, and the way that information is transferred between these agents, we're starting to talk about as reasoning, for example. In essence, we're creating very complicated layers of abstraction above the primitives of AI. I'll talk about two examples today, two examples that we're experiencing right in front of us in knowledge work. The first is software. Software started off as a very manual process. Here's Ada Lovelace, who wrote logical programs with pen and paper and was able to do these computations, but without the assistance of a machine. We've been living in an era where we have significant machine assistance for computation: not just the computer, but the integrated development environment, and increasingly more and more technologies to accelerate the development of software. We're entering a new era in which these systems are working together in a complex machine network. What you see is a series of processes working together to produce complex engineering systems, and what you would see here is agents working together to produce code, not one at a time, but in unison and harmony.

The same pattern is being applied in writing. Writing was originally a human process: a human and a tool. Over time, this has progressed to a human and a machine assistant, and now we have a human leveraging not one but a network of assistants. In my own personal workflow now, anytime I call an AI assistant, I'm not just calling GPT-4; I'm calling Mistral Large, I'm calling Claude 3, and I'm having them work together, and also against each other, to get better answers. This is the future that we're seeing right in front of us. So what? What does this type of revolution mean for everyone in this room, and frankly, everyone outside of this room? In cold, hard economic terms, what this means is significant cost reduction. This chart is the number of workers needed at an S&P 500 company to generate $1 million of revenue. It's going down rapidly, and we're entering an era where it will continue to decline. What does that mean? Faster and fewer. The good news is it's not so that we can do less; it's so that we can do more. And we'll get to that in the next set of predictions. Also fortunate is that all the areas where we've had this type of progress in the past have been deflationary. I'll call out computer software and accessories: the price of computer software, because we're constantly building on each other, has actually gone down over time. Televisions are also here. But some of the most important things to our society, education, college tuition, medical care, housing, have gone up far faster than inflation. And it's perhaps a very happy coincidence that artificial intelligence is poised to help drive down costs in these and many other crucial areas. So that's the first conclusion about the long-term future of artificial intelligence: a massive, cost-reducing productivity revolution that's going to help us do more with less in some of the most critical areas of our society.
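The multi-assistant workflow described above, calling several models and letting them work with and against each other, can be sketched as a simple majority vote over independent answers. The model-call functions below are invented stubs standing in for real API clients:

```python
from collections import Counter

# Hypothetical stand-ins for calls to different model APIs. In a real
# workflow these would be network calls to separate providers; here they
# just return canned answers so the sketch is runnable.
def ask_model_a(prompt: str) -> str:
    return "Paris"

def ask_model_b(prompt: str) -> str:
    return "Paris"

def ask_model_c(prompt: str) -> str:
    return "Lyon"

def ensemble_answer(prompt: str) -> str:
    """Query several assistants and keep the most common answer."""
    answers = [ask(prompt) for ask in (ask_model_a, ask_model_b, ask_model_c)]
    return Counter(answers).most_common(1)[0][0]

print(ensemble_answer("What is the capital of France?"))  # -> Paris
```

Majority voting is only the simplest form of "working against each other"; richer variants have one model critique or rank another's drafts before a final answer is chosen.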

The second is related to what AI is really doing. One year ago, on this stage, we had Jensen Huang make a powerful prediction. He said that in the future, pixels are not going to be rendered; they're going to be generated. Any given image, even information, will be generated. What did he mean by this? Well, as everyone in this room knows, historically, images have been stored as rote memory. Let's think about the letter a, ASCII character number 97. It is stored as a matrix of pixels, the presence or absence of those pixels if we use a very simple black-and-white encoding. Well, we're entering a period in which we already represent concepts like the letter a not as rote storage, not as the presence or absence of pixels, but as a concept: a multi-dimensional point. The image to think about here is the concept of an a, which is generalizable to any given format for that letter, so many different typefaces in this multi-dimensional space, with us sitting at the center. And where do we go from here? Well, the powerful thing is that computers are now starting to understand not just this multi-dimensional point, not just how to take it and render it and generate that image like Jensen was talking about. We are now at the point where we're going to be able to contextualize that understanding. The computer is going to understand the a, be able to render it, understand it's a letter of the alphabet, understand it's the English alphabet, and understand what that means in the broader context of the rendering. The computer is going to look at the word "multi-dimensional" and not even think about the a, but rather understand the full context of why it's being brought up. And amazingly, this future is how we think, how humans think. No longer are we going to be storing rote pixels in computer memory; that's not how we think. I wasn't taught about the letter a as the presence or absence of a pixel on a page. Instead, we're going to be thinking about it as a concept. Powerfully, this is how we've thought about it philosophically for thousands of years.
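One way to make the pixels-versus-concepts distinction concrete is to contrast a bitmap of the letter a with embedding vectors, where renderings of the same letter in different typefaces land near each other in the space. The grids and vectors below are invented toy values for illustration, not real model embeddings:

```python
import math

# Rote storage: the letter "a" as a tiny black-and-white pixel grid (1 = ink).
bitmap_a = [
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
]

# Conceptual storage: toy embedding vectors (invented numbers). Two
# renderings of "a" in different typefaces sit near each other; "b" sits
# farther away.
emb = {
    "a_serif": [0.9, 0.1, 0.4],
    "a_sans":  [0.8, 0.2, 0.5],
    "b_serif": [0.1, 0.9, 0.3],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm

print(cosine(emb["a_serif"], emb["a_sans"]))   # high: same concept, different typeface
print(cosine(emb["a_serif"], emb["b_serif"]))  # lower: different concept
```

The bitmap fixes one rendering forever; the vector captures "a-ness" that any typeface can be generated from, which is the shift the talk is describing.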

Here's my fellow Greek, Plato, 2,500 years ago, who said this idea of a platonic form is what we all aspire to, what we're all striving for: you have this concept, in this case of a letter a, or this concept of software engineering, that we're actually able to build a model around. So what now? We've talked about the second pattern, this idea that we're going to have generalization inside computing itself. What does that mean for each of us? Well, it's going to mean a lot for company building. Today, we're already integrating this into specific processes and KPIs. Sonia just mentioned how Klarna is using this to accelerate their KPIs around customer support. They know they have certain KPIs they can drive toward, and they can have a system that's actually retrieving information and generating great customer experiences. Tomorrow, and this is already happening, we'll see new user interfaces: perhaps a different interface for how the support is actually communicated. And this is what I'm personally incredibly excited about: because of this future in which concepts are rendered, in which everything is generated, eventually the entire company might start working like a neural network. Let me break that down with a specific example. This is a caricature; as with everything in this presentation, in reality everything is continuous, while these are all discrete. This is a caricature of the customer support process. You have customer service with certain KPIs, driven by text-to-voice, language generation, customer personalization, and the like. This feeds into sub-patterns, subtrees that you're optimizing. And eventually you're actually going to have a fully connected graph here: you're going to have feedback from the language generation to the end KPI for servicing the customers. At some point, this is going to be a layer of abstraction where customer support is managed, optimized, and improved by the neural network.

Now let's think about acquiring customers, another important part of the job of building a business. Again, you have the primitives of artificial intelligence, from language generation to a growth engine to ad customization and optimization, and these will all feed into each other. Once again, the powerful conclusion here is that eventually these layers of abstraction will become interoperable to the point where the entire company is able to function like a neural network. Here comes the rise of the one-person company. The one-person company is going to enable us not to do less, but to do more. More problems can be tackled by more people to create a better society. So what's next? The reality is that the people in this room are going to decide what's next. You are the ones who are building this future. We personally are very excited about the future, because we think that AI is positioned to help drive down costs and increase productivity in some of the most crucial areas of our society: better education, healthier populations, more productive populations. And that's the purpose of convening this group today. You all are going to be able to talk about how we can take our technologies, abstract away complexity and mundane details, and actually build something that's much more powerful for the future. I'll hand it off to Sonia to introduce our first speaker. Thank you.

Artificial Intelligence, Innovation, Technology, Generative AI, Business Models, Future Of Work, Sequoia Capital