ENSPIRING.ai: LangChain's Harrison Chase on Building the Orchestration Layer for AI Agents - Training Data
The video interview features Harrison Chase, founder and CEO of LangChain, as he delves into the world of AI agents, a rapidly evolving technology. Chase explains the concept of agents in relation to large language models (LLMs), which involves LLMs deciding the control flow of applications. Unlike traditional generation chains with fixed steps, agents provide flexibility by letting LLMs choose actions to take, often using tools and memory to execute those choices. As agents represent the future progression from copilot models to more autonomous systems, Chase highlights their utility in simplifying processes and allowing more creative human input.
Chase discusses the various steps involved in building agents, from rudimentary processes to more complex, autonomous frameworks. Highlighting his company's role, he positions LangChain as a middle point in the agent spectrum, facilitating the creation of customizable agents. He also touches on the evolution of LangChain, emphasizing innovations such as LangGraph for flexible agent building. The conversation delves into the current agent landscape including advancements and hype cycles, noting the resurgence of interest as more structured, specific cognitive architectures become practical.
Key Vocabularies and Common Phrases:
1. agent [ˈeɪ.dʒənt] - (noun) - A component within an AI system that uses large language models to decide the actions or control flow of an application. - Synonyms: (representative, intermediary, factor)
So maybe first, just to set the table, what exactly are agents?
2. orchestration [ˌɔːrkɪˈstreɪʃən] - (noun) - The arrangement and coordination of the elements of a complex product or process, particularly within AI frameworks. - Synonyms: (coordination, arrangement, management)
A lot of what we're focused on recently is being this orchestration layer that enables the creation of these agents.
3. agentic [əˈdʒɛntɪk] - (adjective) - Relating to AI systems that make decisions autonomously, often in the context of LLMs deciding an application's actions or control flow. - Synonyms: (autonomous, decision-making, self-regulating)
Is agentic behavior more about one versus the other?
4. cognitive architecture [ˈkɒɡnɪtɪv ˈɑːrkɪtɛktʃər] - (noun) - The design or structure of a cognitive model that guides how information flows and decisions are made in AI systems. - Synonyms: (mental design, reasoning framework, intelligence structure)
These are all different variants of cognitive architectures.
5. non-deterministic [nɒn-dɪˈtɜːrmɪˌnɪstɪk] - (adjective) - Describes a process where outcomes are not determined by prior states or predictable paths, often associated with AI's unpredictable behaviors. - Synonyms: (unpredictable, probabilistic, random)
These are still non-deterministic things we're talking about, especially in enterprise settings.
6. bespoke [bɪˈspoʊk] - (adjective) - Custom-made or tailored to specific requirements, often used in the context of personalized solutions or products. - Synonyms: (customized, tailored, made-to-order)
And so we see these more, like, if you think about it as a graph that you're drawing out, we see more and more basically custom and bespoke graphs as people kind of try to constrain and guide the agent along their application
7. ecosystem [ˈiːkoʊˌsɪstəm] - (noun) - A complex network or interconnected system, often used to describe networks within technological or business fields. - Synonyms: (network, framework, environment)
Harrison is a legend in the agent ecosystem as the product visionary.
8. constrained [kənˈstreɪnd] - (adjective) - Subject to limits or restrictions, commonly used in AI contexts where systems need to operate within set boundaries. - Synonyms: (restricted, limited, bounded)
They're actually quite simple to build, but we see them going off the rails a lot, and we see people wanting more constrained things.
9. autonomous [ɔːˈtɒnəməs] - (adjective) - Capable of operating independently without human intervention, particularly relevant in the context of AI systems. - Synonyms: (independent, self-governing, self-reliant)
And at the other extreme, you've got these autonomous agent type things, and then there's just this whole spectrum in between
10. spectrum [ˈspɛktrəm] - (noun) - A wide range or assortment, often used to denote the varying levels or continuous sequence in a particular category or subject. - Synonyms: (range, array, scale)
And at the other extreme, you've got these autonomous agent type things, and then there's just this whole spectrum in between
It's so early on, there's so much to be built. Yeah, GPT-5 is going to come out and it'll probably make some of the things you did irrelevant, but you're going to learn so much along the way. And this is, I strongly, strongly believe, a transformative technology. And so the more that you learn about it, the better.
Hi, and welcome to Training Data. We have with us today Harrison Chase, founder and CEO of LangChain. Harrison is a legend in the agent ecosystem as the product visionary who first connected LLMs with tools and actions. And LangChain is the most popular agent-building framework in the AI space. Today, we're excited to ask Harrison about the current state of agents, the future potential, and the path ahead. Harrison, thank you so much for joining us, and welcome to the show. Of course, thank you for having me.
So maybe just to set the stage, agents are the topic that everybody wants to learn more about. And you've been at the epicenter of agent building pretty much since the LLM wave first got going. And so maybe first, just to set the table, what exactly are agents? I think defining agents is actually a little bit tricky and people probably have different definitions of them, which I think is pretty fair because it's still pretty early on in the lifecycle of everything LLMs and agent related. The way that I think about agents is that it's when an LLM is kind of like deciding the control flow of an application.
So what I mean by that is if you have a more traditional kind of RAG chain, or retrieval-augmented generation chain, the steps are generally known ahead of time. First you're going to maybe generate a search query, then you're going to retrieve some documents, then you're going to generate an answer and you're going to return that to a user. And it's a very fixed sequence of events. I think when I think about things that start to get agentic, it's when you put an LLM at the center of it and let it decide what exactly it's going to do. Maybe sometimes it will look up a search query, other times it might not. It might just respond directly to the user. Maybe it will look up a search query, get the results, look up another search query, look up two more search queries, and then respond. And so you kind of have the LLM deciding the control flow.
I think there are some other maybe more buzzwordy things that fit into this. So, like tool usage is often associated with agents and I think that makes sense, because when you have an LLM deciding what to do, the main way that it decides what to do is through tool usage. So I think those kind of go hand in hand, there's some aspect of memory that's commonly associated with agents. And I think that also makes sense, because when you have an LLM deciding what to do, it needs to remember what it's done before. And so tool usage and memory are kind of loosely associated. But to me, when I think of an agent, it's really having an LLM decide the control flow of your application.
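To make that distinction concrete, here is a minimal, illustrative sketch in plain Python. The helper names (call_llm, search) are hypothetical stand-ins rather than LangChain APIs: the chain hard-codes its steps, while the agent lets the model's own output decide what happens next.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real app would call a chat model here.
    return "ANSWER: (model output would go here)"

def search(query: str) -> str:
    # Placeholder: a real app would call a retrieval or search tool here.
    return f"(documents retrieved for {query!r})"

def rag_chain(question: str) -> str:
    """Fixed control flow: the steps and their order are hard-coded."""
    query = call_llm(f"Write a search query for: {question}")
    docs = search(query)
    return call_llm(f"Answer {question!r} using:\n{docs}")

def agent(question: str, max_steps: int = 5) -> str:
    """Agentic control flow: at each step the LLM decides whether to search
    again or answer, so the sequence of actions is not fixed in advance."""
    context = ""
    for _ in range(max_steps):
        decision = call_llm(
            f"Question: {question}\nContext so far:\n{context}\n"
            "Reply with 'SEARCH: <query>' or 'ANSWER: <answer>'."
        )
        if decision.startswith("SEARCH:"):
            context += "\n" + search(decision.removeprefix("SEARCH:").strip())
        else:
            return decision.removeprefix("ANSWER:").strip()
    return call_llm(f"Answer {question!r} using:\n{context}")
```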
And, Harrison, a lot of what I just heard from you is around decision making. And I've always thought about agents as sort of action taking. Do those two things go hand in hand? Is agentic behavior more about one versus the other? How do you think about that? I think they go hand in hand. I think a lot of what we see agents doing is deciding what actions to take, for all intents and purposes. And I think the biggest difficulty with action taking is deciding what the right actions to take are. So I do think that solving one kind of leads naturally to the other.
And after you decide the action as well, there's generally the system around the LLM that then goes and executes that action and kind of, like, feeds it back into the agent. So I do think they go kind of hand in hand. So, Harrison, it seems like the main distinction, then, between an agent and something like a chain is that the LLM itself is deciding what step to take next, what action to take next, as opposed to these things being hard coded. Is that a fair way to distinguish agents? Yeah, I think that's right. And there's different gradients as well.
So as an extreme example, you could have basically a router that decides which path to go down, so there's maybe just a classification step in your chain. The LLM is still deciding what to do, but it's a very simplistic way of deciding what to do. And at the other extreme, you've got these autonomous agent type things, and then there's just this whole spectrum in between. So I'd say that's largely correct, although I'll just note that there's a bunch of nuance and gray area, as there is with most things in the LLM space these days. Got it. A spectrum from controlled flows to fully autonomous decision making and logic, and all of those are on the spectrum of agents.
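As a rough sketch of that simplest gradient, a router is a single classification call that picks which fixed path to run; everything downstream is still an ordinary chain. The names here are hypothetical placeholders, not a real API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call that returns a category label.
    return "billing"

def billing_chain(question: str) -> str:
    return f"(billing answer to {question!r})"

def tech_support_chain(question: str) -> str:
    return f"(tech-support answer to {question!r})"

def router(question: str) -> str:
    """The LLM only picks the branch; each branch is a fixed chain."""
    label = call_llm(
        f"Classify this question as 'billing' or 'tech_support': {question}"
    ).strip()
    branch = billing_chain if label == "billing" else tech_support_chain
    return branch(question)
```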
Interesting. What role do you see LangChain playing in the agent ecosystem? I think right now we're really focused on making it easy for people to create something in the middle of that spectrum. And for a bunch of reasons, we've seen that that's kind of the best spot to be building agents in at the moment. So we've seen some of these more fully autonomous things get a lot of interest and prototypes out the door, and there's a lot of benefits to the fully autonomous things. They're actually quite simple to build, but we see them going off the rails a lot, and we see people wanting more constrained things, but a little bit more flexible and powerful than chains.
And so a lot of what we're focused on recently is being this orchestration layer that enables the creation of these agents, particularly these things in the middle between chains and autonomous agents. And I can dive into a lot more about what exactly we're doing there. But at a high level, being that piece of orchestration framework is kind of where we imagine LangChain sitting. Got it. So there's chains, there's autonomous agents, there's a spectrum in between, and your sweet spot is somewhere in the middle, enabling people to build agents.
Yeah. And obviously that's changed over time, so it's fun to reflect on the evolution of LangChain. So I think when LangChain first started, it was actually a combination of chains. And then we had this one class, this agent executor class, which was basically this autonomous agent thing. We started adding in a few more controls to that class, but eventually we realized that people wanted way more flexibility and control than we were giving them with that one class. Recently we've been really heavily invested in LangGraph, which is an extension of LangChain that's really aimed at customizable agents that sit somewhere in the middle.
And so kind of like our focus has evolved over time as the space has as well. Fascinating. Maybe one more final setting-the-stage question. One of our core beliefs is that agents are the next big wave in AI, and that we're moving as an industry from copilots to agents. I'm curious if you agree with that take and why or why not? Yeah, I generally agree with that take. I think the reason why that's so exciting to me is that a copilot still relies on having this human in the loop. And so there's almost an upper bound on the amount of work that you can have done by an external system. And so it's a little bit limiting in that sense.
I do think there's some really interesting thinking to be done around what is the right UX and human-agent interaction patterns. But I do think they'll be more along the lines of an agent doing something and maybe checking in with you, as opposed to a copilot that's constantly in the loop. I just think it's more powerful and gives you more leverage the more that they're doing, which is very paradoxical as well, because the more you let it do things by itself, the more risk there is that it's messing up or going off the rails. And so I think striking the right balance is going to be really, really interesting.
I remember back in, I think it was March-ish of 2023, there were a few of these autonomous agents that really captured everyone's imaginations: BabyAGI, AutoGPT, a few of these. And I remember Twitter was very, very excited about it. And it seems like that first iteration of an agent architecture hasn't quite met people's expectations, I think. Why do you think that is? And where do you think we are in the agent hype cycle now? Yeah, I think maybe thinking about the agent hype cycle first. I think AutoGPT was definitely the start, and it's one of the most popular GitHub projects ever, so one of the peaks of the hype cycle. And I'd say that started in the spring to summer of 2023-ish, then I personally feel like there was a bit of a lull from the late summer to basically the start of the new year in 2024.
And I think starting in 2024, we've started to see a few more realistic things come online. I'd point out some of the work that we've done at LangChain with Elastic, for example; they have an Elastic assistant, an agent, in production. And so we're seeing that. We saw the Klarna customer support bot come online and get a lot of hype. We've seen Devin, we've seen Sierra, these other companies start to emerge in the agent space. And so I think with that hype cycle in mind, talking about why the AutoGPT-style architecture didn't really work: it was very general and very unconstrained. And I think that made it really exciting and captivated people's imaginations.
But I think practically, for things that people wanted to automate to provide immediate business value, it's actually a much more specific thing that they want these agents to do, and there are really a lot more rules that they want the agents to follow or specific ways they want them to do things. And so I think in practice, what we're seeing with these agents is they're much more custom cognitive architectures, which is what we call them, where there's a certain way of doing things that you generally want an agent to do, and there's some flexibility in there, for sure. Otherwise you would just code it. But it's a very directed way of thinking about things. And that's most of the agents and assistants that we see today.
And that's just more engineering work, and that's just more trying things out and seeing what works and what doesn't work. And it's harder to do, so it just takes longer to build. And I think that's why that didn't exist a year ago or something like that. Since you mentioned cognitive architectures, I love the way that you think about them. Maybe, can you just explain, what is a cognitive architecture? Is there a good mental framework for how we should be thinking about them? Yeah. So the way that I think about a cognitive architecture is basically, what's the system architecture of your LLM application?
And so what I mean by that is, if you're building an LLM application, there's some steps in there that use LLMs. What are you using these LLMs to do? Are you using them to just generate the final answer? Are you using them to route between two different things? Do you have a pretty complex one with a lot of different branches and maybe some cycles repeating, or do you have a pretty simple loop where you basically run the LLM in a loop? These are all different variants of cognitive architectures. Cognitive architecture is just a fancy way of saying, from the user input to the user output, what's the flow of data, of information, of LLM calls that happens along the way?
And what we've seen more and more, especially as people are trying to get agents actually into production, is that the flow is specific to their application and their domain. So there's maybe some specific checks they want to do right off the bat, there's maybe three specific steps that it could take after that, and then each one maybe has an option to loop back or has two separate sub-steps. And so, if you think about it as a graph that you're drawing out, we see more and more basically custom and bespoke graphs as people try to constrain and guide the agent along their application. The reason I call it a cognitive architecture is just that I think a lot of the power of LLMs is around reasoning and thinking about what to do. And so I might have a cognitive mental model for how to do a task, and I'm basically just encoding that mental model into some kind of software system, some architecture.
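One way to picture such a bespoke graph is a handful of nodes with a few branches and a bounded loop, encoded directly in code. This is a purely illustrative sketch in plain Python, with made-up step names, rather than any particular framework's API.

```python
# Each node takes the running state dict and returns it updated; the control
# flow below (one branch, one bounded loop) is the "cognitive architecture".
def check_policy(state):
    state["in_scope"] = "refund" in state["question"].lower()
    return state

def draft_answer(state):
    state["draft"] = f"(draft answer for {state['question']!r})"
    return state

def review_draft(state):
    state["approved"] = bool(state.get("draft"))
    return state

def escalate(state):
    state["answer"] = "(escalated to a human)"
    return state

def run(question: str) -> str:
    state = {"question": question}
    state = check_policy(state)
    if not state["in_scope"]:            # branch decided right off the bat
        return escalate(state)["answer"]
    for _ in range(3):                   # bounded loop: draft, review, maybe redraft
        state = review_draft(draft_answer(state))
        if state["approved"]:
            return state["draft"]
    return escalate(state)["answer"]
```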
Do you think that's the direction the world is going? Because I heard two things from you there. One was it's very bespoke, and second was it's fairly brute force; it's fairly hard coded in a lot of ways. Do you think that's where we're headed? Or do you think that's a stopgap, and at some point more elegant architectures, or a series of defaults, sort of reference architectures, will emerge? That is a really, really good question, and one I spend a lot of time thinking about. So at an extreme, you could make an argument that if the models get really, really good and reliable at planning, then the best thing you could possibly have is just this for loop that runs, calls the LLM, decides what to do, takes the action, and loops again.
And all of these constraints on how I want the model to behave, I just put that in my prompt and the model follows that explicitly. I do think the models will get better at planning and reasoning for sure. I don't quite think they'll get to the level where that will be the best way to do things, for a variety of reasons. One, efficiency: if you know that you always want to do step A after step B, you can just put that in order. And two, reliability as well. These are still non-deterministic things we're talking about, especially in enterprise settings. You probably want a little bit more comfort that if it's always supposed to do step A after step B, it's actually always going to do step A after step B. I think it will get easier to create these things. Like, I think they'll maybe start to become a little bit less and less complex.
But actually, this is maybe a hot take or interesting take that I had. The architecture of just running it in a loop you could think of as a really simple but general cognitive architecture, and then what we see in production is custom and complicated cognitive architectures. I think there's a separate axis, which is complicated but generic cognitive architectures. And so this would be something like a really complicated planning step and reflection loop, or tree of thoughts, or something like that. And I actually think that quadrant will probably go away over time, because I think a lot of that generic planning and generic reflection will get trained into the models themselves, but there will still be a bunch of non-generic planning, non-generic reflection, non-generic control loops that are never going to be in the models, basically, no matter what. And so I think those two ends of the spectrum I'm pretty bullish on.
I guess you can almost think about it as the LLM does the very general agentic reasoning, but then you need domain-specific reasoning, and that's the sort of stuff that you can't really build into one general model. 100%. I think a way of thinking about the custom cognitive architectures is you're taking the planning responsibility away from the LLM and putting it onto the human. And some of that planning you'll move more and more towards the model and more and more towards the prompt. But I think a lot of tasks are actually quite complicated in some of their planning, and so I think it will be a while before we get things that are just able to do that super, super reliably off the shelf. It seems like we've simultaneously made a ton of progress on agents in the last six months or so. Like, I was reading a paper, the Princeton SWE-agent paper, where their coding agent can now solve 12.5% of GitHub issues versus, I think, 3.8% when it was just RAG.
So it feels like we've made a ton of progress in the last six months, but 12.5% is not good enough to replace even an intern. And so it feels like we still have a ton of room to go. I'm curious where you think we are, both for general agents and also for your customers that are building agents. Are they getting to, I assume not five nines of reliability, but are they getting to the thresholds they need to deploy these agents out to actual customer-facing deployments? Yeah, so the SWE-agent is, I would say, a relatively general-ish agent in that it is expected to work across a bunch of different GitHub repos. I think if you look at something like v0 by Vercel, that's probably much more reliable than 12.5%. And so I think that speaks to, like, yeah, there are definitely custom agents that, not five nines of reliability, but are being used in production. So, like, Elastic. I think we've talked publicly about how they've done, I think, multiple agents at this point, and I think this week is RSA, and I think they're announcing something new at RSA that's an agent.
And, yeah, I don't have the exact numbers on reliability, but they're reliable enough to be shipped into production. General agents are still tough. This is where longer context windows, better planning, better reasoning will help those general agents. You shared with me this great Jeff Bezos quote, which is, focus on what makes your beer taste better. I think it's referring to the fact that in the early 20th century, breweries were trying to generate their own electricity. I think it's a similar question a lot of companies are thinking through today. Do you think that having control over your cognitive architecture really makes your beer taste better, so to speak, or do you cede control to the model and just build the UI and product?
I think it maybe depends on the type of cognitive architecture that you're building. Going back to some of the discussion earlier, if you're building a generic cognitive architecture, I don't think that makes your beer taste better. I think the model providers will work on this general planning; they'll work on these general cognitive architectures that you can try right off the bat. On the other hand, if your cognitive architectures are basically you codifying a lot of the way that your support team thinks about something, or internal business processes, or the best way that you know to develop code, or develop this particular type of code or this particular type of application, then yeah, I think that absolutely makes your beer taste better. Especially if we're going towards a place where these applications are doing work, then the logic, the bespoke business logic or mental models (I'm anthropomorphizing these LLMs a lot right now) that these things need to do the best work possible, 100%, I think that's the key thing that you're selling in some capacity. I think UX and UI and distribution and everything absolutely still play a part. But, yeah, I draw this distinction between general versus custom.
Harrison, before we get into some of the details on how people are building these things, can we pop up a level real quick? Our founder, Don Valentine, was famous for asking the question, so what? And so my question to you is, so what? Let's imagine that autonomous agents are working flawlessly. What does that mean for the world? How is life different if and when that occurs? I think at a high level it means that as humans, we're focusing on a different set of things. There's a lot of repeated work that goes on in a lot of industries at the moment, and the idea of agents is that a lot of that will be automated away, leaving us to think at a higher level about what these agents should be doing, and leveraging or building upon their outputs to do more creative, higher-leverage things.
Basically, I think you could imagine bootstrapping an entire company where you're outsourcing a lot of the functions that you would normally have to hire for. And so you could play the role of a CEO with an agent for marketing, an agent for sales, something like that, and basically outsource a lot of this work to agents, leaving you to do a lot of the interesting strategic thinking and product thinking. And maybe this depends a little bit on what your interests are, but I think at a high level, it will free us up to do what we want to do and what we're good at, and automate a lot of the things that we might not necessarily want to do. Are you seeing any interesting examples of this today, sort of live and in production? I mean, there's two main categories or areas of agents that are starting to get more traction: one's customer support, one's coding.
So I think customer support is a pretty good example of this. Oftentimes people need customer support; we need customer support at LangChain. If we could hire agents to do that, that would be really powerful. Coding is interesting because, I mean, this is maybe a more philosophical debate, but I think there's some aspects of coding that are really creative and do require lots of product thinking, lots of positioning and things like that. There's also aspects of coding that get in the way of a lot of the creativity that people might have. So if my mom has an idea for a website, she doesn't know how to code that up, but if there was an agent that could do that, she could focus on the idea for the website and basically the scoping of the website, and automate the rest. And so I'd say customer support, absolutely, that's having an impact today.
Coding, there is a lot of interest there. I don't think it's as mature as customer support, but in terms of areas where there are a lot of people doing interesting things, that would be a second one to call out. Your comment on coding is interesting because I think this is one of the things that has us very optimistic about AI. It's this idea of closing the gap from idea to execution, or closing the gap from dream to reality, where you can come up with a very creative, compelling idea, but you may not have the tools at your disposal to be able to put it into reality. And AI seems like it's well suited for that. I think Dylan at Figma talks about this a lot too.
Yeah, I think it goes back to this idea of automating away the things that get in the way of making. I like the phrasing of idea to reality. It automates away the things that you don't necessarily know how to do or want to think about, but that are needed to create whatever you want to create. I think it also, one of the things that I spend a lot of time thinking about is what it means to be a builder in the age of generative AI and in the age of agents. What it means to be a builder of software today means you either have to be an engineer or hire engineers or something like that. But I think being a builder in the age of agents and generative AI just allows people to build a way larger set of things than they could build today, because they have at their fingertips all this other knowledge and all these other builders they can hire and use for very, very cheap.
I mean, I think some of the language around commoditization of intelligence, as these LLMs are providing intelligence for free, does speak to enabling a lot of these new builders to emerge. You mentioned reflection and chain of thought and other techniques. Maybe, can you just say a word on what we've learned so far about what some of these cognitive architectures are capable of doing for agentic performance, and maybe, I'm curious what you think are the most promising cognitive architectures? Yeah, maybe it's worth talking a little bit about why the AutoGPT things didn't work, because I think a lot of these cognitive architectures kind of emerged to counteract some of that. Way back when, there was basically the problem that LLMs couldn't even reason well enough about what they should do as the first step.
And so I think prompting techniques like chain of thought turned out to be really helpful there. They basically gave the LLM more space to think step by step about what it should do for a specific single step. Then that actually started to get trained into the models more and more, and they kind of did that by default, since everyone wanted the models to do that anyway, so you should train that into the models. Then there was a great paper by Shunyu Yao called ReAct, which was basically the first cognitive architecture for agents, or something like that. And the thing that it did there was, one, it asked the LLM to predict what to do.
That's the action. But then it added in this reasoning component, so it's kind of similar to chain of thought in that it added in this reasoning component. It put that in a loop: it does this reasoning thing before each step, and you kind of run it from there. And actually, that explicit reasoning step has become less and less necessary as the models have it trained into them, just like they have chain of thought trained into them.
So if you see people doing ReAct-style agents today, they're oftentimes just using function calling without the explicit thought process that was actually in the original ReAct paper. But it's still this loop that has become synonymous with the ReAct paper. So that covers a lot of the difficulties initially with agents, and I wouldn't entirely describe those as cognitive architectures; I'd describe those as prompting techniques. But, okay, so now we've got this working.
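A sketch of what that function-calling loop tends to look like today; the chat and tool helpers below are stubs, not the actual OpenAI or LangChain interfaces, so treat the shapes as assumptions.

```python
import json

def chat(messages: list, tools: list) -> dict:
    # Placeholder for a chat model with native function calling.
    return {"role": "assistant", "content": "All done.", "tool_calls": []}

def lookup_weather(city: str) -> str:
    return f"(weather report for {city})"

TOOLS = {"lookup_weather": lookup_weather}

def function_calling_agent(question: str, max_steps: int = 5) -> str:
    """Run the model in a loop; each turn it either calls a tool or answers.
    There is no separately written-out 'thought' -- the reasoning is implicit."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = chat(messages, tools=list(TOOLS))
        messages.append(reply)
        if not reply.get("tool_calls"):       # the model chose to answer directly
            return reply["content"]
        for call in reply["tool_calls"]:      # otherwise execute each requested tool
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return "(stopped after max_steps)"
```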
Now, what are some of the issues? The two main issues are basically planning, and then realizing that you're done. And so by planning, I mean, when I think about what to do, subconsciously or consciously I put together a plan of the order that I'm going to do the steps in, and then I go and do each step. And basically, models struggle with that. They struggle with long-term planning; they struggle with coming up with a good long-term plan. And then if you're running it in this loop, at each step you're doing a part of the plan, and maybe it finishes or maybe it doesn't finish. So if you just run it in this loop, you're implicitly asking the model to first come up with a plan, then track its progress on the plan and continue along that.
So I think some of the planning cognitive architectures that we've seen have been: okay, first let's add an explicit step where we ask the LLM to generate a plan, then let's go step by step through that plan, and we'll make sure that we do each step. And that's just a way of enforcing that the model generates a long-term plan and actually does each step before going on, and doesn't just generate a five-step plan, do the first step, and then say, okay, I'm done, I finished, or something like that.
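A minimal plan-and-execute sketch of that idea, in plain Python with hypothetical helpers rather than any specific framework:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return "1. gather background\n2. summarize findings\n3. write the answer"

def execute_step(step: str, context: str) -> str:
    # Placeholder: in practice each step would itself be an LLM/tool call.
    return f"(result of {step!r})"

def plan_and_execute(task: str) -> str:
    """Ask for an explicit plan up front, then force execution of every step,
    so the loop can't quietly stop after step one."""
    plan = [line.strip()
            for line in call_llm(f"List the steps to: {task}").splitlines()
            if line.strip()]
    context = ""
    for step in plan:
        context += "\n" + execute_step(step, context)
    return call_llm(f"Task: {task}\nWork so far:{context}\nWrite the final answer.")
```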
And then I think a separate but related thing is this idea of reflection, which is basically: has the model actually done its job well? I could generate a plan where I'm going to go get this answer. I could go get an answer from the Internet. Maybe it's just completely the wrong answer, or I got bad search results or something like that. I shouldn't just return that answer. I should think about whether I got the right answer or whether I need to do something again. If you're just running it in a loop, you're asking the model to do this implicitly. So there have been some cognitive architectures that have emerged to overcome that, that basically add that in as an explicit step, where they do an action or a series of actions and then ask the model to explicitly think about whether it's done it correctly or not.
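And a similarly hedged sketch of making reflection an explicit step rather than leaving it implicit (all helper names are made up):

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call acting as the critic.
    return "PASS"

def attempt(task: str, feedback: str = "") -> str:
    # Placeholder for the agent's actual work on the task.
    return f"(answer to {task!r}, taking into account: {feedback or 'nothing'})"

def reflect_and_retry(task: str, max_tries: int = 3) -> str:
    """After each attempt, explicitly ask the model to critique the result;
    retry with that critique as feedback instead of returning blindly."""
    feedback, answer = "", ""
    for _ in range(max_tries):
        answer = attempt(task, feedback)
        critique = call_llm(
            f"Task: {task}\nAnswer: {answer}\nReply PASS, or explain what is wrong."
        )
        if critique.strip().startswith("PASS"):
            return answer
        feedback = critique            # feed the critique into the next attempt
    return answer
```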
And so planning and reflection are probably two of the more popular generic cognitive architectures. There's a lot of custom cognitive architectures, but that's all super tied to business logic and things like that. Planning and reflection are generic ones, and I'd expect these to become more and more trained into the models by default, although I do think there's a very interesting question of how good they will ever get in the models, but that's probably a separate, longer-term conversation. Harrison, one of the things that you talked about at AI Ascent was UX, which we would normally think about as being on the opposite end of the spectrum from architecture. The architecture is behind the scenes; the UX is the thing out in front. But it seems like we're in this interesting world where the UX can actually influence the effectiveness of the architecture, by allowing you, for example with Devin, to rewind to the point in the planning process where things started to go off track.
Can you just say a couple of words about UX and the importance of it in agents or LLMs more generally, and maybe some interesting things that you've seen there? Yeah, I'm super fascinated by UX, and I think there's a lot of really interesting work to be done here. I think the reason it's so important is because these LLMs still aren't perfect and still aren't reliable and have a tendency to mess up. And I think that's why chat is such a powerful UX for some of the initial interactions and applications. You can easily see what it's doing. It streams back its response. You can easily correct it by responding to it. You can easily ask follow-up questions. And so I think chat has clearly emerged as the dominant UX at the moment.
I do think there are downsides to chat. It's generally one AI message, one human message. The human is very much in the loop; it's very much a copilot-esque type of thing. And I think the more that you can remove the human from the loop, the more it can do for you and the more it can work for you. And I just think that's incredibly powerful and enabling. However, again, LLMs are not perfect and they mess up. So how do you balance these two things?
I think some of the interesting ideas that we've seen, talking about Devin, are this idea of basically having a really transparent list of everything the agent has done, right? You should be able to know what the agent has done. That seems like step one. Step two is probably being able to modify what it's doing or what it has done. So if you see that it messed up step three, you can maybe rewind there, give it some new instructions, or even just edit its decision manually and go from there.
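One rough way to picture that transparent-trace, rewind-and-edit pattern (a plain-Python sketch with invented names, not how Devin or any particular framework actually implements it): keep a checkpoint of state after every step so a human can inspect the full run, roll back to a step, edit it, and resume.

```python
import copy

class AgentRun:
    """Record a checkpoint after every step so a human can see everything the
    agent has done, rewind to any step, edit the state, and continue from there."""

    def __init__(self, state: dict):
        self.checkpoints = [copy.deepcopy(state)]

    def step(self, fn):
        """Apply one agent step and checkpoint the result."""
        new_state = fn(copy.deepcopy(self.checkpoints[-1]))
        self.checkpoints.append(new_state)
        return new_state

    def rewind(self, step_index: int, edits: dict):
        """Drop everything after step_index, apply human edits, and resume from there."""
        self.checkpoints = self.checkpoints[: step_index + 1]
        patched = copy.deepcopy(self.checkpoints[-1])
        patched.update(edits)
        self.checkpoints.append(patched)
        return patched
```

In this framing, the transparent "list of everything the agent has done" is just the checkpoint history.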
There are other interesting UX patterns besides this rewind and edit. One is the idea of an inbox, where the agent can reach out to the human as needed. So you've maybe got, you know, ten agents running in parallel in the background, and every now and again one of them needs to ask the human for clarification. And so you've got an email inbox where the agent is sending you, like, help, help me, I'm at this point, I need help, or something like that. And you go and help it at that point. A similar one is reviewing its work, right? And I think this is really powerful. We've seen a lot of agents for writing different types of things, research-style agents. There's a great project, GPT Researcher, which has some really interesting architectures around agents.
And I think that's a great place for this type of review. You can have an agent write a first draft, and then I can review it and leave comments, basically. And there's a few different ways that can actually happen. The least involved way is I just leave a bunch of comments in one go, send those all to the agent, and then it goes and fixes all of them. Another UX that's really, really interesting is this collaborative, at-the-same-time one, like Google Docs, but with a human and an agent working at the same time: I leave a comment, the agent fixes it while I'm making another comment, or something like that. I think that's a separate UX that is pretty complicated to think about setting up and getting working, and yeah, I think that's interesting.
There's one other UX thing that I think is interesting to think about, which is basically, how do these agents learn from these interactions, right? We're talking about a human correcting the agent a bunch or giving feedback. It would be so frustrating if I had to give the same piece of feedback 100 different times, right? That would suck. And so what's the architecture of the system that enables it to start to learn from that? I think that's really interesting. And all of these are still to be figured out. We're super early on in the game for figuring out a lot of these things, but this is a lot of what we spend our time thinking about. Well, actually, that reminds me. I don't know if you know this or not, but you're sort of legendary for the degree to which you are present in the developer community, paying very close attention to what's happening there and the problems that people are having.
So there are the problems that LangChain sort of directly addresses and you're building a business to solve, and then I imagine you encounter a bunch of other problems that are just sort of out of scope. And so I'm curious: within the world of problems that developers who are trying to build with LLMs or trying to build in AI are encountering today, what are some of the interesting problems that you guys are not directly solving, that maybe you would solve if you had another business? Yeah, I mean, I think two of the obvious areas are at the model layer and at the database layer. So, like, we're not building a vector database. I think it's really interesting to think about what the right storage is, but we're not doing that. We're not building a foundation model, and we're also not doing fine-tuning of models. Like, we want to help with the data curation bit, absolutely.
But we're not building the infrastructure for fine-tuning; there's Fireworks and other companies like that. I think those are really interesting. Those are probably at the immediate infra layer in terms of what people are running into at this moment. I do think there's a second question there, or a second thought process, which is: if agents do become the future, what are other infra problems that are going to emerge because of that? And I think it's way too early for us to say which of these we will or won't do, because, to be quite frank, we're not at the place where agents are reliable enough to have this whole economy of agents emerge. But I think identity verification for agents, permissioning for agents, payments for agents. There's a really cool startup for payments for agents. Actually, this is the opposite.
Agents could pay humans to do things. I think that's really interesting to think about. If agents do become prevalent, what is the type of infra that is going to be needed for that? Which I think is a little bit separate from what's needed in the developer community for building LLM applications, because LLM applications are here; agents are starting to get here, but not fully here. I think it's just different levels of maturity for these types of companies. Harrison, you mentioned fine-tuning and the fact that you guys aren't going to go there. It seems like the two, prompting and cognitive architectures on the one hand and fine-tuning on the other, are almost substitutes for each other. How do you think about the current state of how people should be using prompting versus fine-tuning, and how do you think that plays out?
Yeah, I don't think that fine-tuning and cognitive architectures are substitutes for each other. The reason I don't think they are, and I actually think they're complementary in a bunch of senses, is that when you have a more custom cognitive architecture, the scope of what you're asking each agent or each node or each piece of the system to do becomes much more limited. And that actually becomes really, really interesting for fine-tuning. Maybe actually, on that point, can you talk a little bit about LangSmith and LangGraph? Like, Pat had just asked you what problems are you not solving; I'm curious, what problems are you solving? And as it relates to all the problems with agents that we were talking about earlier, the things that you are doing to make managing state more manageable, to make the agents more controllable, so to speak, how do your products help people with that?
Yeah, so maybe even backing up a little bit and talking about LangChain when it first came out: I think the LangChain open source project really tackled a few problems. One of them is basically standardizing the interfaces for all these different components. So we have tons of integrations with different models, different vector stores, different tools, different databases, things like that. That's always been a big value prop of LangChain and why people use LangChain. In LangChain, there also is a bunch of higher-level interfaces for easily getting started off the shelf with RAG or SQL Q&A or things like that. And there's also a lower-level runtime for dynamically constructing chains, and by chains I mean we can call them DAGs as well, like directed flows. And I think that distinction is important, because when we talk about LangGraph and why LangGraph exists, it's to solve a slightly different orchestration problem, which is that you want these customizable and controllable things that have loops. Both are still in the orchestration space.
But I draw this distinction between a chain and these cyclical loops with LangGraph. And when you start having cycles, there's a lot of other problems that come into play, one of the main ones being this persistence layer, so that you can resume and so that you can have them running in the background in an async manner. And so we're starting to think more and more about deployment of these long-running, cyclical, human-in-the-loop type applications, and we'll start to tackle that more and more.
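For flavor, here is a rough sketch of that kind of cyclic graph, loosely following LangGraph's documented building blocks (StateGraph, nodes, conditional edges). Exact imports and signatures vary by version, the node bodies are stubs, and the checkpointer that provides the persistence layer is only noted in a comment, so treat this as an approximation rather than canonical usage.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    draft: str
    approved: bool

def agent_node(state: State) -> dict:
    # Would call an LLM here; stubbed for illustration.
    return {"draft": f"(draft answer to {state['question']!r})"}

def review_node(state: State) -> dict:
    # Would call an LLM, or pause for a human, to check the draft; stubbed.
    return {"approved": True}

def route(state: State) -> str:
    return "done" if state["approved"] else "retry"

graph = StateGraph(State)
graph.add_node("agent", agent_node)
graph.add_node("review", review_node)
graph.set_entry_point("agent")
graph.add_edge("agent", "review")
graph.add_conditional_edges("review", route, {"retry": "agent", "done": END})

# compile() can also take a checkpointer (e.g. a SQLite saver) so long-running,
# human-in-the-loop graphs can pause and resume; omitted here for brevity.
app = graph.compile()
print(app.invoke({"question": "How do I reset my password?", "draft": "", "approved": False}))
```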
Then the piece that spans across all of this is LangSmith, which we've been working on basically since the start of the company. And that's observability and testing for LLM applications. Basically, from the start we noticed that if you're putting an LLM at the center of your system, LLMs are not deterministic; you've got to have good observability and testing for these types of things in order to have confidence to put it in production. So we started building LangSmith, which works with and without LangChain. There's some other things in there, like a prompt hub so that you can manage prompts, and a human annotation queue to allow for this human review, which I actually think is crucial. In all of this, it's important to ask: what's actually new here? And I think the main thing that's new is these LLMs, and the main new thing about LLMs is that they're non-deterministic. So observability matters a lot more, and testing is a lot harder. And specifically, you probably want a human to review things more often than you want them to review, like, a software test or something like that. And a lot of the tooling around that is where LangSmith helps.
Harrison, do you have a heuristic for where existing observability, existing testing, existing fill-in-the-blank will also work for LLMs, versus where LLMs are sufficiently different that you need a new product, a new architecture, a new approach? Yeah, I've thought about this a bunch on the testing side. On the observability side, I feel like it's almost more obvious that there's something new that's needed here, and maybe that's just because of these multi-step applications; you just need a level of observability to get these insights. Datadog is great at kind of like monitoring, but for specific traces I don't think you get the same level of insight that you can easily get with something like LangSmith, for example.
And I think a lot of people spend time looking at specific traces, because they're trying to debug things that went wrong on specific traces, because there's all this non-determinism that happens when you use an LLM. And so observability has always kind of felt like there's something new to be built there. Testing is really interesting, and I've thought about this a bunch. I think there's maybe two new, unique things about testing. One is basically this idea of pairwise comparisons. When I run software tests, I don't generally compare the results; it's either pass or fail for the most part. And if I am comparing them, maybe I'm comparing the latency spikes or something, but it's not necessarily pairwise between two individual unit tests. But if we look at some of the evals for LLMs, the main eval that's trusted by people is this LMSYS Chatbot Arena style thing, where you literally judge two things.
Side by side. And so I think this pairwise thing is pretty important and pretty distinctive from traditional software testing. I think another component is that, depending on how you set up evals, you might not have a 100% pass rate at any given point in time. It actually becomes important to track that over time and see that you're improving, or at least not regressing. I think that's different from software testing, because there you generally have everything passing.
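A toy sketch of what a pairwise, LLM-as-judge comparison with a tracked win rate can look like (illustrative only; the judge is a stub and this is not LangSmith's evaluation API):

```python
import random

def judge(question: str, answer_a: str, answer_b: str) -> str:
    # Placeholder for an LLM judge; in practice you would prompt a strong model
    # to pick the better answer and randomize which side it sees first.
    return random.choice(["A", "B"])

def pairwise_win_rate(questions, system_a, system_b) -> float:
    """Instead of pass/fail, score two versions of the app against each other
    and track the win rate over time to see improvement or regression."""
    wins = 0
    for question in questions:
        a, b = system_a(question), system_b(question)
        if judge(question, a, b) == "A":
            wins += 1
    return wins / len(questions)
```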
And then the third bit is just the human-in-the-loop component. So I think you still want humans to be looking at the results; "want" is maybe the wrong word, because there's a lot of downsides to it. It takes a lot of human time to look at these things, but humans are generally more reliable than having some automated system. If you compare that to software testing, software can test whether two equals two just as well as I can tell that two equals two by looking at it. And so figuring out how to put the humans in the loop for this testing process is also really interesting and unique and new. I think I have a couple of very general questions for you. Cool. I love general questions. Who do you admire most in the world of AI? That's a good question. I mean, I think what OpenAI has done over the past year and a half is incredibly impressive.
So I think Sam, but also everyone there; I think across the board, I have a lot of admiration for the way they do things. I think Logan, when he was there, did a fantastic job of bringing these concepts to folks. Sam obviously deserves a ton of credit for a lot of the things that have happened there. Lesser known, but David Dohan is a researcher that I think is absolutely incredible. He did some early model cascades papers, and I chatted with him super early on in LangChain, and he's been incredibly influential in the way that I think about things. And so I have a lot of admiration for the way that he does things. Separately, I'm touching on all different possible answers for this, but I think Zuckerberg and Facebook, I think they're crushing it with Llama and a lot of the open source. And I also think, as a CEO and as a leader, the way that he and the company have embraced that has been incredibly impressive to watch. So I have a lot of admiration for that. Speaking of which, is there a CEO or a leader who you try to model yourself after, or who you've learned a lot about your own leadership style from?
It's a good question. I definitely think of myself as more of a product-centric CEO, and so Zuckerberg has been interesting to watch there. Brian Chesky, I listened to him talk at the Sequoia Base Camp last year and really admired the way that he thought about product and about company building. And so Brian's usually my go-to answer for that, but I can't say I've gone incredibly into the depths of everything that he's done. If you have one piece of advice for current or aspiring founders trying to build in AI, what would your one piece of advice for them be? Just build and just try building stuff. It's so early on; there's so much to be built. Yeah, GPT-5 is going to come out and it'll probably make some of the things you did irrelevant, but you're going to learn so much along the way. And this is, I strongly, strongly believe, a transformative technology. And so the more that you learn about it, the better.
One quick anecdote on that, just because I got a kick out of that answer. I remember at our first AI Ascent in early 2023, when we were just starting to get to know you better, you were sitting there pushing code the entire day. People were up on stage speaking and you were listening, but you were sitting there pushing code the entire day. And so when the advice is just build, you're clearly somebody who takes your own advice, I think. Well, that was the day OpenAI released plugins or something, and so there was a lot of scrambling to be done. And I don't think I did that at this year's AI Ascent, so I'm sorry to disappoint and regress in that capacity. Thank you for joining us. We really appreciate it.
Technology, Artificial Intelligence, Innovation, Agent Building, LangChain, AI Agents, Sequoia Capital