ENSPIRING.ai: Unleashing the Power of Spatial Intelligence in a 3D World

The video highlights the fundamental importance of visual-spatial intelligence, which is as essential as language. Recent advancements in artificial intelligence (AI) in compute technology, data understanding, and algorithmic innovation have ushered in a new era of possibilities. The discussion emphasizes the evolution of AI from its early stages through the AI winter and into the current flourishing of modern AI, including the rise of consumer AI companies and rapid technological growth.

Among the highlights are the academic journeys of prominent figures in AI, starting with their early interests and the contributions that have shaped the field today. They discuss pioneering work in language models, deep learning, and the eventual ascent of generative artificial intelligence (GenAI). The narrative includes personal insights and significant milestones like the ImageNet epoch, which made modern computer vision viable.

Main takeaways from the video:

💡 The importance of compute power and data-driven models in AI development.
💡 The transition from supervised learning to sophisticated generative models.
💡 The potential for spatial intelligence to enhance our interaction with both virtual and physical worlds.
💡 New media forms and applications for 3D world generation that AI can enable.
💡 Collaborative efforts and influential work among leading scientists in bridging the gap between academic research and practical AI applications.

Key Vocabularies and Common Phrases:

1. Cambrian [ˈkæm.bri.ən] - (adj.) - Relating to the earliest period in the Paleozoic era, marked by the development of diverse and complex life forms.

We're in the middle of a Cambrian explosion in almost a literal sense.

2. Discriminative [dɪˈskrɪmɪˌneɪtɪv] - (adj.) - Characterized by the tendency to recognize and understand differences.

We saw the beginnings of Discriminative computer vision, where you could take pictures and understand what's in them in a lot of different ways.

3. Generalization [ˌdʒen.rə.ləˈzeɪ.ʃən] - (n.) - The ability to apply learned knowledge or skills to new situations.

There was an overlooked element of AI that is mathematically important to drive Generalization.

4. Ontology [ɒnˈtɒl.ə.dʒi] - (n.) - A branch of metaphysics dealing with the nature of being, or a set of concepts within a domain and their relationships.

You would have to come up with this Ontology of concepts that we want to discover.

5. Algorithmics [ˌæl.ɡəˈrɪð.mɪks] - (n.) - The study or practice of discovering and applying algorithms.

So I think there's actually two epochs that to me, feel quite distinct in the Algorithmics here.

6. Reconstruction [ˌriː.kənˈstrʌk.ʃən] - (n.) - The process of building or forming something again that has been damaged or destroyed.

We actually have a long history in an area of research called Reconstruction, 3D Reconstruction.

7. Deluge [ˈdeɪ.luːdʒ] - (n.) - A large amount of something, especially something unpleasant.

The two GPUs and the Deluge of data.

8. Multidisciplinary [ˌmʌl.tiˈdɪs.ə.plɪˌner.i] - (adj.) - Involving two or more academic disciplines.

Working with the smartest young people and Multidisciplinary talents.

9. Continuum [kənˈtɪn.juəm] - (n.) - A continuous sequence in which adjacent elements are not perceptibly different from each other, but the extremes are quite distinct.

For people like us, it's already happening in the Continuum.

10. Affordance [əˈfɔː.dəns] - (n.) - A quality or property of an object that defines its possible uses or makes clear how it can or should be used.

One is the underlying representation, and two is kind of the user-facing Affordances that you have.

Unleashing the Power of Spatial Intelligence in a 3D World

Visual-spatial intelligence is so fundamental, it's as fundamental as language. We've got these ingredients: compute, a deeper understanding of data, and some advancement of algorithms. We are in the right moment to really make a bet, to focus, and just unlock that.

Over the last two years, we've seen this kind of massive rush of consumer AI companies and technology, and it's been quite wild. But you've been doing this now for decades, so maybe walk through a little bit of how we got here, kind of your key contributions and insights along the way. So it is a very exciting moment, right? Just zooming back, AI is in a very exciting moment.

I personally have been doing this for two decades plus, and we have come out of the last AI winter. We have seen the birth of modern AI. Then we have seen deep learning taking off, showing us possibilities, like playing chess. But then we're starting to see the deepening of the technology and the industry adoption of some of the earlier possibilities, like language models.

And now I think we're in the middle of a Cambrian explosion in almost a literal sense, because now, in addition to text, you're seeing pixels, video, audio, all coming out with possible AI applications and models. So it's a very exciting moment.

I know you both so well, and many people know you both because you're so prominent in the field, but not everybody grew up in AI, so maybe it's worth just going through your quick backgrounds to level set the audience. Yeah, sure. So I first got into AI at the end of my undergrad. I did math and computer science for undergrad at Caltech. That was awesome.

But then towards the end of that, there was this paper that came out that was at the time a very famous paper, the cat paper from Honglak Lee and Andrew Ng and others who were at Google Brain at the time. And that was the first time that I came across this concept of deep learning. And to me, it just felt like this amazing technology.

And that was the first time that I came across this recipe that would come to define the next more-than-decade of my life: you can take these amazingly powerful learning algorithms that are very generic, couple them with very large amounts of compute, couple them with very large amounts of data, and magic things start to happen when you combine those ingredients.

So I first came across that idea around 2011, 2012-ish, and I just thought, oh my God, this is going to be what I want to do. So it was obvious you had to go to grad school to do this stuff, and then I saw that Fei-Fei was at Stanford, one of the few people in the world at the time who was kind of on that train.

And that was just an amazing time to be in deep learning, and computer vision specifically, because that was really the era when this went from the first nascent bits of technology that were just starting to work to something that really got developed and spread across a ton of different applications.

So then, over that time, we saw the beginnings of language modeling. We saw the beginnings of Discriminative computer vision, where you could take pictures and understand what's in them in a lot of different ways. We also saw some of the early bits of what we would now call GenAI: generative modeling, generating images, generating text.

A lot of those core algorithmic pieces actually got figured out by the academic community. During my PhD years, there was a time I would just wake up every morning and check the new papers on arXiv. It was like unwrapping presents on Christmas: every day, you know there's going to be some amazing new discovery, some amazing new application or algorithm somewhere in the world.

What happened is, in the last two years, everyone else in the world kind of came to the same realization, using AI to get new Christmas presents every day. But I think for those of us that have been in the field for a decade or more, we've sort of had that experience for a very long time.

Obviously, I'm much older than Justin. I came to AI through a different angle, which is from physics, because my undergraduate background was in physics. But physics is the kind of discipline that teaches you to ask audacious questions and to think about the remaining mysteries of the world.

Of course, in physics, that's the atomic world, the universe, and all that. But somehow that kind of training and thinking got me into the audacious question that really captured my own imagination, which is intelligence. So I did my PhD in AI and computational neuroscience at Caltech. So Justin and I actually didn't overlap, but we share the same alma mater, Caltech, and the same advisor.

Yes, same advisor: your undergraduate advisor, my PhD advisor, Pietro Perona. And my PhD time, which is similar to your PhD time, was when AI was still in the winter in the public eye, but it was not in the winter in my eye, because it's that pre-spring hibernation; there was so much life. Machine learning and statistical modeling were really gaining power.

I think I was one of the native generation in machine learning and AI, whereas I look at Justin's generation as the native deep learning generation. So machine learning was the precursor of deep learning.

And we were experimenting with all kinds of models. But one thing came out at the end of my PhD and the beginning of my assistant professor time: there was an overlooked element of AI that is mathematically important to drive Generalization, but the whole field was not thinking that way. It was data, because we were thinking about the intricacy of Bayesian models or whatever, kernel methods and all that.

But what was fundamental, which my students and my lab realized probably earlier than most people, is that if you let data drive models, you can unleash a kind of power that we hadn't seen before. And that was really the reason we went on a pretty crazy bet on ImageNet: forget about any scale we were seeing at that point, which was thousands of data points.

The NLP community had their own datasets; I remember the UC Irvine dataset or some dataset. In NLP, it was small. The computer vision community had its datasets, but all on the order of thousands or tens of thousands. We were like, we need to drive it to Internet scale. And luckily, it was also the coming of age of the Internet.

So we were riding that wave, and that's when I came to Stanford. So these epochs are what we often talk about. Like, ImageNet is clearly the epoch that created, or at least maybe made popular and viable, computer vision.

In the GenAI wave, we talk about two kinds of core unlocks. One is the transformers paper, which is attention. We talk about stable diffusion. Is that a fair way to think about this: that there are these two algorithmic unlocks that came from academia or Google, and that's where everything comes from?

Or has it been more deliberate, or have there been other kinds of big unlocks that brought us here that we don't talk about as much? Yeah, I think the big unlock is compute. I know the story of AI is often the story of compute, but no matter how much people talk about it, I think people underestimate it. Right. And the amount of growth that we've seen in computational power over the last decade is astounding.

The first paper that's really credited with the breakthrough moment in computer vision for deep learning was AlexNet, a 2012 paper where a deep neural network did really well on the ImageNet challenge and just blew away all the other algorithms that Fei-Fei had been working on, the types of algorithms you'd been working on in grad school. That AlexNet was a 60-million-parameter deep neural network.

And it was trained for six days on two GTX 580s, which was the top consumer card at the time, and which came out in 2010. So I was looking at some numbers last night, just to put this in perspective. The newest, the latest and greatest from Nvidia, is the GB200. Do either of you want to guess how much raw compute factor we have between the GTX 580 and the GB200? Shoot.

No. What? Go for it. It's in the thousands. So I ran the numbers last night: that training run of six days on two GTX 580s, if you scale it, comes out to just under five minutes on a single GB200.
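As a rough back-of-envelope check of that factor, using only the numbers quoted in the conversation (a sketch in Python, not an independent benchmark):

# Numbers quoted above: six days on two GTX 580s vs. about five minutes on one GB200.
days, gpus = 6, 2
gpu_minutes_2012 = days * 24 * 60 * gpus       # ~17,280 GTX 580 GPU-minutes
minutes_on_gb200 = 5                           # "just under five minutes"
speedup = gpu_minutes_2012 / minutes_on_gb200  # ~3,456x, i.e. "in the thousands"
print(f"Implied raw compute factor: ~{speedup:,.0f}x")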

Justin is making a really good point. That 2012 AlexNet paper on the ImageNet challenge used literally a very classic model: the convolutional neural network. That was published in the 1980s; I remember learning about that first paper as a graduate student.

And it also had more or less six, seven layers. So practically, what's the only difference between AlexNet and that convnet? The GPUs. The two GPUs and the Deluge of data. Yeah, well, so that's where I was going to go, which is that I think most people now are familiar with, quote, the bitter lesson.

And what the bitter lesson says is: if you make an algorithm, don't be cute. Just make sure you can take advantage of available compute, because the available compute will show up, right? And so you just need to wait.

On the other hand, there's another narrative, which seems to me to be just as credible, which is that it's actually new data sources that unlock deep learning, right? ImageNet is a great example. A lot of people will say self-attention from transformers is great, but they'll also say this is a way you can exploit human labeling of data, because it's the humans that put the structure in the sentences. And if you look at CLIP, they'll say, well, we're using the Internet to actually have humans use the alt tag to label images, right?

And so that's a story of data, not a story of compute. So is the answer just both, or is one more than the other? I think it's both, but you're hitting another really good point. So I think there's actually two epochs that to me, feel quite distinct in the Algorithmics here.

So the ImageNet era is actually the era of supervised learning. In the era of supervised learning, you have a lot of data, but you don't know how to use the data on its own. The expectation of ImageNet and other datasets of that time period was that we're going to get a lot of images, but we need people to label every one of them.

For all of the training data that we're going to train on, a person, a human labeler, has looked at every one and said something about that image. And the big algorithmic unlock was that we learned how to train on things that don't require human-labeled data. As the naive person in the room who doesn't have an AI background, it seems to me that if you're training on human data, the humans have labeled it; it's just not explicit. I knew you were gonna say that, Marty. I knew that.

Yes, philosophically that's a really important question, but that actually is more true in language than in pixels. Fair enough. Yeah, right. Yeah, yeah, yeah. But I do think it's an important distinction, because CLIP really is human labeled. I think attention is too.

Humans have figured out the relationships of things, and then you learn them. So it is human labeled, just more implicit than explicit. Yeah, it's still human labeled. The distinction is that in the supervised learning era, our learning tasks were much more constrained.

So you would have to come up with this Ontology of concepts that we want to discover. Right. If you were doing ImageNet, like Fei-Fei and your students at the time, you spent a lot of time thinking about which thousand categories should be in the ImageNet challenge. Other datasets of that time, like the COCO dataset for object detection, thought really hard about which 80 categories to put in there.

So let's walk toward GenAI. When I was doing my PhD, before you came, I took machine learning from Andrew Ng, and then I took Bayesian something-very-complicated from Daphne Koller. And it was very complicated for me. A lot of that was just predictive modeling.

Yeah. And then like, I remember the whole kind of vision stuff that you unlocked. But then the generative stuff has shown up, like I would say in the last four years, which is to me very different. Like, you're not identifying objects, you're not predicting something, you're generating something.

And so maybe walk through the key unlocks that got us there, and then why it's different, and whether we should think about it differently. And is it part of a Continuum, or is it not? It is so interesting. Even during my graduate time, generative models were there. We wanted to do generation.

Nobody remembers, but even with letters and numbers, we were trying to do some generation. Geoff Hinton had papers on generation. We were thinking about how to generate. In fact, if you think from a probability distribution point of view, you can mathematically generate; it's just that nothing we generated would ever impress anybody, right?

So this concept of generation, mathematically, theoretically, was there, but nothing worked. So then I do want to call out Justin's PhD. Justin was saying that he got enamored with deep learning, so he came to my lab. Justin's PhD, his entire PhD, is a story, almost a mini story, of the trajectory of the field.

He started his first project in data. I forced him to. He didn't like it. In retrospect, I learned a lot of really useful things. I'm glad you say that now. So we moved Justin to deep learning, and the core problem there was taking images and generating words. Well, actually, it was even more than that.

I think there were three discrete phases here on this trajectory. So the first one was actually matching images and words. Right? Like, we have an image, we have words, and can we say how well they align? So, actually, my first paper, both of my PhD and my first academic publication ever, was image retrieval with scene graphs.

And then we went into generation, taking pixels and generating words, and Justin and Andrej really worked on that, but that was still a very, very lossy way of generating and getting information out of the pixel world. And then in the middle, Justin went off and did a very famous piece of work. And it was the first time that someone made it real time, right? Yeah.

Yeah. So the story there is, there was this paper that came out in 2015, A Neural Algorithm of Artistic Style, led by Leon Gatys. The paper came out, and they showed these real-world photographs that they had converted into a Van Gogh style. And we are kind of used to seeing things like this in 2024, but this was 2015.

So this paper just popped up on arXiv one day, and it blew my mind. I just got this GenAI brainworm in my brain in 2015, and it did something to me, and I thought, oh my God, I need to understand this algorithm. I need to play with it. I need to make my own images into Van Goghs. So then I read the paper, and over a long weekend, I re-implemented the thing and got it to work.

It was actually a very simple algorithm. My implementation was, like, 300 lines of Lua, because at the time, this was pre-PyTorch, we were using Lua Torch. It was a very simple algorithm, but it was slow, right? It was an optimization-based thing: for every image you want to generate, you need to run this optimization loop, run this gradient descent loop.
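For readers curious what that per-image optimization loop looks like, below is a minimal PyTorch sketch of the Gatys-style algorithm being described. The layer choices, weights, and helper names are illustrative assumptions, not the original 300-line Lua Torch implementation; the content and style inputs are assumed to be preprocessed (1, 3, H, W) image tensors on the right device.

import torch
import torch.nn.functional as F
from torch import nn
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for layer in vgg:
    if isinstance(layer, nn.ReLU):
        layer.inplace = False  # keep intermediate activations intact

CONTENT_LAYER = 21                 # conv4_2 in torchvision's VGG-19 feature stack
STYLE_LAYERS = (0, 5, 10, 19, 28)  # conv1_1 .. conv5_1

def features(img):
    # Collect activations at the chosen layers for one (1, 3, H, W) image tensor.
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(feat):
    # Gram matrix: the style statistic used by Gatys et al.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=300, style_weight=1e6):
    # Per-image optimization: gradient descent on the pixels themselves,
    # which is why every output image needs its own optimization loop.
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats, s_feats = features(content), features(style)
    s_grams = {i: gram(s_feats[i]) for i in STYLE_LAYERS}
    for _ in range(steps):
        opt.zero_grad()
        t_feats = features(target)
        c_loss = F.mse_loss(t_feats[CONTENT_LAYER], c_feats[CONTENT_LAYER])
        s_loss = sum(F.mse_loss(gram(t_feats[i]), s_grams[i]) for i in STYLE_LAYERS)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach()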

The images were beautiful, but I just wanted it to be faster, and Justin just did it. And it was actually, I think, your first taste of an academic work having an industry impact.

A bunch of people had seen this artistic style transfer stuff at the time, and me and a couple others at the same time came up with different ways to speed this up, but mine was the one that got a lot of traction. Right. So I was very proud of Justin.

But there's one more thing I was very proud of Justin for. To connect to GenAI: before the world understood GenAI, Justin's last piece of work in his PhD, which I knew about because I was forcing him to do it, and that one was funny, was actually inputting language and getting a whole picture out. It's one of the first GenAI works.

It used GANs, which were so hard to use, but the problem was that we were not ready to use a natural piece of language. So Justin, you heard, worked on scene graphs, so we had to put in a scene graph language structure: the sheep, the grass, the sky, in a graph way. It literally was one of our photos. And then he and another very good master's student, Agrim, got that GAN to work.

So you can see: from data, to matching, to style transfer, to generative images. You asked if this is an abrupt change. For people like us, it's already happening in the Continuum, but for the world, the results are more abrupt.

So I read your book, and for those that are listening, it's a phenomenal book; I really recommend you read it. And it seems, and I'm talking to you, Fei-Fei, that for a long time a lot of your research and your direction has been toward kind of spatial stuff and pixel stuff and intelligence. And now you're doing World Labs, and it's around spatial intelligence.

So maybe talk through, you know, has this been part of a long journey for you? Why did you decide to do it now? Is it a technical unlock? Is it a personal unlock? Just kind of move us from that milieu of AI research to World Labs. Sure.

For me, it is both personal and intellectual. Right. You talked about my book; my entire intellectual journey is really this passion to seek north stars, while also believing that those north stars are critically important for the advancement of our field.

So at the beginning, I remember after graduate school, I thought my north star was telling stories of images, because for me, that's such an important piece of visual intelligence, part of what you'd call AI or AGI. But when Justin and Andrej did that, I was like, oh my God, that was my life's dream. What do I do next? It came a lot faster; I thought it would take 100 years to do that.

But visual intelligence is my passion, because I do believe that for every intelligent being, like people or robots or some other form, knowing how to see the world, reason about it, and interact in it, whether you're navigating or manipulating or making things, you can even build civilization upon it. Visual-spatial intelligence is so fundamental, it's as fundamental as language, possibly more ancient and more fundamental in certain ways. So it's very natural for me that World Labs' north star is to unlock spatial intelligence.

The moment, to me, is right to do it. Like Justin was saying: compute. We've got these ingredients. We've got compute. We've got a much deeper understanding of data, way deeper than in the ImageNet days; compared to those days, we're so much more sophisticated. And we've got some advancement of algorithms, including from co-founders at World Labs like Ben Mildenhall and Christoph Lassner, who were at the cutting edge of NeRF. We are in the right moment to really make a bet, to focus, and just unlock that.

So I just want to clarify for folks that are listening: you're starting this company, World Labs, and spatial intelligence is how you're generally describing the problem you're solving. Can you maybe try to crisply describe what that means? Yeah.

So spatial intelligence is about machines' ability to perceive, reason, and act in 3D, in 3D space and time; to understand how objects and events are positioned in 3D space and time, how interactions in the world can affect those 4D positions over space-time; and to both perceive, reason about, generate, and interact with it, to really take the machine out of the mainframe, or out of the data center, and put it out into the world, understanding the world with all of its richness.

So, to be very clear, are we talking about the physical world, or are we just talking about an abstract notion of a world? I think it can be both. I think it can be both, and that encompasses our vision long term. Even if you're generating worlds, even if you're generating content, doing that positioned in 3D, with 3D, has a lot of benefits.

Or if you're recognizing the real world, being able to put 3D understanding into the real world as well is part of it. Great. So, just for everybody listening, the two other co-founders, Ben Mildenhall and Christoph Lassner, are absolute legends in the field, at the same level. These four decided to come out and do this company now. And so I'm trying to dig into why now is the right time.

Yeah, I mean, this is again part of a longer evolution for me. But really, after my PhD, when I was wanting to develop into my own independent researcher for my later career, I was just thinking: what are the big problems in AI and computer vision? And the conclusion that I came to around that time was that the previous decade had mostly been about understanding data that already exists, but the next decade was going to be about understanding new data.

And if we think about that, the data that already exists was all of the images and videos that maybe existed on the web already, and the next decade was going to be about understanding new data. Right? People have smartphones, smartphones have cameras, those cameras have new sensors, and those cameras are positioned in the 3D world. It's not just that you're going to get a bag of pixels from the Internet, know nothing about it, and try to say if it's a cat or a dog.

We want to treat these images as universal sensors to the physical world. And how can we use that to understand the structure of the world, either in physical spaces or generative spaces?

So I made a pretty big pivot post-PhD into 3D computer vision, predicting 3D shapes of objects with some of my colleagues at FAIR at the time. Then later, I got really enamored with this idea of learning 3D structure through 2D, because, as we talk about data a lot, 3D data is hard to get on its own. But there's a very strong mathematical connection here: our 2D images are projections of a 3D world, and there's a lot of mathematical structure we can take advantage of.

So even if you have a lot of 2D data, a lot of people have done amazing work to figure out how you can back out the 3D structure of the world from large quantities of 2D observations. And then in 2020, you asked about big breakthrough moments, there was a really big breakthrough moment from our co-founder Ben Mildenhall, with his paper on NeRF, Neural Radiance Fields.
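For readers outside the field, below is a minimal sketch of the volume-rendering idea at the heart of NeRF, in simplified PyTorch. The class and function names are illustrative assumptions, and details such as positional encoding, view dependence, and hierarchical sampling are omitted, so this is a sketch of the technique rather than Mildenhall et al.'s actual implementation.

import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    # Maps a 3D point to a density and an RGB color (view dependence omitted).
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz):
        out = self.mlp(xyz)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_rays(field, origins, directions, near=0.0, far=1.0, n_samples=64):
    # Alpha-composite colors sampled along each ray: the core NeRF rendering step.
    t = torch.linspace(near, far, n_samples, device=origins.device)
    pts = origins[:, None, :] + directions[:, None, :] * t[None, :, None]
    rgb, sigma = field(pts)                       # (rays, samples, 3), (rays, samples)
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])        # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)       # opacity contributed by each sample
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                           # transmittance up to each sample
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)  # predicted pixel colors, (rays, 3)

# Training idea: render pixel colors for rays from known camera poses, compare them
# against the captured 2D photos, and backpropagate, so the 3D field is learned
# from 2D observations alone.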

And that was a very simple, very clear way of backing out 3D structure from 2D observations that just lit a fire under this whole space of 3D computer vision. I think there's another aspect here that maybe people outside the field don't quite understand. That was also a time when large language models were starting to take off.

So a lot of the stuff with language modeling had actually gotten developed in academia. Even during my PhD, I did some early work with Andrej Karpathy on language modeling in 2014. I still remember LSTMs, RNNs, GRUs; this was pre-transformer. But then at some point, around the GPT-2 time, you couldn't really do those kinds of models anymore in academia, because they took way more resourcing.

But there was one really interesting thing about the NeRF approach that Ben came up with: you could train these in a couple of hours on a single GPU. So I think at that time there was a dynamic that happened, which is that a lot of academic researchers ended up focusing on these problems, because there was core algorithmic stuff to figure out and because you could actually do a lot without a ton of compute; you could get state-of-the-art results on a single GPU.

Because of those dynamics, a lot of researchers in academia were moving to think about the core algorithmic ways that we can advance this area. Then I ended up chatting with Fei-Fei more, and I realized that we were actually. She's very convincing. She's very convincing. Well, there's that. But, you know, you were talking about trying to figure out your own independent research trajectory away from your advisor. Well, it turns out we had ended up kind of converging on similar things.

Okay, well, from my end, I wanted to talk to the smartest person, so I called Justin. There's no question about it. I do want to talk about a very interesting technical issue, or a technical story, of pixels that most people working on language don't realize, from the pre-GenAI era in the field of computer vision.

Those of us who work on pixels, we actually have a long history in an area of research called Reconstruction, 3D Reconstruction, which, you know, dates back to the seventies. You can take photos, because humans have two eyes, right? So in general, it starts with stereo photos, and then you try to triangulate the geometry and make a 3D shape out of it.

It is a really, really hard problem to this day. It's not fundamentally solved, because there's correspondence and all that. And so this whole field, which is an older way of thinking about 3D, has been going on and has been making really good progress. But when NeRF happened, in the context of generative methods, in the context of diffusion models, suddenly Reconstruction and generation started to really merge, within really a short period of time.

In the field of computer vision, it's hard to talk about Reconstruction versus generation anymore. We suddenly have a moment where if we see something or if we imagine something, both can converge towards generating it. And that's just, to me, a really important moment for computer vision.

But most people missed it because we're not talking about it as much as LLMs, right? So in pixel space, there's Reconstruction, where you reconstruct a scene that's real, and then if you don't see the scene, you use generative techniques, right? So these things are kind of very similar.

Throughout this entire conversation, you've been talking about language and you've been talking about pixels. So maybe it's a good time to talk about how spatial intelligence and what you're working on contrasts with language approaches, which, of course, are very popular now. Is it complementary? Is it orthogonal? Yeah, I think they're complementary.

I don't mean to be too leading here. Maybe just contrast them, because everybody says: listen, I know OpenAI, and I know multimodal models, and a lot of what you're talking about, they've got pixels and they've got language, so doesn't that kind of do what we want to do with spatial reasoning?

Yeah, so I think to do that, you need to open up the black box a little bit of how these systems work under the hood. So, with language models and the multimodal language models that we're seeing nowadays, their underlying representation under the hood is a one-dimensional representation. We talk about context lengths, we talk about transformers, we talk about sequences, attention.

Fundamentally, their representation of the world is one dimensional. So these things fundamentally operate on a one dimensional sequence of tokens. So this is a very natural representation when you're talking about language, because written text is a one dimensional sequence of discrete letters. So that kind of underlying representation is the thing that led to LLMs.

And with the multimodal LLMs that we're seeing now, you kind of end up shoehorning the other modalities into this underlying representation of a 1D sequence of tokens. Now, when we move to spatial intelligence, it's kind of going the other way, where we're saying that the three-dimensional nature of the world should be front and center in the representation. So from an algorithmic perspective, that opens up the door for us to process data in different ways, to get different kinds of outputs out of it, and to tackle slightly different problems.

So even at a coarse level, you kind of look at it from the outside and say, oh, multimodal LLMs can look at images too. Well, they can, but I think they don't have that fundamental 3D representation at the heart of their approaches. I totally agree with Justin. I think the 1D versus fundamentally 3D representation is one of the most core differentiations.

The other thing is slightly philosophical, but really important, for me at least: language is fundamentally a purely generated signal. There's no language out there; you don't go out into nature and find words written in the sky for you. Whatever data you feed in, you pretty much can just somehow regurgitate, with enough generalizability, the same data out.

And that's language to language. But the 3D world is not like that. There is a 3D world out there that follows the laws of physics, that has its own structures due to materials and many other things. And to fundamentally back that information out, and be able to represent it and be able to generate it, is just fundamentally quite a different problem.

We will be borrowing similar ideas, or useful ideas, from language and LLMs, but this is fundamentally, philosophically, to me, a different problem. Right? So language is 1D and probably a bad representation of the physical world, because it's been generated by humans and it's probably lossy.

There's a whole other modality of generative AI models, which is pixels: 2D images and 2D video. And one could say that if you look at a video, you can see 3D stuff, because you can pan a camera or whatever it is. So how would spatial intelligence be different than, say, 2D video here?

When I think about this, it's useful to disentangle two things. One is the underlying representation, and two is kind of the user-facing Affordances that you have. And here's where you can sometimes get confused, because fundamentally, we see in 2D.

Our retinas are 2D structures in our bodies, and we've got two of them. So fundamentally, our visual system perceives 2D images. But the problem is that depending on what representation you use, there could be different affordances that are more natural or less natural.

So even if, at the end of the day, you might be seeing a 2D image or a 2D video, your brain is perceiving that as a projection of a 3D world. So there's things you might want to do, like move objects around, move the camera around.

In principle, you might be able to do these with a purely 2D representation and model, but it's just not a good fit for the problems that you're asking the model to solve. Modeling the 2D projections of a dynamic 3D world is a function that probably can be modeled.

But by putting a 3D representation into the heart of a model, there's just going to be a better fit between the kind of representation that the model is working on and the kind of tasks that you want that model to do. So our bet is that by threading a little bit more 3D representation under the hood, that will enable better Affordances for users.

And this also goes back to the north star for me. Why is it spatial intelligence? Why is it not flat pixel intelligence? It's because I think the arc of intelligence has to go to what Justin calls Affordances.

If you look at evolution, the arc of intelligence eventually enables animals and humans, especially humans as intelligent animals, to move around the world, interact with it, create civilization, create life, even make a sandwich, whatever you do in this 3D world. Translating that into a piece of technology, that native 3D-ness is fundamentally important for the floodgate of possible applications, even if the serving of some of them looks 2D; it's innately 3D.

To me, I think this is actually a very subtle and incredibly critical point. And so I think it's worth digging into. And a good way to do this is talking about use cases. And so, just to level set this, we're talking about generating a technology, let's call it a model that can do spatial intelligence.

So maybe in the abstract, what might that look like kind of a little bit more concretely? What would be the potential use cases that you could apply this to? So I think there's a couple different kinds of things we imagine these spatially intelligent models able to do over time.

And one that I'm really excited about is world generation. We're all used to something like a text-to-image generator, and we're starting to see text-to-video generators, where you put in a prompt and out pops an amazing image or an amazing two-second clip. But I think you could imagine leveling this up and getting 3D worlds out.

So one thing that we could imagine spatial intelligence helping us with in the future is up-leveling these experiences into 3D, where we're not getting just an image out or just a clip out, but you're getting a full, simulated, but vibrant and interactive 3D world. For gaming? Maybe for gaming, right? Maybe for gaming, maybe for virtual photography, you name it. I think even if you got this to work, there'd be a million applications. For education? Yeah, for education.

I guess one of my points is that, in some sense, this enables a new form of media, right? Because we already have the ability to create virtual, interactive worlds, but it costs hundreds and hundreds of millions of dollars and a ton of development time. And as a result, the place where people drive this technological ability is video games, right?

We do have the ability as a society to create amazingly detailed virtual, interactive worlds that give you amazing experiences, but because it takes so much labor to do so, the only economically viable use of that technology in its form today is games that can be sold for $70 apiece to millions and millions of people to recoup the investment.

If we had the ability to create these same virtual, interactive, vibrant 3D worlds, you could see a lot of other applications, because if you bring down that cost of producing that kind of content, then people are going to use it for other things. What if you could have an interactive, sort of personalized 3D experience that's as good, as rich, and as detailed as one of these AAA video games that cost hundreds of millions of dollars to produce, but catered to a very niche thing that maybe only a couple of people would want? That's not a particular product or a particular roadmap.

But I think that's a vision of a new kind of media that would be enabled by spatial intelligence in the generative realm. If I think about a world, I actually think about things that are not just scene generation. I think about stuff like movement and physics. And so, in the limit, is that included? Absolutely. And then the second one is, if I'm interacting with it, are there semantics?

And what I mean by that is: if I open a book, are there pages, and are there words in them, and do they mean something? Are we talking about a full-depth experience, or are we talking about kind of a static scene? I think we'll see a progression of this technology over time. This is really hard stuff to build. So I think the static problem is a little bit easier. But in the limit, I think we want this to be fully dynamic, fully interactable, all the things that you just said. I mean, that's the definition of spatial intelligence.

Yeah. So there is going to be a progression. We'll start with more static, but everything you've said is on the roadmap of spatial intelligence. I mean, this is kind of in the name of the company itself, World Labs. The 'world' is about building and understanding worlds. And this is actually a little bit of inside baseball.

I realized after we told people the name that they don't always get it, because in computer vision and Reconstruction and generation, we often make a distinction, or a delineation, about the kinds of things you can do. And kind of the first level is objects, right? Like a microphone, a cup, a chair. These are discrete things in the world.

And a lot of the ImageNet-style stuff that Fei-Fei worked on was about recognizing objects in the world. Then, leveling up, the next level beyond objects I think of as scenes; scenes are compositions of objects. Like now we've got this recording studio with a table and microphones and people in chairs.

It's some composition of objects. But then we envision worlds as a step beyond scenes. Scenes are kind of individual things, but we want to break the boundaries: go outside the door, get up from the table, walk out the door, walk down the street, and see the cars buzzing past and the leaves on the trees moving, and be able to interact with those things.

Another thing that's really exciting, just to mention the words new media: with this technology, the boundary between the real world and the virtual, imagined world, or augmented world, or predicted world, is all blurry.

The real world is 3D, right? So in the digital world, you have to have a 3D representation to even blend with the real world. You cannot have a 2D representation and be able to interface with the real 3D world in an effective way. This unlocks it. So the use cases can be quite limitless because of this.

Right? So the first use case that Justin was talking about would be the generation of a virtual world for any number of use cases. The one that you're just alluding to would be more of an augmented reality, right? Yes. Just around the time World Labs was being formed, the Vision Pro was released by Apple.

And they use the term spatial computing. They almost stole our name. But we're spatial intelligence, so spatial computing needs spatial intelligence. That's exactly right. So we don't know what hardware form it will take: goggles, glasses, contact lenses.

But that interface between the true real world and what you can do on top of it, whether it's to help augment your capability to work on a piece of machinery and fix your car even if you are not a trained mechanic, or just being Pokemon Go for entertainment, suddenly this piece of technology is going to be, basically, the operating system for AR, VR, and mixed reality.

In the limit, what does an AR device need to do? It's this thing that's always on. It's with you, it's looking out into the world, so it needs to understand the stuff that you're seeing and maybe help you out with tasks in your daily life. But I'm also really excited about this blend between virtual and physical; that becomes really critical.

If you have the ability to understand what's around you in real time, in perfect 3D, then it actually starts to deprecate large parts of the real world as well. Like right now, how many differently sized screens do we all own for different use cases? Too many. You've got your phone, you've got your iPad, you've got your computer monitor, you've got your TV, you've got your watch.

These are all basically differently sized screens because they need to present information to you in different contexts and in different positions. But if you've got the ability to seamlessly blend virtual content with the physical world, it kind of deprecates the need for all of those. It just, ideally, seamlessly blends the information that you need to know in the moment with the right mechanism of giving you that information.

Another huge case of being able to blend the digital virtual world with the 3D physical world is for AI agents to be able to do things in the physical world. Humans use these mixed reality devices to do things; like I said, I don't know how to fix a car, but if I have to, I put on these goggles or glasses and suddenly I'm guided to do that.

But there are other types of agents, namely robots, any kind of robot, not just humanoids. And their interface, by definition, is the 3D world, but their compute, their brain, by definition, is the digital world. So what connects the learning to the behaving, between a robot's brain and the real world? It has to be spatial intelligence.

So you've talked about virtual worlds, you've talked about more of an augmented reality, and now you've just talked about the purely physical world, basically, which would be used for robotics. For any company, that would be like a very large charter, especially if you're going to get into each one of these different areas.

How do you think about the idea of deep tech versus any of these specific application areas? We see ourselves as a deep tech company, as a platform company that provides models that can serve different use cases.

Of these three, is there any one that you think is kind of more natural early on that people can kind of expect the company to lean into? I think it suffices to say that devices are not totally ready.

Actually, I got my first VR headset in grad school, and that was one of these transformative technology experiences. You put it on and you're like, oh my God, this is crazy. And I think a lot of people have that experience the first time they use VR. So I've been excited about this space for a long time, and I love the Vision Pro.

Like, I stayed up late to order one of the first ones the first day it came out. But I think the reality is it's just not there yet as a platform for mass-market appeal. So very likely, as a company, we'll move into a market that's more ready.

I think there can sometimes be simplicity in generality, right? We have this notion of being a deep tech company: we believe that there are some underlying fundamental problems that need to be solved really well and that, if solved really well, can apply to a lot of different domains. We really view the long arc of the company as building and realizing the dreams of spatial intelligence writ large.

So this is a lot of technology to build, it seems to me. Yeah, I think it's a really hard problem. I think sometimes people who are not directly in the AI space just see AI as one undifferentiated mass of talent.

And for those of us who have been here longer, you realize that there are a lot of different kinds of talent that need to come together to build anything in AI, in particular this one. We've talked a little bit about the data problem. We've talked a little bit about some of the algorithms that I worked on during my PhD. But there's a lot of other stuff we need to do this, too. You need really high-quality, large-scale engineering. You need a really deep understanding of the 3D world.

There's actually a lot of connections with computer graphics, because they've been attacking a lot of the same problems from the opposite direction. So when we think about team construction, we think about how we find expert, absolute top-of-the-world, best-in-the-world people in each of these different subdomains that are necessary to build this really hard thing.

When I thought about how to form the best founding team for World Labs, it had to start with a group of phenomenal, Multidisciplinary founders. And of course, Justin was natural for me. Justin, cover your ears: he is one of my best students and one of the smartest technologists.

But there are two other people I had known by reputation, one of whom Justin had worked with, that I was drooling over. Right. One is Ben Mildenhall; we talked about his seminal work on NeRF.

But another person is Christoph Lassner, who has a strong reputation in the computer graphics community. And especially, he had the foresight to work on a precursor of the Gaussian splat representation for 3D modeling five years before Gaussian splatting took off.

And when we heard about, when we talked about, the potential possibility of working with Christoph Lassner, Justin just jumped out of his chair. Ben and Christoph are legends.

And maybe just quickly talk about how you thought about the build-out of the rest of the team, because, again, there's a lot to build here and a lot to work on, not just in AI or graphics, but systems and so forth.

Yeah. What I'm personally most proud of so far is the formidable team. I've had the privilege of working with the smartest young people in my entire career, right from the top universities, being a professor at Stanford. But the kind of talent that we put together here at World Labs is just phenomenal.

I've never seen this concentration, and I think the biggest differentiating element here is that we're believers in spatial intelligence. All of the Multidisciplinary talents, whether it's systems engineering, machine learning infra, generative modeling, data, or graphics, all of us, whether it's our personal research journey, technology journey, or even personal hobby, believe that spatial intelligence has to happen at this moment with this group of people, and that's how we really formed our founding team. And that focus of energy and talent is really just humbling to me. I just love it.

So, I know you've been guided by a north star. Something about north stars is that you can't actually reach them, because they're in the sky, but they're a great way to have guidance. So, how will you know when you've accomplished what you've set out to accomplish?

Or is this a lifelong thing that's going to continue kind of infinitely? First of all, there's real north stars and virtual north stars. Sometimes you can reach virtual north stars. Fair enough.

Good enough, in the world model. Exactly. Like I said, I thought one of our north stars, storytelling of images, would take 100 years. And Justin and Andrej, in my opinion, solved it for me.

So we could get to our north star. But I think, for me, it's when so many people and so many businesses are using our models to unlock their needs for spatial intelligence; that's the moment I'll know we have reached a major milestone.

Actual deployment, actual impact. Yeah, I don't think we're ever going to get there. I think this is such a fundamental thing. The universe is a giant, evolving four-dimensional structure, and spatial intelligence writ large is just understanding that in all of its depths and figuring out all the applications of that.

So I think we have a particular set of ideas in mind today, but I think this journey is going to take us places that we can't even imagine right now. The magic of good technology is that technology opens up more possibilities and unknowns. So we will be pushing, and the possibilities will be expanding. Brilliant.

Thank you, Justin. Thank you, Fei-Fei. This was fantastic. Thank you, Martin. Thank you, Martin.

Artificial Intelligence, Technology, Innovation, Spatial Intelligence, Computer Vision, Fei-Fei Li