ENSPIRING.ai: Peter Thiel Just Blew Joe Rogan’s Mind

The video explores the evolution and future implications of Artificial Intelligence (AI) in our society. It emphasizes the shift from the theoretical debates about superintelligence and surveillance in the 2010s to the revolutionary achievement of AI systems like ChatGPT passing the Turing test. The discussion highlights the profound significance of this milestone in AI research, likening its impact to that of the Internet's rise in the late 1990s, and delves into its potential societal reshaping in economic, cultural, and political dimensions.

This is a crucial watch for understanding the trajectory of AI and its ongoing influence on modern civilization. The discourse suggests that despite technological stagnation in the physical world, advancements in the digital realm are monumental and impact the economy and daily life significantly. There is also a discussion of the notion of "artificial general intelligence" and its ambiguous definition, challenging its relevance compared to existing AI achievements.

Main takeaways from the video:

💡
AI has reached a long-sought milestone, with systems now passing the Turing test, marking the depth of the advance.
💡
The progress in AI may be analogous to the evolution of the Internet, suggesting transformative potential over the coming decades.
💡
The societal focus is shifting to how AI applications can be integrated, impacting various aspects of life including economics and politics.
💡
There is an ongoing debate about the true measure of scientific progress between digital innovations and progress in physical technologies.
💡
The discussion of "climate science" takes a critical view of how certain scientific disciplines may diverge from traditional scientific debate and rigor.

Key Vocabularies and Common Phrases:

1. rebuttal [rɪˈbʌtəl] - (noun) - A counter-statement or contradiction; an argument that opposes another. - Synonyms: (refutation, contradiction, counterargument)

And then there was the CCP, Chinese communist rebuttal...

2. surveillance [sərˈveɪləns] - (noun) - Close observation, especially of a person or group, often by the government or armed forces. - Synonyms: (monitoring, observation, watch)

It was just surveillance, facial recognition technology.

3. approximation [əˌprɒksɪˈmeɪʃən] - (noun) - An estimate or near calculation, not exact but close enough to be useful. - Synonyms: (estimate, guess, rough calculation)

And it's a somewhat fuzzy test because obviously you have an expert and a non-expert. Does it fool you all the time, or some of the time? How good is it? But to first approximation...

4. suppression [səˈprɛʃən] - (noun) - The action of putting an end to something forcibly or suppressing something. - Synonyms: (restriction, curtailment, restraint)

...like psychological suppression people had where they were not thinking.

5. repression [rɪˈprɛʃən] - (noun) - The action of subduing someone or something by force; mental defense mechanism. - Synonyms: (control, restraint, suppression)

So I'm tempted to give almost a psychological repression theory of the 2010 debates.

6. ambiguous [æmˈbɪɡjuəs] - (adjective) - Open to more than one interpretation; not having a clear meaning. - Synonyms: (uncertain, equivocal, unclear)

...a hopelessly vague concept, where general intelligence could be just a generally smart human being. So is that just a person with an IQ of 130, or is it superintelligence? It's an ambiguous thing.

7. rearrange [ˌriːəˈreɪndʒ] - (verb) - To change the order or position of something. - Synonyms: (reorganize, reshuffle, alter)

It's going to rearrange the economic, cultural, political structure of our society in extremely dramatic ways.

8. underestimate [ʌndərˈɛstɪmeɪt] - (verb) - To assess someone or something as having less ability than they really have. - Synonyms: (undervalue, underrate, belittle)

I think bitcoin was a big invention, whether it was good or bad, but it was a pretty big deal, and it was systematically underestimated for at least the first 10 or 11 years

9. stagnation [stæɡˈneɪʃən] - (noun) - The state of not flowing or moving; lack of activity, growth, or development. - Synonyms: (inaction, inactivity, dormancy)

In an era of relative tech stagnation is that when something does happen, we don't even know how to process it.

10. Malthusian [mælˈθjuːziən] - (adjective) - Relating to the theories of Thomas Malthus, suggesting that population growth will outpace agricultural production, leading to societal issues. - Synonyms: (pessimistic, neo-Malthusian, overpopulation)

And maybe there's some sort of a Malthusian calculus that's more about resources than about pollution.

Peter Thiel Just Blew Joe Rogan’s Mind

When you look to the future and you try to guess how all this is going to turn out with AI, what do you think we're looking at over the next five years? If you think about the AI discussion in the 2010s, pre-OpenAI, pre-ChatGPT, before the revolution of the last two years, the discussion was maybe anchored on two visions of what AI meant. One was Nick Bostrom, the Oxford professor who wrote the book Superintelligence in 2014. And it was basically: AI was going to be this super duper intelligent thing, godlike intelligence, way smarter than any human being.

And then there was the CCP, Chinese communist rebuttal, the Kai-Fu Lee book from 2018, AI Superpowers. It defined AI as something fairly low-tech. It was just surveillance, facial recognition technology. We would just have this sort of totalitarian, Stalinist monitoring. It didn't require very much innovation. It just required that you apply things. And basically the subtext was: China is going to win because we have no ethical qualms in China about applying this sort of basic machine learning to measuring or controlling the population. And those were, say, two extreme competing visions of what AI would mean, and they were maybe the anchors of the AI debate.

And then what happened, in some sense, with ChatGPT in late '22, early '23, was that the achievement you got was not superintelligence. It was not just surveillance tech. You actually got to the holy grail of what people would have defined AI as from 1950 to 2010. For the previous 60 years before the 2010s, people had always said the definition of AI is passing the Turing test.

The Turing test basically means that the computer can fool you into thinking that it's a human being. And it's a somewhat fuzzy test, because obviously you have an expert and a non-expert. Does it fool you all the time, or some of the time? How good is it? But to a first approximation, the Turing test, we weren't even close to passing it in 2021. And then ChatGPT basically passes the Turing test, at least for, say, an IQ-100 average person.

It's passed the Turing test. And that was the holy grail of AI research for the previous 60 years. And so there's probably some psychological or sociological history where you can say that this weird debate between Bostrom about superintelligence and Kai-Fu Lee about surveillance tech was almost like a psychological suppression people had, where they were not thinking about it. They lost track of the Turing test, of the holy grail, because it was about to happen, and it was such a significant, such an important thing that you didn't even want to think about it. So I'm tempted to give almost a psychological repression theory of the 2010s debates.

But be that as it may, the Turing test gets passed, and that's an extraordinary achievement. And then where does it go from here? There probably are ways you can refine these things, and it's still going to take a long time to apply it. There's this AGI discussion, where we get artificial general intelligence, which is a hopelessly vague concept. General intelligence could be just a generally smart human being. So is that just a person with an IQ of 130, or is it superintelligence? Is it godlike intelligence? It's an ambiguous thing. But I keep thinking that maybe the AGI question is less important than passing the Turing test.

If we got AGI, if we got, let's say, superintelligence, that would be interesting to Mr. God, because you'd have competition for being God. But surely the Turing test is more important for us humans, because it's either a complement or a substitute to humans. It's going to rearrange the economic, cultural, political structure of our society in extremely dramatic ways. And I think maybe what's already happened is much more important than anything else that's going to be done, and then it's just going to be a long ways in applying it.

The analogy I'm always tempted to go to is that maybe AI is like the Internet in 1999, where on one level it's clear the Internet's going to be big and get a lot bigger, it's going to dominate the economy, it's going to rearrange society in the 21st century. And at the same time, it was a complete bubble, and people had no idea how the business models worked. Almost everything blew up. It didn't take that long in the scheme of things. It took 15 to 20 years for it to become super dominant, but it didn't happen in 18 months, as people fantasized in 1999.

And maybe what we have in AI is something like this. Figuring out how to actually apply it in all these different ways is going to take something like two decades. But that doesn't detract from it being a really big deal. Do you think that the lack of acknowledgement or public celebration, or at least of mainstream discussion, which I think should be everywhere, that we've passed the Turing test, do you think it's connected to the fact that this stuff accelerates so rapidly that even though we've essentially breached this new territory...

We still know that GPT-5 is going to be better, GPT-6 is going to be insane. And they're working on these right now, and the change is happening so quickly, we're almost a little reluctant to acknowledge where we're at. I've probably, for 15 years or so, often been on the side that there isn't that much progress in science or tech, or not as much as Silicon Valley likes to claim. And even on the AI level, I think it's a massive technical achievement, but it's still an open question whether it's actually going to lead to much higher living standards for everybody. The Internet was a massive achievement. How much did it raise people's living standards? That's a much, much trickier question. But in this world where not much has happened, one of the paradoxes of an era of relative tech stagnation is that when something does happen, we don't even know how to process it.

I think bitcoin was a big invention, whether it was good or bad, but it was a pretty big deal, and it was systematically underestimated for at least the first 10 or 11 years. You could trade it. It went up smoothly for 10 or 11 years. It didn't get repriced all at once, because we're in a world where nothing big ever happens, and so we have no way of processing it when something pretty big happens. The Internet was pretty big in '99. Bitcoin was moderately big. And I'd say passing the Turing test is really big. It's on the same scale as the Internet. And because our lived experience is that so little has felt like it's been changing for the last few decades, we're probably underestimating it.

It's interesting that you say that. We feel like so little has changed, but in our age we've seen all the change, right? We saw the end of the Cold War, we saw answering machines, we saw VHS tapes, then we saw the Internet, then where we're at right now, which is this bizarre moment in time where people carry the Internet around with them in their pocket every day. And these super sophisticated computers are ubiquitous; everybody has one. There's incredible technology that's being ramped up every year. They're getting better all the time.

And now there's AI. There's AI on your phone. You can access ChatGPT and a bunch of different programs on your phone. And I think that's an insane change. Especially with the use of social media, I think it's one of the most bizarre changes our culture has ever seen. It can be a big change, culturally or politically, but the kinds of questions I'd ask are: how do you measure it economically? How much does it change GDP? How much does it change productivity?

The story I would generally tell for the last 50 years, since the early seventies, is that we've been in an era of not absolute stagnation but relative stagnation, where there has been very limited progress in the world of atoms, the world of physical things, and there has been a lot of progress in the world of bits: information, computers, the Internet, mobile Internet, now AI. What are you referring to when you say the world of physical things? If we had to define technology, if we were sitting here in 1967, the year we were born, and we had a discussion about technology, what would technology have meant?

It would have meant computers. It would have also meant rockets. It would have meant supersonic airplanes. It would have meant new medicines. It would have meant the green revolution in agriculture, maybe underwater cities, because technology simply gets defined as that which is changing, that which is progressing. And so there was progress on all these fronts. Today, for the last 20 years, when you talk about technology, technology has been reduced to meaning computers. And that tells you that the structure of progress has been weird. There's been this narrow cone of very intense progress around the world of bits, around the world of computers, and then all the other areas have been relatively stagnant. We're not moving any faster.

The Concorde got decommissioned in 2003, or whenever it was. And then with all the low-tech airport security measures, it takes even longer to get through all of them and fly from one city to the next. The highways have gone backwards because there are more traffic jams, and we haven't figured out ways around those. We're literally moving slower than we were 40 or 50 years ago. And then, of course, there's also a sense in which the screens and the devices have this distracting effect. When you're riding a hundred-year-old subway in New York City and you're looking at your iPhone, you can think, wow, this is a cool new gadget.

But you're also being distracted from the fact that your lived environment hasn't changed in 100 years. So there's a question of how important this world of bits is versus the world of atoms. As human beings, we're physically embodied in a material world, and so I would always say this world of atoms is pretty important. And when that's pretty stagnant, there's a lot of stuff that doesn't make sense.

I was an undergraduate at Stanford in the late eighties. And at the time, in retrospect, every engineering area would have been a bad thing to go into: mechanical engineering, chemical engineering, all these engineering fields where you're tinkering and trying to do new things, because these things turned out to be stuck. They were regulated; you couldn't come up with new things to do. Nuclear engineering, aero/astro engineering, people already knew those were really bad ones to go into. They were outlawed; you weren't going to make any progress in new nuclear reactor designs or stuff like that. Electrical engineering, which was the one adjacent to making semiconductors, was still okay. And then the only field that was actually going to progress a lot was computer science.

And again, it's been very powerful. But that was not the felt sense in the 1980s. In the 1980s, computer science... when people use the word science, I'm in favor of science; I'm not in favor of "science" in quotes, and it's always a tell that it's not real science. And so when we call it climate science or political science or social science, you're just making it up, and you have an inferiority complex toward real science, something like physics or chemistry. And computer science was in the same category as social science or political science. It was a fake field for people who found electrical engineering or math way too hard.

You don't feel that climate science is a real science? There are several different things one could say. It's possible climate change is happening. It's possible we don't have great accounts of why it's going on. So I'm not questioning any of those things, but how scientific is it? I don't think it's a place where we have really vigorous debates. Maybe temperatures are going up because of carbon dioxide emissions, maybe it's methane, maybe people are eating too much steak and it's the cows flatulating.

And you have to measure how much of a greenhouse gas methane is versus carbon dioxide. I don't think they're rigorously doing that stuff scientifically. And I think the fact that it's called climate science tells you that it's more dogmatic than anything that's truly a science should be. Why? Dogma doesn't mean it's wrong, but why does the fact that it's called climate science mean that it's more dogmatic? Because if you said nuclear science, you wouldn't question it, right?
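For context, the methane-versus-CO2 comparison alluded to here is conventionally done with global warming potential (GWP) factors. A minimal sketch, assuming round IPCC-style values; the exact numbers vary by assessment report and time horizon:

```python
# Back-of-envelope CO2-equivalent comparison of methane vs. carbon dioxide.
# GWP values below are approximate IPCC-style figures, assumed for illustration.
GWP_CH4_20YR = 80    # methane traps roughly 80x the heat of CO2 over 20 years (approx.)
GWP_CH4_100YR = 28   # roughly 28x over 100 years (approx.)

def co2_equivalent(tonnes_ch4: float, horizon_years: int = 100) -> float:
    """Convert tonnes of methane to tonnes of CO2-equivalent for a given horizon."""
    gwp = GWP_CH4_20YR if horizon_years == 20 else GWP_CH4_100YR
    return tonnes_ch4 * gwp

# One tonne of methane "counts" very differently depending on the horizon:
print(co2_equivalent(1, horizon_years=20))   # 80.0 tonnes CO2e
print(co2_equivalent(1, horizon_years=100))  # 28.0 tonnes CO2e
```

The horizon parameter is the crux of the measurement question: methane is far more potent but shorter-lived than CO2, so which gas "matters more" depends on the accounting window chosen.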

Yeah, but no one calls it nuclear science; they call it nuclear engineering. Interesting. The only thing is, I'm just making a narrow linguistic point. Computer science is legitimately science at this point; people say computer science has worked. But in the 1980s, all I'm saying is, it was in the same category as, let's say, social science or political science. It was a tell that the people doing it deep down knew they weren't doing real science.

There's certainly ideology that's connected to climate science, and then there are certainly corporations that are invested in this prospect of green energy and the concept of green energy, and they're profiting off of it and pushing these different things, whether it be electric car mandates or whatever.

It is. Like California: I think by 2035 they have a mandate that all new vehicles have to be electric, which is hilarious when you're connected to a grid that can't support the electric cars it currently has. Within a month or two after they said that, Gavin Newsom asked people not to charge their Teslas because it was summer and the grid couldn't handle it. Yeah, look, it was all linked into all these ideological projects in all these ways.

And there's an environmental project, and maybe it shouldn't be scientific. The hardcore environmentalist argument is: we only have one planet and we don't have time to do science. If we have to do rigorous science to prove that we're overheating, it'll be too late. And so if you're a hardcore environmentalist, you don't want to have as high a standard of science. Yeah, but my intuition is certainly that when you go away from that, you end up with things that are too dogmatic, too ideological.

Maybe it doesn't even work, even if the planet's getting warmer. Maybe methane is a more dangerous greenhouse gas than carbon dioxide; we're not even capable of measuring that. We're also ignoring certain things, like regenerative farms that sequester carbon. And then you have people like Bill Gates saying that planting trees to deal with carbon is ridiculous, that it's a ridiculous way to do it.

Like, how is that ridiculous? Trees literally turn carbon dioxide into oxygen. Carbon dioxide is their food; that's what powers all plant life, and that's how we have a symbiotic relationship with them. And the more carbon dioxide there is, the greener it is, which is why Earth is greener today than it has been in 100 years. Sure.

These are all facts that are inconvenient to people who have a very specific, narrow window of how to approach this. Sure. Although there are probably ways to steelman the other side, too. Maybe the original manifesto, the one that's always very interesting from the other side, was the 1972 book by the Club of Rome, The Limits to Growth.

We need to head towards a society in which there's very limited growth, because if you have unlimited growth, you're going to run out of resources, and if you don't run out of resources, you'll hit a pollution constraint. In the 1970s, it was: you're going to have overpopulation, you're going to run out of oil. We had the oil shocks. And then by the nineties, it had morphed into more of a pollution problem: carbon dioxide, climate change, other environmental things.

There's been some improvement in oil, in carbon fuels, with fracking and things like this in Texas, but it's not at a scale that's enough to give an American standard of living to the whole planet. We consume 100 million barrels of oil a day globally. Maybe fracking can add 10 percent, 10 million, to that. If everybody on this planet had an American standard of living, it would be something like 300 to 400 million barrels of oil. And I don't think that's there.
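The barrels-of-oil arithmetic here can be sanity-checked with a back-of-envelope scaling. A minimal sketch; the 100-million-barrel global figure comes from the conversation, while the US consumption and population numbers are assumed round figures, not quoted by the speakers:

```python
# Rough scaling: what would global oil demand be if everyone consumed
# at the US per-capita rate? All inputs are assumed round figures.
GLOBAL_DEMAND = 100e6   # barrels/day, global (figure cited in the conversation)
US_DEMAND = 20e6        # barrels/day, US (assumed round number)
US_POP = 330e6          # US population (assumed)
WORLD_POP = 8e9         # world population (assumed)

per_capita_us = US_DEMAND / US_POP             # barrels per person per day
world_at_us_rate = per_capita_us * WORLD_POP   # barrels/day if everyone matched it

print(round(world_at_us_rate / 1e6))               # 485 (million barrels/day)
print(round(world_at_us_rate / GLOBAL_DEMAND, 1))  # 4.8 (times current demand)
```

Under these assumed inputs the figure lands near 485 million barrels a day, the same order of magnitude as the 300 to 400 million cited, and roughly five times current global production.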

I always wonder whether that was the real environmental argument: we can't have an American standard of living for the whole planet, we somehow can't justify this degree of inequality, and therefore we have to figure out ways to dial back, tax the carbon, restrict it. And maybe there's some sort of a Malthusian calculus that's more about resources than about pollution.

Artificial Intelligence, Technology, Innovation, Economics, Leadership, Surveillance