ENSPIRING.ai: Decoding the Future of Politics and Global Technologies
Peter Thiel dives into the shifting landscape of political allegiances, expressing support for certain political figures while acknowledging his decision to withdraw from active political engagement this cycle. He discusses potential outcomes of the upcoming elections, expressing a belief that they might not be as close as anticipated, with Trump's potential victory posing both relief and disappointment. Thiel sheds light on the nuances of electoral processes, suggesting reforms to ensure fairness and prevent malpractice.
The discussion extends to global tensions, particularly regarding China's policies and the idea of decoupling from China economically. Thiel touches on Taiwan's strategic importance and potential conflicts, while also offering insights into the broader dynamics of global power shifts. In technology, he discusses AI's transformative impact, likening it to the internet boom of the late 1990s, and speculates on the future balance of power in this realm, emphasizing Nvidia's current dominance.
Key Vocabularies and Common Phrases:
1. Pithy [ˈpɪθi] - (adj.) - Concise and forcefully expressive.
Peter was the person who told me this really Pithy quote.
2. Elongated [ɪˈlɔːŋɡeɪtɪd] - (adj.) - Drawn out or extended in length.
It's not this two month elongated process.
3. Miscarriage [mɪsˈkærɪdʒ] - (n.) - A wrong or improper handling, as of justice or process.
And if you haven't paid it off by the time you're 65...
4. Confederation [kənˌfedəˈreɪʃən] - (n.) - An alliance or union of parties or groups.
And then if we now think about what actually happened.
5. Deficit [ˈdɛfəsɪt] - (n.) - The amount by which something, especially money, falls short.
And if you can find some way to meaningfully reduce the Deficit.
6. Surreal [səˈriːəl] - (adj.) - Bizarre; very strange; imaginative.
It was real surreal during those negotiations.
7. Thucydides' Trap [θjuːˈsɪdɪdiːz træp] - (n.) - The tendency toward inevitable conflict between a rising and an established power.
That's sort of the sense of history that's strongly the sort of Thucydides trap.
8. Decouple [diːˈkʌpəl] - (v.) - Sever the connection of one thing from another.
I'd completely decouple it from ByteDance, because TikTok will be banned 24 hours after the Taiwan invasion.
9. Catastrophic [ˌkætəˈstrɑfɪk] - (adj.) - Involving or causing great damage or suffering.
And I still think it's quite Catastrophic if it gets taken over by the communists.
10. Contrarian [kənˈtreəriən] - (n.) - A person who opposes or rejects popular opinion, going against current practice.
My one contrarian view on the election is...
Decoding the Future of Politics and Global Technologies
Peter was the person who told me this really Pithy quote: in a world that's changing so quickly, the biggest risk you can take is not taking any risk. This guy is a tough nut to try to sort of explain. Changed money with PayPal. Was the first outside investor in Facebook. Backed Palantir, which, I believe, helped find Osama bin Laden. Almost certainly the most successful technology investor in the world. I don't think the future is fixed. It's a question of agency. What I think works really well are sort of one of a kind companies. How do you get from zero to one? What great business is nobody building? Tell me something that's true that nobody agrees with you on.
All right, Peter, welcome back. It's good to see you. You don't do this too often, so we do appreciate it. But when you do do it, you're always super candid, and we appreciate that as well. You fit right in here. You're sitting this year's political cycle out. Right into politics, then. Well, no, I mean, I think this is a question we all have, which is: you were very active. You bet on JD in a major way. He delivered today; it was a very impressive discussion. Why aren't you involved this cycle? It's very confounding to us, because these are your guys, man. How much time do we have? Let's talk about this for 2 hours or something. I don't know. Look, I have a lot of conflicted thoughts on it. I am still very strongly pro Trump, pro JD. I've decided not to donate any money politically, but I'm supporting them in every other way possible, obviously. I think. I think there's. My pessimistic thought is that Trump is going to win, and probably will win by a big margin. He'll do better than the last time, and it'll still be really disappointing, because elections are always a relative choice, and then once someone's president, it's an absolute one and you get evaluated. Do you like Trump or Harris better? And there seem to be a lot of reasons that one would be more anti Harris than anti Trump. Again, no one's pro any of these people. It's all negative. Right. But then after they win, there will be a lot of buyer's remorse and disappointment. And that's sort of. That's sort of the arc that I see of what's going to happen. And it's somewhat under-motivating, I don't know, just to describe it. I think the odds are slightly in favor of Trump, but it's basically 50-50. My one contrarian view on the election is that it's not going to be close. Most presidential elections aren't. One side just breaks. 2016 and 2020 were super close, but two thirds of the elections aren't. And you can't always line things up and figure it out.
I think either the Kamala bubble will burst or maybe the Trump voters get really demotivated and don't show up. But I think one side is simply going to collapse in the next two months. And then, you know, if you want to get involved, with all the headaches that come with being involved, it makes a difference counterfactually only if it's a really close election; if it's really close, everything makes a difference. If it's not even close, I don't think it makes much of a difference. By the way, if it's going to be a razor thin, close election, then I'm pretty sure Kamala will win, because they will cheat. They will fortify it. They will steal the ballots. And so, in the event that it's close, I don't want to be involved. In the event that it's not close, I don't need to be involved. And that's sort of my straightforward jumping off point right there.
How much cheating on a percentage basis do you think happens every year? And do you think Trump... Actually, you need to be careful with the verb. You know, cheating, stealing, that implies something happened in the dark at night. Okay. I think the verb you're allowed to use is fortify. Okay. Yeah, we don't want you banned from YouTube. Ballot harvesting. I mean, there were all these rule changes. It was sort of done in plain daylight. But, yeah, I think our elections are not perfectly clean. Otherwise we could examine it; we could have vigorous debate about it. Well, what would you change then? What should change? Because we all want everybody's votes to count. We want it to be clean. I'm talking about the audience. I don't know. At a minimum, you'd try to run elections the same way you do in every other western democracy. You have one day voting. You have practically no absentee ballots. It's one day where everything happens, not this two month elongated process; that's the way you do it in every other country. You have somewhat stronger voter ID to make sure the people who are voting have a right to vote, and you make it a national holiday. That's basically what you do in every other western democracy. And it used to be much more like that in the US. It's meaningfully decayed over the last 20 to 30 years. 20, 30, 40 years ago, you got the results on the day of the vote. And that sort of stopped happening a while ago.
What would make you not disappointed? So Trump gets elected. What's your counter narrative? We're a year or two past the election, Trump is president. What makes you say, I'm surprisingly not disappointed? What takes place? Man, it's, you know, I think there are some extremely difficult problems where it's really hard to know how to solve them. I wouldn't know what to do. But we have an incredibly big Deficit. And, yeah, if you can find some way to meaningfully reduce the Deficit with no tax hikes and without GDP contraction... well, you could do it if you got a lot of GDP growth, maybe. Right. But if you could meaningfully reduce the Deficit with no tax hikes, that would be very impressive. I think we're sort of sleepwalking into Armageddon, where Ukraine and the conflict in Gaza are just sort of the warm ups to the China Taiwan war. And so if Trump can find a way to head that off, that would be incredible. If they don't go to war in four years, that would be better than I would expect, possibly. In relation to Taiwan, if Trump called you and asked, should we defend it or not, in this acute case, would you advise to let Taiwan be taken by China in order to avoid a nuclear holocaust and World War Three, or would you believe that we should defend it and defend free countries like that? Well, I think you're probably not supposed to say. You're Peter Thiel. No, no, no. Look, I think there are so many ways our policies are messed up, but probably the one thing that's roughly correct in the Taiwan policy is that we don't tell China what we're going to do. What we tell them is, we don't know what we'll do and we'll figure it out when you do it, which probably has the virtue of being correct. And then I think if you had a red line at Quemoy and Matsu, the islands five miles off the coast of China... That's unbelievable.
If you say we want some guardrails and we won't defend Taiwan, then they'd get invaded right away. So I think the policy of strategic ambiguity, and maybe not even having a policy in some ways, is relatively the best. I think anything precise you say is going to just lead to war right away. But what do you believe? Worth defending, or not worth stirring the conflict? Is democracy in this tiny island worth going to war over or not? According to Peter Thiel, it's not worth World War Three. And I still think it's quite Catastrophic if it gets taken over by the communists. How does the world... Those can both be true. How does the world divide if we end up in heightened escalation? China, Russia, Iran: is that an axis that forms? Think out the next decade in kind of your base case. I don't know what happens. Estimate how the world... I don't know what happens militarily if there's a China Taiwan invasion. I mean, maybe we roll over, maybe it escalates all the way to nuclear war, probably some very messy in between thing, sort of like what you have in Ukraine. What I think happens economically is very straightforward. With Russia and Germany, you had one Nord Stream pipeline, and we have the equivalent of 100 pipelines between the US and China, and they all blow up. I met the TikTok CEO about a year ago, and maybe I wouldn't have said this now, but what I told him, and felt was very honest advice, was: you don't need to worry about the US. We're never going to do anything about TikTok. We're too incompetent. But if I were in your place, I would still get the business out of China. I would get the computers out, the people out. I'd completely decouple it from ByteDance, because TikTok will be banned 24 hours after the Taiwan invasion. And if you think there's a 50-50 chance this happens, that will destroy 100% of the value of the TikTok franchise. What was his reaction?
You know, he said that they had done a lot of simulations, and there were a bunch of companies in World War One and World War Two that managed to sell things to both sides. He doesn't seem so bright to me. Do you think he's... No. Honestly, what is your take on him? He didn't disagree with my frame, and I always find it flattering if someone basically agrees with my framing. He seemed perfectly bright to me, even though that kind of game theory fools a lot of bright people. But, Peter, correct me if I'm wrong about this: I saw you give a talk last summer with Barry Weiss, and you talked about how this decoupling should be happening. You weren't just saying should. You were recommending that every industry leader consider decoupling from China. I think your comment was, it's like picking up nickels in front of a freight train. Do you remember saying that? Well, I think there are a lot of different ways in which businesses are coupled to China. There were investors that tried investing. There are people who tried to compete within China. There are people who built factories in China for export, and there are different parts of that that worked to varying degrees. But, yeah, I certainly would not try to invest in a company that competed domestically inside China. I think that's virtually impossible. I think it's probably quite tricky even to invest in Chinese businesses. And then there is sort of this model of building factories in China for export to the west, and it was a very big arbitrage. These things do work. I mean, I visited the Foxconn factory nine years ago, and, you know, you have people who get paid a dollar and a half, two dollars an hour, and they work 12 hours a day, and they live in a dorm room with two bunk beds where you get eight people in the dorm room; someone's sleeping in your bed while you're working, and vice versa. And you sort of realize they're really far behind us, or they're really far ahead of us.
And either way, it's not that straightforward to just shift the iPhone factories to the United States. So I sort of understand why a lot of businesses ended up there and why this is the arrangement that we have. But, yeah, my intuition for what is going to happen, without making any normative judgments at all, is that it is going to decouple.
How inflationary will that be? It's presumably pretty inflationary. Yeah, that's probably the... I don't know. You'd have to look at what the inelasticities of all these goods are. So if that's true, what's the policy reaction? Well, it may not be as inflationary as people think, because people always model trade pairwise, in terms of two countries. So if you literally have to move the people back to the US, that's insanely expensive; I don't know how much it would cost to build an iPhone here. Does India become China? I think India is sort of too messed up. But you shift it to Vietnam, Mexico; there are 5 billion people living in countries where the incomes are lower than China's. And so probably the negative sum trade policy we should have with China is we should just shift it to other countries, which is a little bit bad for the US, extremely bad for China, and, let's say, really good for Vietnam. And that's kind of the negative sum policy that's going to manifest as this sort of decoupling happens. Let's talk about avoiding it for a second here. Trump seems to be extremely good with dictators and authoritarians. Kim Jong Un seems like a big fan. I mean that as a compliment, as a superpower, right? Like, he doesn't have a problem talking to them, he connects with them, and they seem to like him. So what would be the path to him working with Xi to avoid this? Is there a path to avoid this? Because we were sitting here last year talking about this, and it just seems mind boggling that if everybody agrees that this is going to happen, we can't figure out a way to make it not happen. Well, it's not just up to us. And so I don't know; it's obviously somewhat of a black box. I feel we just have no clue what people in China think.
But I think the sense of history there is strongly the sort of Thucydides trap idea: you have a rising power against an existing power, and it tends to end in conflict. It's Wilhelmine Germany versus Britain before World War One. It's Athens against Sparta, the rising power against the existing power. You tend to get conflict. That's probably what, deep down, I think is really deep in the China DNA. So maybe the meta version would be: the first step to avoiding the conflict is to start by admitting that China believes the conflict's happening. Right. And if people like you are constantly saying, well, we just need to have some happy talk, that is a recipe. That's a recipe for World War Two. I'm not advocating happy talk, necessarily. I get accused of being a bit more hawkish, obviously. In general, I don't know. I'm not sure Trump should have talked to the North Korean dictator. But, yeah, in general, it's probably a good idea to try to talk to people most of the time, even if they're really bad people. And it's certainly a very odd dynamic with the US and Russia at this point, where I think it is impossible for anybody in the Biden administration even to have a back channel communication with people on the other side. I don't think Tucker Carlson counts as an emissary from the Biden administration. And if anybody who talks gets tuckered, or I don't know what the verb is, that seems worse than the alternative. Can we talk about technology? You have a speech where you talk about some of the misguided things we've done in the past in the name of technology, and use, like, big data as an example of that. What is AI? Oh, man. That's sort of a big question. Yeah. I always had this riff where I don't like the buzzwords: machine learning, big data, cloud computing.
You know, I'm going to build a mobile app bringing the cloud to, you know... if you have sort of a concatenation of buzzwords, my first instinct is just to run away as fast as possible. It's some really bad groupthink. And for many years, my bias was probably that AI was one of the worst of all these buzzwords. It meant the next generation of computers, the last generation of computers, anything in between. So it's meant all these very different things. If we roll the clock back to the 2010s, probably the AI debate, to the extent you could concretize it, was framed by two canonical books. There was the Bostrom book, Superintelligence (2014), where AI was going to be this superhuman, super duper intelligent thing. And then the anti-Bostrom book was Kai-Fu Lee's 2018 AI Superpowers, which you can think of as the CCP rebuttal to Bostrom, where basically AI was going to be surveillance tech, face recognition, and China was going to win because they had no qualms about applying this technology. And then if we now think about what actually happened, let's say, with the LLMs and ChatGPT, it was really neither of those two. It was this in between thing, which was actually what people would have defined AI as for the previous 60 or 70 years, which is passing the Turing test. That's the somewhat fuzzy line: a computer that can pretend to be a human, that can fool you into thinking it's a human. And even with the fuzziness of that line, you could say that pre-ChatGPT it wasn't passed, and then ChatGPT passed it. And that seems very, very significant, and then obviously leads to all these questions. What does it mean? Is it going to complement people? Is it going to substitute for people? What does it do to the labor market? Do you get paid less? So there are all these questions, but it seems extremely important.
And then there are certainly the big picture questions, which I think Silicon Valley is always very bad at talking about, like: what does it mean to be a human being? I don't know; the stupid 2022 answer would be that humans differ from all the other animals because we're good at languages. Whether you're a three year old or an 80 year old, you speak, you communicate, we tell each other stories. This is what makes us different. And so, yeah, I think there's something about it that's incredibly important and very disorienting. The narrower question I always have as an investor is, how do you make money with this stuff? And how you make money is pretty confusing. I don't know. This is always where I'm anchored; the late nineties is sort of the formative period for me. But I keep thinking that AI in 2023, 2024 is like the Internet in 1999. It's really big. It's going to be very important. It's going to transform the world, not in six months, but in 20 years. And then there are probably all kinds of incredibly Catastrophic approximations, where which businesses are going to make money, who's going to have a monopoly, who's going to have pricing power, is super unclear. Probably one layer deeper of analysis: if attention is all you need, and if you're not post economic, you need to pay attention to who's making money. And in AI, basically one company, Nvidia, is making over 100% of the profits. Everybody else is collectively losing money. And so you should try to do some sort of analysis. Do you go long Nvidia? Do you go short? It's my monopoly question: is it a really durable monopoly? And then it's hard for me to know, because I'm in Silicon Valley, and we haven't done anything in semiconductors for a long time, so I have no clue. Let's de-buzzword the word AI and say it's a bunch of process automation.
Let's just say that's version 0.1, where brains that are roughly the equivalent of a teenager can do a lot of manual stuff. Have you thought about what it means for 8 billion people in the world if there's an extra billion that necessarily couldn't work, in political or economic terms? I don't know. I don't know if this is the same, but the history of 250 years of the industrial revolution was that it adds to GDP. It frees people up to do more productive things. Yeah, there was a Luddite critique in the 19th century of the factories, that people were going to be unemployed and wouldn't have anything to do because the machines would replace the people. Maybe the Luddites are right this time around. I'm probably pretty skeptical of it. But, yeah, it's extremely confusing where the gains and losses are. There's always sort of a hobby horse; you can always just use it on your hobby horses. So, I don't know, my anti Hollywood or anti university hobby horse is that it seems to me that the AI is quite good at the woke stuff. And so if you want to be a successful actor, you should be maybe a little bit racist or a little bit sexist, or just really funny, and you won't have any risk of the AI replacing you. Everybody else will get replaced. And then, I don't know. Claudine Gay, the plagiarizing Harvard University president: the AI will produce endless amounts of these sort of, I don't even know what to call them, woke papers. And they were all already sort of plagiarizing one another. They were always saying the same thing over and over again, using their own version, and the AI is just going to flood the zone with even more of that. And, you know, I don't know.
Obviously they've been able to do it for a long time and no one's noticed. But I think at this point, it doesn't seem promising from a competitive point of view. Obviously, these are my hobby horses, so maybe it's just wishful thinking on my part. What are the areas of technology that you're curious about, where your mind is like, wow, this is really something, I have to learn more, pay attention? You know, I always think you want to instantiate it more in companies than in things. If you ask where innovation is happening in our society, it doesn't have to be this way, but it's mostly in a certain subset of relatively small companies. You have these relatively small teams of people that are really pushing the envelope, and that's sort of what I find inspiring about venture capital. And then obviously, you don't just want innovation; you also want it to translate into good businesses. But that's where it happens. It doesn't happen in universities. It doesn't happen in government. There was a time it did, somehow, in this very, very weird, different country that was the United States in the 1940s. Somehow the army organized the scientists and got them to produce a nuclear bomb in Los Alamos in three and a half years. And the way the New York Times editorialized after that, it was sort of an anti libertarian write up: obviously, maybe if you'd left the prima donna scientists to their own devices, it would have taken them 50 years to build a bomb, but the army could just tell them what to do. And this will silence anybody who doesn't believe the government can do things. They don't write editorials like that in the New York Times anymore. But I think that's sort of where one should look. I think a crazy amount of it still happens in the United States, and we've episodically tried to do all this investing.
We've probably tried to do too much investing in Europe over the years. It's always sort of a junket; it's a nice place to go on vacation as an investor. And I don't have a great explanation, but it's a very strange thing that so much of it is still... The US is somehow still a country where people do new things. Peter, is that a team, organizational, social, evolutionary problem in the United States? What is the root cause of the failure to innovate in the United States relative to the expectation going back 70 years, 50 years, et cetera, from, you know, the rocket ships, and we're all going to live... Yeah, well, this is always one of the big picture claims I have, that we've been in an era of relative tech stagnation the last 40 or 50 years. Or the tagline we had in techno-negative mode: they promised us flying cars, all we got was 140 characters. Which is not an anti Twitter, anti X commentary, even though the way I used to always qualify it was that at least it was a good company. You had 10,000 people who didn't have to do very much work and could smoke marijuana all day, very similar to Europe. And so I think that part actually did get corrected. But I think, like, what went wrong? Because you point out that it's not a technology trend tracker that you think about. It's about people and teams that innovate and drive to outcomes based on their view of the world. And what's gone wrong with our view of the world and our ability to organize, to achieve the seemingly unachievable, with very rare exceptions? Obviously, Elon's here later. But, you know, it's overdetermined. The rough frame I always have, and again, it's not that there's been no innovation: there's been a decent amount of innovation in the world of bits. Computers, Internet, mobile Internet, crypto, AI.
So there are all these world of bits places where there was a significant but somehow narrow cone of progress, but it was everything having to do with atoms that was slow. And this was already the case when I was an undergraduate at Stanford in the late eighties. In retrospect, any applied engineering field was a bad idea. It was a bad idea to become a chemical engineer, a mechanical engineer. Aero Astro was terrible. Nuclear engineering, everyone knew; I mean, no one did that. And there's something about the world of atoms that, from a libertarian point of view, you'd say got regulated to death. There's probably some set of arguments where the low hanging fruit got picked and it got harder to find new things to do, although I always think that was just a sort of baby boomer excuse for covering up the failures of that generation. And then I think maybe a very big picture part of it was that at some point in the 20th century, the idea took hold that not all forms of technological progress were simply good and simply for the better. There's something about the two world wars and the development of nuclear weapons that gradually pushed people into this more risk averse society. And it didn't happen overnight, but maybe a quarter century after the nuclear bomb. By Woodstock it happened. By Woodstock it happened. Yeah, because that was the same summer we landed on the moon. Yeah. Woodstock was three weeks after that. Yeah, that was the tipping point. Progress stopped and the hippies took over. OK, can we shift gears just to the domestic economy? What do you think is happening in the domestic economy? Just as backdrop, we've had something like 14 straight months of downward revisions to jobs. The revisions are supposed to be completely random, but somehow they've all been down. Probably doesn't mean anything.
There's also what's happening with the yield curve, but I'll stop there. What's your take on what's happening in the economy? You know, it's... Man, it's always hard to know exactly. I suspect we're close to a recession; I've probably thought this for a while. It's being stopped by really big government spending. So in May of 2023, the projection for the Deficit in fiscal year 2024, which is October of 2023 to September 2024, was something like 1.5 to 1.6 trillion. The Deficit is going to come in about 400 billion higher. So a crazy Deficit was projected, and it was still way off. And if we had not somehow found another 400 billion to add to this crazy Deficit at the top of the economic cycle (you're supposed to increase deficits in a recession, not at the top of the cycle), things would probably be very shaky. There's some way in which we have too much debt and not enough sustainable growth. Again, I always think it comes back to tech innovation. There probably are other ways to grow an economy without science or tech progress, but those don't seem to be on offer. And then that's where it's very deeply stuck. If you wind back over the last 50 years, there's always a question: why did people not realize sooner that this tech stagnation had happened? And I think there were two one time things people could do economically that had nothing to do with science or tech. There was a 1980s Reagan-Thatcher move, which was to massively cut taxes, deregulate, and allow lots of companies to merge and combine. It was sort of a one time way to make the economy a lot bigger, even though it was not something that really had a compounding effect. So it led to one great decade. And that was sort of the right wing capitalist move.
And then in the nineties, there was sort of a Clinton-Blair center left thing, which was leaning into globalization, and there was a giant global arbitrage you could do, which also had a lot of negative externalities that came with it, but it was a one time move. I think both of those are not on offer. I don't necessarily think you should undo globalization. I don't think you should raise taxes like crazy. But you can't do more globalization or more tax cuts; that's not going to be the win here. And so I think you have to somehow get back to the future. We have time for a couple more questions. You, I think, saw that maybe these Ivy League institutions weren't producing the best and brightest, or weren't exactly hitting their mandate. And you created the Thiel fellows, and you've been doing that for a while. And I meet them all, because they all have crazy ideas and they pitch me for angel investment. What have you learned from getting people to quit school and giving them $100,000? And how many parents call you and get really upset that their kids are quitting school? Well, I've learned a lot. I mean, I don't know. I think the universities are far worse than I even thought when I started this thing. You know, I did this debate at Yale last week. Resolved: higher education is a bubble. And you go through all the different numbers, and again, I was careful to word it in such a way that I didn't have to, you know... people kept saying, well, what's your alternative? What should people do instead? And I said, nope, that was not the debate. I'm not your guidance counselor. I'm not your career counselor. I don't know how to solve your problems. But if something's a bubble, the first thing you should do is probably not lean into it in too crazy a way. And, you know, the student debt was 300 billion in 2000.
It's close to $2 trillion at this point, so it's just been this runaway process. And then look at it by cohort: if you graduated from college in 1997, twelve years later people still had student debt, but most of them had largely paid it down. We started the Thiel Fellowship in 2010, and 2009 felt like the first cohort where this really stopped. If you take the people who graduated from college in 2009 and fast forward twelve years to 2021, the median person had more student debt twelve years later than they graduated with, because the debt compounds faster than they can pay it down. It was partially the global financial crisis: people had less well-paying jobs and they stayed in college longer. It's been this background thing where the colleges have decayed in really significant ways. On some level, a lot of the debates in our society are probably dominated by a boomer narrative. Maybe the baby boomers were the last generation where college really worked, and they think, well, I worked my way through college, so why can't an 18-year-old do that today? So I think the bubble will be done once the boomers have exited stage left. But it would be good if we figured something out before then.

Does the government need to stop underwriting the loans? Because it's the lending. I think 90-plus percent of the capital in the student loan programs is funded by the federal government, and if you're an accredited university, students can take out a loan and go to it. In a rigorous free-market system, you would have an underwriter who asks: are you going to be able to graduate and make enough money to pay your loan off? Is this a good school? Are you going to get a good job?
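As an editorial aside: the compounding dynamic Thiel describes, a median borrower owing more twelve years after graduation than at graduation, happens whenever yearly interest accrual exceeds the yearly payment. A minimal sketch, using entirely hypothetical figures (the loan size, rate, and payment below are illustrative, not from the talk):

```python
def balance_after(principal, annual_rate, annual_payment, years):
    """Track a loan balance year by year: interest accrues on the
    outstanding balance, then the fixed payment is subtracted."""
    balance = principal
    for _ in range(years):
        balance = balance * (1 + annual_rate) - annual_payment
    return round(balance, 2)

# Hypothetical borrower: $40,000 at 6% paying $2,000/year.
# The balance grows, because 6% interest ($2,400 in year one)
# exceeds the $2,000 payment -- the gap itself then compounds.
print(balance_after(40_000, 0.06, 2_000, 12))

# Same loan with $3,000/year payments shrinks instead.
print(balance_after(40_000, 0.06, 3_000, 12))
```

The tipping point is simply whether the payment exceeds `principal * rate` in the first year; below that line, time works against the borrower.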
And then the market would figure out whether to give you a loan, what the rate should be, and so on. But in this case, the government simply provides capital to support all of this. As a result, everything's gotten more expensive, and the discipline in the system that would qualify schools, and weigh the quality of those schools against earning potential over time, is gone. So we need the government to get out of the student loan business?

Well, look, in some ways I'm right-wing on this and in some ways I'm left-wing. The place where I'm left-wing is, I do think a lot of the students got ripped off, and so I think there should be some kind of broad debt forgiveness at this point. But who should pick up the tab? It's not just the taxpayers; it's the universities and it's the bondholders. Take a little bit out of those endowments. Obviously, if you just make it the taxpayers, then the universities can just charge more and more, with no incentive to reform whatsoever. And in 2005, under Bush 43, the bankruptcy laws got rewritten in the US so that you cannot discharge student debt even if you go bankrupt. If you haven't paid it off by the time you're 65, your wages and Social Security checks will be garnished. It's crazy.

But should the federal government get out of the student lending business?

Well, if we start with my place, where a lot of the student debt gets forgiven, and then reform the system, we'll see how many people are willing to lend and how many of the colleges can pay for all the students.

What's your sense? If it was a totally free-market system, with no tuition support, how many colleges would shut down because they wouldn't be able to survive?

It probably would be a lot smaller.
You might not have to shut them down, because a lot of them have gotten extremely bloated. It's like Baumol's cost disease. I have no idea, but a place like UCLA probably has twice or three times as many bureaucrats as it had 30 or 40 years ago. There are all these parasitic people that have gradually accrued, so there would be a lot of rational ways to dial this back. But yeah, maybe we're going to need a new location for next year. If the only way to lose weight is to cut off your thumb, that's kind of a difficult way to go on a diet.

Peter, three of your longtime collaborators, Elon Musk, Mark Zuckerberg, and Sam Altman, are arguably the three leading AI language model leaders. Which one is going to win? Rank them in order and tell us a little bit about each. Peter said he would answer any question.

I said I would take any question. I didn't say I'd answer any question.

You said you would. You said today you felt extremely honest and Candid.

I've already been extremely honest and Candid. I think it's whoever I talked to last. They're all very, very convincing people.

Great answer.

I don't know. I talked to Elon a while ago, and it was about how ridiculous it was that Sam Altman was getting away with turning OpenAI from a nonprofit into a for-profit. That it was such a scam, that if everybody was allowed to do this, everybody would do it, that it has to be totally illegal what Sam's doing and it shouldn't be allowed at all. And that seemed really, really convincing in the moment. And then about half an hour later, I thought to myself: actually, it's been such a horrifically mismanaged place at OpenAI.
With this preposterous nonprofit board they had, nobody would do this again, so there actually isn't much of a moral hazard from it. But yeah, whoever I talk to, I find very convincing in the moment.

Will that space just get commoditized? I mean, do you see a path to monopoly there?

Well, again, this is where, if attention is all you need, you need to pay attention to who's making money. It's Nvidia. It's the hardware, the chips layer. And that's just not what we've done in tech for 30 years.

Are they making 120% of the profits?

I think everybody else is collectively losing money. Everyone else is just spending money on the compute. So it's one company that's making money; maybe a few other people are making some, I assume TSMC and ASML, but I think everyone else is collectively losing money.

What do you think of Zuckerberg's approach, to say: I'm so far behind, this isn't core to my business, I'm going to open source it? Is that going to be the winning strategy? Can you handicap that for us?

Again, my big qualification is that AI feels uncomfortably close to the bubble of 1999, so we haven't invested that much in it, and I want to have more clarity before investing. But the simplistic question is: who's going to make money? A year or two ago, in retrospect, Nvidia would have been a good buy. At this point it's too obvious that they're making too much money, and everyone's going to try to copy them on the chip side. Maybe that's straightforward to do, maybe it's not. But if you want to figure out the AI thing, you should not be asking this question about Meta or OpenAI or any of these things. You should really be focusing on the Nvidia question, the chips question.
And the fact that we're not able to focus on that tells us something about how we've all been trained. Nvidia got started in 1993, which I believe was the last year when anybody in their right mind would have studied electrical engineering over computer science. In '94, Netscape takes off. It was probably a really bad idea to start a semiconductor company even in '93, but the benefit is that no one would come after you: no talented people started semiconductor companies after 1993, because they all went into software.

So how strong is their monopoly power?

I think it's quite strong, because of this history I just gave you, where none of us know anything about chips. But the risk is this: if attention is all you need, the qualifier is that when you get started, as an actress, as a startup, as a company, you need attention, and then it's desirable to get more. At some point, attention becomes the worst thing in the world. There was the one day earlier this year when Nvidia had the largest market cap in the world, and I do think that represented a phase transition. Once that happened, they probably had more attention than was good for them.

Hey, Peter, as we wrap here: your brain works in a unique way, you're an incredible strategist, and you think very differently than a lot of the folks we get to talk to. With all of this, are you optimistic for the future?

Man, I always push back on that question. I think extreme optimism and extreme pessimism are both really bad attitudes, and they're somehow the same thing. Extreme pessimism: there's nothing you can do. Extreme optimism: the future will take care of itself.
So if you have to have one, you probably want to be somewhere in between, maybe mildly optimistic, mildly pessimistic. But I believe in human agency, that it's up to us, and that it's not some winning lottery ticket or astrological chart that's going to decide things. Human agency is an axis that's very different from optimism or pessimism.
What an amazing place to end: extreme optimism and extreme pessimism, they're both excuses for laziness. Ladies and gentlemen, give it up for Peter Thiel. Thank you. Thank you, Peter.
Politics, Economics, Technology, Artificial Intelligence, China-Taiwan Conflict, Education