ENSPIRING.ai: I'm not afraid. You're afraid - Tristan Harris - Nobel Prize Summit 2023

In this thought-provoking presentation, the speaker expands on the idea that social media and technology have created a kind of climate change of culture. They highlight how God-like technology, such as social media algorithms and artificial intelligence, has rewired human attention and information flow, likening it to the first contact between humans and AI. The speaker examines the misalignment between our paleolithic brains and modern technology, drawing attention to how the increasing complexity of our problems is outpacing our institutions' ability to respond effectively.

The speaker argues that society has failed to recognize the real problem, which lies not just in content or misinformation but in the engagement-driven business models that fuel social media companies. These models have fostered addiction to attention and amplified extreme rather than moderate voices. As technology continues to advance and AI capabilities grow, the speaker calls for a systemic upgrade to our institutions and a more humane interface that aligns with our innate human needs and limitations.

Main takeaways from the video:

💡 The speaker stresses the urgency of aligning technology's rapid evolution with human and institutional capacity.
💡 Emphasizes the necessity of upgrading medieval institutions to match the complex nature of modern problems, beyond just immediate harms.
💡 Calls for addressing the dangerous race dynamics in technology, focusing on governance and the importance of matching power with wisdom.

Key Vocabularies and Common Phrases:

1. sociobiologist [ˌsoʊsioʊbaɪˈɑlədʒɪst] - (noun) - A scientist who studies the biological basis of all forms of social behavior in both animals and humans. - Synonyms: (ethologist, behavioral scientist, anthropologist)

We often start our presentations with this quote by E.O. Wilson, the Harvard sociobiologist, who said, the real problem of humanity is that we have paleolithic emotions or brains, medieval institutions, and God-like technology.

2. paleolithic [ˌpeɪli.oʊˈlɪθɪk] - (adjective) - Relating to the early phase of the Stone Age, lasting about 2.5 million years, when primitive stone tools were used. - Synonyms: (ancient, prehistoric, primitive)

The real problem of humanity is that we have paleolithic emotions or brains, medieval institutions, and God-like technology.

3. misalignment [ˌmɪsəˈlaɪnmənt] - (noun) - Incorrect or improper alignment, or a lack of coordination between entities. - Synonyms: (discrepancy, discord, disarrangement)

This is really important because as we're about to go into second contact with AI, with large language models, we haven't really fixed this first misalignment. We lost.

4. supercharge [ˈsuːpərˌtʃɑːrdʒ] - (verb) - To greatly amplify or boost the capability or intensity of something. - Synonyms: (boost, intensify, enhance)

That's going to supercharge so many of the things you've been hearing about and what I'll kind of talk about today.

5. doom scrolling [duːm ˈskroʊlɪŋ] - (noun) - The act of spending an excessive amount of screen time devoted to the absorption of negative news. - Synonyms: (consuming negative media, absorbing bad news, negative browsing)

In first contact with social media (I hope you can see these slides here) we have information overload, doom scrolling, a loneliness crisis...

6. polarization [ˌpoʊlərɪˈzeɪʃən] - (noun) - The division into two sharply contrasting groups or sets of opinions or beliefs. - Synonyms: (divergence, division, split)

polarization, bots and deepfakes, and basically the breakdown of our shared reality.

7. engagement [ɛnˈgeɪdʒmənt] - (noun) - The involvement or commitment to a cause or purpose, often referring to how users interact with content online. - Synonyms: (involvement, participation, interaction)

Because as other speakers have talked about today, the race to maximize engagement.

8. beautification [bjuːˌtɪfɪˈkeɪʃən] - (noun) - The process of making something more beautiful or attractive. - Synonyms: (adornment, enhancement, embellishment)

If Snapchat adds beautification filters, Instagram will lose if they don't also add beautification filters.

9. inflate [ɪnˈfleɪt] - (verb) - To cause something to increase dramatically in size, amount, or value. - Synonyms: (amplify, enlarge, magnify)

So it's a race to who can inflate your ego and give you the most instant sharing as fast as possible.

10. diffuse [dɪˈfjuz] - (adjective) - Spread out over a large area; not concentrated. - Synonyms: (scattered, dispersed, decentralized)

But it doesn't deal with diffuse, chronic, and long term cumulative harms.

I'm not afraid. You're afraid - Tristan Harris - Nobel Prize Summit 2023

It's great to be here with all of you, and so great to see the broad range of topics that have been covered, because it's really so much more than just misinformation. We often think about social media as creating the climate change of culture. And what I hope to do is broaden and maybe zoom out a bit on the collective effects of technology and humanity. We often start our presentations with this quote by E.O. Wilson, the Harvard sociobiologist, who said, the real problem of humanity is that we have paleolithic emotions or brains, medieval institutions, and God-like technology. And it's so clear. We repeat it in every presentation because I just feel like it so quickly summarizes the feeling that we have that our brains are not matched with the way that our technology is influencing our minds, which is what you've been hearing about all day today. And so oftentimes we talk about the alignment problem in AI: how do we align AI with humanity? Well, how do we align our brains with our institutions, having a steering wheel over something God-like that's moving a million times faster, especially as we have large language models, AIs that are going to be moving way faster? Who here is sort of feeling the explosive rate of AI moving in the world? Right. That's going to supercharge so many of the things you've been hearing about and what I'll kind of talk about today.

Now, many of you here have probably seen The Social Dilemma. How many people here have actually seen The Social Dilemma? Okay, good. Maybe a good third of you. The Social Dilemma was a Netflix documentary I was a part of that was seen by more than 125 million people in 190 countries and 30 languages, about the effects that social media had on humanity. It was rewiring the flows of attention and information in our society. And what we actually talk about is what people might have missed: that was actually first contact between humanity and AI. People say, how is AI going to affect humanity? Well, we actually already had an AI that rewired and was misaligned with humanity. Because when you swipe your finger on TikTok, or swipe your finger on Twitter, or swipe your finger on YouTube, you activated a supercomputer pointed at your brain to calculate, based on what 3 billion other human social primates have watched, seen, or looked at, what is the perfect next thing to show you when you swipe. That was first contact. How did it go?
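(A purely illustrative aside for readers: below is a minimal Python sketch of the kind of engagement-maximizing recommender the speaker describes, where every candidate item is scored by predicted engagement and the top scorer is served on each swipe. All field names, weights, and numbers are invented for this sketch; it does not depict any real platform's system.)

```python
# Illustrative only: a toy engagement-maximizing ranker. All fields and
# weights are invented; no real platform's algorithm is shown here.

def engagement_score(item: dict, user_topics: set) -> float:
    # Predicted engagement, learned from how other users behaved with this item.
    base = item["avg_watch_seconds"] * item["click_through_rate"]
    # Boost items resembling what this user already lingered on.
    overlap = len(user_topics & item["topics"]) / max(len(item["topics"]), 1)
    return base * (1.0 + overlap)

def next_item(candidates: list, user_topics: set) -> dict:
    # "The perfect next thing to show you when you swipe" is simply
    # whatever maximizes predicted engagement, not what is true,
    # moderate, or good for the user.
    return max(candidates, key=lambda item: engagement_score(item, user_topics))

candidates = [
    {"id": "cooking-tips", "avg_watch_seconds": 12.0,
     "click_through_rate": 0.08, "topics": {"cooking"}},
    {"id": "outrage-clip", "avg_watch_seconds": 45.0,
     "click_through_rate": 0.21, "topics": {"outrage", "politics"}},
]
print(next_item(candidates, user_topics={"politics"})["id"])  # -> "outrage-clip"
```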

So, in first contact with social media (I hope you can see these slides here) we have information overload, doom scrolling, a loneliness crisis, influencer culture, the sexualization of young kids (which angle makes me look the best?), polarization, bots and deepfakes, and basically the breakdown of our shared reality. Collectively, these effects are something like the climate change of culture. It's so much more than misinformation. If we just had good information in our information environment, we would still have doom scrolling. If you just had good information in our information environment, you'd still have a loneliness crisis, because people would be by themselves on a screen, scrolling by themselves, and that would affect the belonging dynamics that Hari just spoke to. This is really important because as we're about to go into second contact with AI, with large language models, we haven't really fixed this first misalignment. We lost.

How did we lose to this AI? How did we lose in this first contact? Well, what were the stories we were telling ourselves? It seems really aligned, right? Social media is going to connect people with their friends. We're going to give people what they want. We're going to show people the most personalized ads that are relevant to them, only the things that they would want to buy. These stories that we were telling ourselves are true, but they somehow hid what was underneath all of it, which, as other speakers have talked about today, is the race to maximize engagement. Because how much have you paid for your TikTok account in the last year? How much have you paid for your YouTube account in the last year? How much have you paid for your Facebook account in the last year? Zero. How are they worth a trillion dollars in market cap? People say it's your data, but they're actually also selling your attention.

So how do I get your attention? We add slot machine mechanics: pull to refresh. How many likes did I get? Did I get more this time than last time? I just checked my email 5 seconds ago, but I'll check it again. It's not enough just to get your attention. If I'm a social media company, my goal is actually to get you addicted to getting attention from other people. Because a person who's validation-seeking and wants attention from other people is more profitable than someone who does not care about attention from other people. So how do we do that? We add beautification filters, right? And what we call the race to the bottom of the brainstem: if Snapchat adds beautification filters, Instagram will lose if they don't also add beautification filters. TikTok was found to automatically add a beautification of between one and five percent without even asking people, because we're all users. Mirror, mirror on the wall, which makes me look best of all. And we use that app right now.

These design decisions that, again, other speakers have been talking about all day also led to the creation of this. Because a button that instantly retweets or reshares content quickly is good at creating attention addicts, right? Because now TikTok is literally competing with Instagram. If you post this video on Instagram, Instagram offers you 100 views on average, but if you post it on TikTok, you get 1,000 views on average. If you're a teenager, where are you going to post your next video? TikTok, the one that gives you the more reach. So it's a race to who can inflate your ego and give you the most instant sharing as fast as possible. And we've all seen the effects of that: fake news spreads six times faster than true news. And as other people have seen (this is the study from More in Common), this has led to a massive over-representation, a funhouse mirror, of the extreme voices relative to the moderate voices.

With this instant sharing, what's the difference between a moderate voice on the Internet versus an extreme voice? Do extreme voices post more? I'll just say it: they post more often, right? And moderate voices post infrequently. That's the first layer of the double whammy. The second is that when someone says something extreme, it goes more viral than when someone says something more moderate. So even though there's a very small number of very extreme voices out there, social media takes that, like, 5% of the population and just spreads it out and stretches it out over the whole canvas and movie screen of humanity. And you run society through that funhouse mirror for about ten years and you quickly end up in a very distorted world.
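(Another illustrative aside: a tiny simulation of the double whammy just described. The rates below are made-up numbers, not measurements; the point is only the shape of the outcome, where a small extreme minority that posts more often and spreads further ends up dominating the feed.)

```python
import random

# Illustrative only: made-up posting and virality rates, not measurements.
random.seed(0)

POPULATION = 1000
EXTREME_SHARE = 0.05                                    # ~5% of users are extreme voices
POSTS_PER_DAY = {"extreme": 10, "moderate": 1}          # extreme voices post more often
RESHARES_PER_POST = {"extreme": 6.0, "moderate": 1.0}   # and each post spreads further

impressions = {"extreme": 0.0, "moderate": 0.0}
for _ in range(POPULATION):
    kind = "extreme" if random.random() < EXTREME_SHARE else "moderate"
    impressions[kind] += POSTS_PER_DAY[kind] * RESHARES_PER_POST[kind]

total = sum(impressions.values())
for kind, count in impressions.items():
    print(f"{kind}: {count / total:.0%} of feed impressions")
# With these toy numbers, the ~5% extreme minority supplies roughly
# three quarters of everything the feed shows: the funhouse mirror.
```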

I don't know if you can read this. This is a thing that says social media summarized elegantly in two tweets. The top tweet says, and I'll just zoom in really quickly: the much-vaunted Pandora Papers revealed that the patriotic Zelenskyy was storing payments from his top funder, the Israeli Igor Kolomoisky, in offshore accounts. And this person was also a funder of a neo-Nazi battalion. And notice that that first tweet got 8,000 retweets. The tweet underneath it says: I was the editor and co-reporter of that story. You can look it up, you've completely twisted it. There's no link between this money and anything to do with it. And that got 58 retweets. We could all just go home, because that kind of summarizes what the entire information environment looks like. And if that's the asymmetry of power that we have granted to every actor in the information ecosystem, we have been living in this funhouse mirror. And so we got to a world that looks like this, the climate change of culture. And we have another talk out there that I highly recommend.

Folks, I have a short amount of time today, but you should check out this talk we recently gave called The AI Dilemma, which talks about this second contact, which, unfortunately, I won't have time to talk about today. So what's the solution to this problem? Well, is it content moderation? Is it fact-checking? Is information true versus false? Well, what about all the sort of salaciousness that has partial truths that are spun? What if we vilify the CEOs? One of the things that I think we really need to get good at is recognizing that the complexity of our problems has actually exceeded our institutions. The complexity of our world is going up. Pandemics: what's the right way to respond to a pandemic? What's going on with COVID? Nuclear escalation. How do we deal with energy crises, debt crises, debt-to-GDP ratios, misinformation? The complexity of all these issues also compounds combinatorially, right? And then, simultaneously, there is our ability to respond to that complexity, with both our brains and our institutions. Right? So our paleolithic brains and our institutions collectively represent humanity's ability to respond to the complexity of our problems.

But what is runaway technology adding to this? Well, runaway technology, AI, steepens the curve of complexity, because if synthetic biology was a threat before, AI supercharges the threat of synthetic biology. If misinformation was a threat before, then generative media supercharges the threat of misinformation, because people can publish. Just yesterday, two days ago, there was a fake image of the Pentagon being bombed. If I wanted to cause a bank run in the United States, if I'm Russia or China, I could easily just create photos of lines of people standing in front of Wells Fargo, Chase Bank, et cetera. I can devalue the US dollar like that. There's no Department of Homeland Security or Patriot missile defense system that's going to stop someone from doing something like that. And there's a million more examples like that. By the way, the reason I'm going here is that I think we are often thinking too narrowly about how to solve these problems.

We're thinking about content moderation and fact-checking, when really it's about how we get wisdom: how do we have the bottom line, our brains and our institutions and our social trust and our social fabric, actually match the speed and complexity of the evolutionary curve of technology and our issues? Think of it this way. If your immune system has a slower evolutionary pace and a virus has a faster evolutionary pace, which is going to win, your immune system or the virus? The virus. Our immune system is our institutions, our governance, our regulation, and that's why we have to upgrade those institutions. What this means is that the sense-making and choice-making of humanity, both individually and in terms of our regulation, in terms of how our institutions act, has to match the complexity of the world. And I wanted to reframe what we're here to do, which is to also say that, in addition to dealing with the information ecosystem, we need an information ecosystem in which, collapsing these lines together, the quality of our sense-making and choice-making matches the complexity of the choices that we face.

And one way, maybe, of saying that, to go back to E.O. Wilson, is that we need to embrace our paleolithic brains, upgrade our medieval institutions, and bind the race dynamics of the God-like technology. And just to give some subtle examples of this, what do we mean by embrace our paleolithic brains? You've been hearing from people today talking about confirmation bias, Guisem's talk about the social rewards that we get, belonging. That is embracing what it means to be human. You know, our name, the Center for Humane Technology, is named as such because my co-founder Aza Raskin's father was the inventor of the Macintosh project at Apple. He wrote a book called The Humane Interface. And the word humane means having an honest, reflected view in the mirror of how our brains really work. That's how he came up with and was conceiving of the Macintosh, because the personal computer, with a blinking green cursor and a command-line interface, was a very non-ergonomic match for our brains.

But the Macintosh, with the mouse and the menu bar and drag-and-drop and icons, was a much more humane interface, because it embraced how we really work on the inside. But now what we need to do is apply those kinds of insights: our brains have confirmation bias; we need to feel belonging rather than loneliness. So how would technology embrace these aspects of what it means to be human, and design in a richer way, if we were to get rid of the engagement-based business models that sell our attention to advertisers? When you open up Facebook, instead of ranking which content to show you, it could rank what things you can do with different communities around you in your physical environment. They could be supercharging the re-instantiation and the re-flowering of the social fabric. Instead of showcasing virtual Facebook groups, they could be showcasing actual physical communities where people could spend time with each other. Because as many people have found out, when you spend physical time with people face to face, it automatically has healing properties.

There's building trust, building connection. Upgrading our medieval institutions: one of the limits of our institutions right now is that they deal with acute harms, discrete harms. This product hurt this person. But it doesn't deal with diffuse, chronic, and long-term cumulative harms. Think climate change. Think forever chemicals in our atmosphere. Think the climate change of culture: slow-rolling increases in mental health issues, addiction, loneliness, et cetera, in society. We need institutions that deal with those long-term, cumulative, diffuse issues rather than just the acute issues. Liability, for example, is something that we need for AI companies, and that's coming down the pipe. But liability only deals with an acute issue, like someone died while using a car or using a product.

And in terms of binding our God-like technology, we need to actually recognize when there's a race: if I don't do it, I lose to the guy that will. If I don't use social media for my nonprofit to try to boost up my influence, I'll just lose to the other nonprofits that boost up their influence with social media. So it becomes a race to create this virtual amount of influence online. Same with AI: if I don't deploy OpenAI to as many people as possible, and deploy AI into Snapchat... Because now every kid on Snapchat has a My AI bot at the top, a friend who will always answer your questions and will always be there when their other friends go to bed at 10:00 p.m. Snapchat is going to win if they do that, versus the other companies that don't. So as we are deploying this God-like technology, we need to actually recognize: don't be upset at bad guys, bad CEOs; get upset at bad games, bad races. And that would be another form of upgrading our institutions.

And just to close, a quote that we reference all the time is that you cannot have the power of gods without the wisdom, love, and prudence of gods. If you have more power than you have wisdom or responsibility, that by definition means you are going to be creating effects in society that you can't see, because you have more power than you have awareness of what those effects are. Think of it this way: people are now aware of things like a biosafety level four laboratory, from the Wuhan Institute of Virology. If you have biosafety level four capabilities, where you're developing pathogens, you need biosafety level four safety practices. As we're developing AI, we are now inventing biosafety level ten capabilities, but we haven't even invented what biosafety level ten practices would need to be. And one of the hardest things I think humanity is going to have to do, in looking in the mirror, is thinking about how we bind and limit the power that we're handing out to the appropriate level of wisdom and responsibility. The biggest mistake we made with social media was handing out God-like powers of instant resharing to everyone, which can be very helpful in certain cases, but it's helpful specifically because it's bound to wisdom when it's going well. And when it's not going well, it's because that power is not bound to wisdom. So those are some thoughts I wanted to leave you with today. Thank you very much.

Artificial Intelligence, Harvard, Technology, Social Media Impact, Cultural Climate Change, Human-Technology Interaction, Nobel Prize