ENSPIRING.ai: "AI and the Election - Deepfakes, Chatbots, and the Information Environment," Professor Andrew Hall

In this compelling presentation, the speaker explores the intersection of artificial intelligence (AI) and democratic processes, particularly in the context of elections. Historical context is provided by tracing the impact of technological disruptions on information dissemination and democracy, from the invention of the printing press to modern innovations like radio, television, and the internet. As these technologies have historically transformed societal communication, the current AI evolution presents new challenges and opportunities for adapting how we understand and engage with information.

The speaker examines the current landscape of AI-generated content, such as deepfakes, and its potential influence on voter perception and misinformation in the political sphere. Despite predictions of widespread disruption, tangible examples remain limited, sparking a discussion of whether this is due to public skepticism or the nascent stage of AI's integration into political tactics. The possibility of AI being leveraged for post-election narratives around election fraud is highlighted, alongside concerns about verifying truth in an era where content authenticity is routinely questioned.

Main takeaways from the video:

💡
Technological advancements have historically reshaped communication landscapes, influencing democratic processes.
💡
Present AI developments, including deepfakes, pose risks of misinformation but are currently met with public skepticism.
💡
Opportunities exist for using AI to enhance democratic engagement, such as through improved information synthesis and accessibility.

Key Vocabulary and Common Phrases:

1. paradigm shift [ˈperəˌdaɪm ʃɪft] - (noun) - A fundamental change in approach or underlying assumptions. - Synonyms: (radical change, transformation, revolution)

And it seems like we're just at the dawn of a kind of paradigm shift in how we create content, how we distribute that content across the world and how we consume it.

2. democratization [dɪˌmɑːkrətaɪˈzeɪʃən] - (noun) - The action of making something accessible to all people; transition to a more democratic political regime. - Synonyms: (liberalization, opening, access)

It allowed for the democratization, in a sense, of information in a way that had previously not really been possible.

3. yellow journalism [ˈjɛloʊ ˈdʒɜːrnəlɪzəm] - (noun) - Journalism that is based upon sensationalism and crude exaggeration. - Synonyms: (sensationalism, tabloid journalism, scandal-mongering)

This is a very famous photo of what is now often referred to as yellow journalism.

4. populist [ˈpɑːpjəlɪst] - (adjective) - A political approach that strives to appeal to ordinary people who feel that their concerns are disregarded by established elite groups. - Synonyms: (popular, people-oriented, anti-establishment)

He was a populist, he was very, very antisemitic.

5. salient [ˈseɪliənt] - (adjective) - Most noticeable or important. - Synonyms: (prominent, conspicuous, striking)

I think it's sort of fair to say we haven't. I would argue that this one is probably the most salient one we've had so far.

6. bifurcated [ˈbaɪfərˌkeɪtɪd] - (adjective) - Divided into two branches or parts. - Synonyms: (split, divided, branched)

Our current media environment that is so bifurcated.

7. liar's dividend [ˈlaɪərz ˈdɪvɪˌdɛnd] - (noun) - The advantage gained by a person or group from denying allegations, leading to doubt in public perception. - Synonyms: (benefit of the doubt, skepticism advantage, disbelief payoff)

There's this idea of what's called the liar's dividend.

8. vigilant [ˈvɪdʒɪlənt] - (adjective) - Keeping careful watch for possible danger or difficulties. - Synonyms: (watchful, alert, attentive)

People are increasingly skeptical. They're learning perhaps that they should be vigilant.

9. speculation [ˌspekjəˈleɪʃən] - (noun) - The forming of a theory or conjecture without firm evidence. - Synonyms: (guesswork, theorizing, assumption)

I think we're going to see a lot more stuff like that, particularly if the election is close and there's this window where we're all sitting around speculating about who's going to win.

"AI and the Election - Deepfakes, Chatbots, and the Information Environment," Professor Andrew Hall

Okay, so I'm going to talk to you about AI and the election. Obviously, this is a very timely topic, and this is a special place in the world to be talking about it. And I think it's fair to say, and I suspect you would all perhaps agree with me, that it seems like we're just at the dawn of a kind of paradigm shift in how we create content, how we distribute that content across the world and how we consume it. And that could have really big implications for how our democracy works. So that's what I want to talk to you about today. I am not going to provide you with all the answers, which I certainly don't have, about how to make this work well. But to start, I thought it would be helpful to step back and think about some historical context, because it turns out AI is very special, and some things about it will be completely new and different and alien to us. But in other ways, what we're experiencing is actually a repetition of something that's happened over and over again in human history, as our information ecosystems have been disrupted by new technology.

So I want to start with this really cool picture. This is actually a depiction of the first ever volume of the Encyclopedia Britannica being printed. Some of you may remember back when we used to buy the physical volumes of it. I had them as a kid. And the printing press was an incredibly disruptive technology when it first came out. Many of you may be familiar with this history, but when we developed the ability to repeatedly print books at scale, it allowed for the democratization, in a sense, of information in a way that had previously not really been possible. And pretty quickly, people started writing things that people in power at the time didn't like. And among other things, many historians argue, this actually directly caused the Reformation and massive changes to the structure of the Catholic Church in Western Europe. It also came along with an immense period of turmoil and violence, and with efforts to control the printing press and to restrict the printing of ideas. And it came along with the creation of a bunch of fake nonsense, written down and printed into books to trick people. And as you can imagine, with a new technology, this new thing, the printed book, it took a while for people to adapt.

Those who knew how to read at the time took a while to understand: if I read this, how do I know if it's true or not? How do I know if the name that claims to be the author of this book was really the person who wrote it? How do we suss these things out? It was incredibly disruptive. It turned out (I'm a professor, so I guess I'm kind of biased) to be one of the most important, glorious, amazing technologies ever developed. Hopefully you all agree with that. But adapting to it was not easy and did not always immediately lead to good things. Now fast forward a couple hundred years. Here is another very challenging, but ultimately, I think, quite positive technological disruption. This is the late 19th and early 20th century. With industrialization, we are now able to print things at much higher scale and much more cheaply, not only because we took the printing press and made it into a piece of industrial machinery, but also because we learned how to make paper extremely cheaply, and ink and so forth.

And so now we are able to print at a much higher volume and frequency than back when Gutenberg was printing one Bible every once in a while. And one of the first things that happened is people started pumping out what we might today call propaganda. They started pumping out their ideas to the public. In particular, rich people bought newspapers and used them to push out their ideas, for better or worse. This is a very famous photo of what is now often referred to as yellow journalism, which many of you may be familiar with. In particular, this was the day the Maine sank. And a certain group of people who were particularly keen on going to war with Spain said that it had been sunk by the Spanish. And this was pushed very hard in these newspapers, owned by people who had that interest. To this day, at least according to Wikipedia when I did my research on this, we don't actually know what happened. Many people at the time thought that the sinking of the Maine was an accident, but we're not actually sure. To this day, we don't really know.

But one way or another, it's quite clear that we were in an era where the mass printing of newspapers was creating weird things in our information environment. And once again, that was ultimately very good. The development of the mass newspaper (again, I'm a professor, maybe I'm biased) I think was really, really good. But it took us a while to adapt. And this, in response to yellow journalism, was the period of time where we actually developed modern journalistic practices, including journalism schools, the idea that you cite sources, that you have off-the-record conversations. All those norms developed partially in response to yellow journalism, again because of this disruptive new technology. Does anyone know who this is? I'm going to ask a few questions during this talk. Usually when I give this talk, no one knows who this is, but in this audience, I bet some people do. So wait, someone can bring you a mic. I would love to hear an explanation of who this is. Maybe over here.

Yeah. So please tell us who this is and why I've put him here. Give it to somebody else. I think it was Father Coughlin, who used to put out the most miserable journalism you can ever imagine. Yes. So he was a radio phenomenon at the dawn of the radio era. He was sort of the first political figure. He was a priest, but he had a political valence. He was the first huge mainstream radio personality. He had 30 million weekly followers in the US at a time when the population of the US was 120 million. So he had an unbelievable megaphone, which he generally used to espouse views I think we would all not be super happy with today. He was a populist, he was very, very antisemitic. This is the 1930s, and he was pro-Hitler, and he had 30 million weekly listeners. He was eventually forced off the radio by Roosevelt through legal action, which is itself a very interesting topic.

The radio was another extremely disruptive technology when it came out. And authoritarian leaders, Hitler, Mussolini, and others, used the radio very aggressively, again, to directly communicate to people and make a particular argument or try to shape the information environment in a certain way. Today, radio, I don't know, it's pretty good. It's a mix, I think. But again, we had to adapt, and we've developed institutions and norms and laws around how radio works, largely in response to what happened in the 1930s. This I am sure you're all familiar with. This is the first ever televised presidential debate. And again, I'm not going to dwell on this point, but television was yet another hugely disruptive change to how American voters learned about their politicians and heard what they were talking about, and we had to learn a whole new set of behaviors around how we judge people when we see them on TV. And people point to this debate in particular as being unfair or something, because the argument that's made is basically that JFK wore makeup and looked really good, and Nixon was all sweaty.

And there's often this sort of irrationality kind of implicit in the argument, that basically people shouldn't judge based on those things. But because television was new and this was the first debate ever, that's what they focused on. And we had to learn over time, and we had to develop a system for orchestrating things on television in a more informative way. You can be the judge of whether you think that's worked well or not today. Our most recent televised presidential debates haven't been super impressive to me. Okay, this is a great one. This is one of the first New York Times issues shown on the Internet, the World Wide Web, as it was called. I love that the headline here is "Europe betting on self regulation to control the Internet." It's just a great time capsule. And again, this was a very disruptive moment. I told you that after yellow journalism there had been the rise of this very professionalized journalistic class, these fancy newspapers, and now the Internet was a huge threat to those newspapers.

And as I'm sure many of you are familiar with, it's been a very lengthy process for newspapers to figure out how to work with the Internet, how to make money on the Internet in a place where users don't want to pay for things, and so forth. So this was another really big disruption. Then we had what's now referred to as the Arab Spring. And this was sort of the dawn of the second Internet era, where the first Internet era was all about people making websites and newspapers moving online and things like that, and the effects on the information environment actually seemed not that strong. Then we had social media, which really further strengthened this phenomenon that you can trace all the way back to the printing press, of allowing more people to speak directly to large groups of people. And in the case of social media, it was allowing everyday people to do this very, very cheaply with very little effort. And at first we, we being people here in the US who think in a very particular way about what free speech looks like, what information looks like, what democracies look like, experienced this as a very, very positive thing.

And in particular, the Arab Spring was held up as this incredibly optimistic moment where social media allowed people who had been censored in their home countries not only to speak out about officials who they didn't think were doing a good job, but to coordinate with one another to oppose and depose them. And this happened in a number of countries. And social media was sort of credited with helping to improve the information environment. It sounds funny to say this now, in a way. And so it was a time of extreme optimism. Of course, that has since flipped. Right? And now we feel, particularly in environments where maybe we think the status quo information environment was pretty good, that social media is actually encouraging the opposite. It's kind of like with the radio, as I talked about before: it's now allowing different people to try to control the information environment for their own ends. Okay, that was a very quick, surface-level history of an incredibly deep topic, which is how technology has disrupted our information environment and thereby changed our democracy.

Now, I want to bring us to the current age and the evolution that we're just at the beginning of. And I want to do it with this picture, which to me feels like it happened 100 years ago. I was astonished to discover, when I prepared for this talk, that this was six months ago. Can anyone tell me what this picture is and why I put it there? So it's a fake picture of Princess Kate with her children. She had disappeared for a while. People couldn't find her. So the palace produced this photo, but then people analyzed it and figured out that it was fake. Yeah, yeah.

And the AP pulled it back. The AP announced: we ran this picture, and we've now determined we shouldn't have run it. And the reason I want to bring this up is that there are a couple of features of this that I think are really interesting for thinking through how we're already seeing our information environment change. One thing is what caused the hysteria. First of all, people were genuinely very worried and wondering where she had been, and so this picture was a big deal when it came out. And then when the AP pulled it back, the word they particularly used was that it had been "manipulated," whether accidentally or intentionally, it triggered a suspicion people now have that things can be completely faked in a way that would have been very hard in the past. (The history of fake images, by the way, goes back hundreds and hundreds of years, but there's a sense that it's easier to do now, which leaves us quite suspicious.) And what's weird and notable about this particular instance is that this photo's not really fake. She was, in fact, alive. And she did, in fact, take probably 100 pictures with her three kids.

As someone with four kids, I can tell you one picture is never going to capture all of them smiling. See how all the kids are looking at the camera and smiling? That never happens. And so what actually happened with this picture is they took 100 pictures and then they used Photoshop to make the kids all look like they were smiling in one picture. It was a completely anodyne manipulation. And the palace was totally unprepared for the phenomenon that occurred when the AP called it manipulated, and many people interpreted that to mean: this is a deep fake, she's not even alive right now. And so I want to make two points from that. One is that the biggest effect from these deepfakes may not be that we see deepfakes, though that may well happen. It's that we start to believe things are fake even when they're not. And second, that the potential solution people argue for, of watermarking or labeling content as AI-generated, is not what's going to get us out of this mess. Because this would be labeled as AI-generated, and you would probably reasonably respond by thinking that means: oh my God, the palace just released a completely fake image of this really important person.

But no, it's AI-generated in a way that's not actually important or interesting in some sense. And so the nature of the problem, and of the solution to the problem, is quite different than what we might have thought at the beginning. Okay. That is the motivation and the background. I will just tell you extremely briefly about myself. I already had such a great introduction. So I work here at the GSB. I study democracy and tech. I also work in tech, so I have, I think, a very particular perspective from going back and forth between studying these issues around democracy and technology and working with tech companies to try to fix them. I'm also a very proud alum of the class of '09. Apparently it's our reunion too. At the GSB I help to co-chair the AI committee, and I'm one of the leaders, along with Susan Athey and several of our colleagues, of the business and beneficial technology pillar in BGS. So I spend a lot of time thinking about these issues. I'm going to quickly run down three things, and I'm going to make sure to leave a bunch of time for questions at the end. I also strongly encourage you to interrupt at any time.

Please ask questions. This is always better if it's interactive. Just please wait for the microphone to get to you, because otherwise people on video can't hear you. Okay, so I just want to run through three things quickly with you. Where are the deepfakes? How are people adapting already? I told you, historically we see these adaptations occur; how are we seeing that so far? And then, what can we start thinking about trying to do to make these things work together better? Okay. Where are the deepfakes? There actually haven't been very many. People, I think, kind of expected we'd be awash with them by now. I think it's sort of fair to say we haven't. I would argue that this one is probably the most salient one we've had so far. And it's kind of, I would say, pretty low down the list in terms of its impact. So just a quick recap on this one for you. During the hurricane in Florida (I actually don't know the exact details), someone circulated this photo saying this was a girl in the floods in the hurricane, which wasn't in itself inherently a political statement.

But then it got taken to be this thing about how the Biden administration and Kamala Harris aren't doing enough to protect our children in this hurricane, and so it became politicized. And it was just kind of, forgive the use of this hilarious word, a nothingburger, in the sense that it is a fake image, but we know there were in fact people affected by the flooding. This photo didn't reveal any new info. If you thought this was real, you didn't really learn anything new that you didn't already know. Of course there were children who were affected by the flood; this happens not to be one of them, this one's fake. But it didn't give you any new "fact" that you now believe that's not true and that matters, certainly not one that could matter for your vote. And so actually we could debate hurricane relief. Did the administration do a good job? Did it do a bad job? People might be quite misinformed about this, but this picture is not really driving that. And it's sort of inconceivable to me how this picture could have any impact by itself on the election. And this is kind of, I think, the flavor that deepfakes have had so far.

They don't seem very salient. They don't seem like they've been built in ways that would have a big effect on the election. They often seem to be either self-evidently satirical or ineffective, in the sense that they're not going to shift your mind in an important way on your vote. So that seems really surprising. A lot of people said this was going to be the cycle. I will note people started saying this in 2018, so we've been predicting this for a while and it has yet to occur. But people really thought that now, with the explosion of ChatGPT and other AI tools, we would really be seeing a lot of it by now. And so I really want to take a moment and figure out: why aren't we? And what does that tell us about what's going on? And I want to be very upfront: I don't know the answer. And the answer may be that they're about to come. So let's see. I'm going to offer two takes. The first possibility is people have just recognized that so far, with the technology as it exists today and with our information environment as it exists today, they may just not have a big impact, and so they're not really worth doing. It's really, really hard in America today to change someone's mind. Really hard.

And that might be a bad thing in some ways, but when it comes to AI, maybe that's a saving grace. We have studied this in a number of settings where the people doing the studies or paying for the studies, campaigners, have really strong incentives to figure out how to change people's minds. And we just consistently find it's incredibly hard to do. Incredibly hard to do. Most Americans come with a pretty strong set of views on the way the world works and how they think it should work. And it's pretty hard to move them around by showing them stuff online. So there have been huge randomized studies of political ads online, or political ads in the real world, or whatever you want to do with political information. And it just generally has very, very small effects on people's attitudes, because they already have really strong attitudes. And in a world where you're already being bombarded with other types of information about this election cycle, about the hurricane relief, for example, you're already getting it from 1,000 directions. One deepfake might have trouble cutting through that noise. In addition, people may already be primed, because this has been such a big discussion, and at this point a quite large fraction of Americans have actually used ChatGPT or a similar tool themselves.

People may be primed already not to think these things are real when they see them. And I'm going to show you some evidence for that in a moment. Now, I want to contrast that with this alternative view that I would definitely not rule out, which is that we just haven't seen the big ones yet. Because how could you cut through the noise? The fake kid in the hurricane is not going to cut through the noise. What would cut through the noise? A super compelling video, claiming to be from 20 years ago, that reveals Kamala Harris used to be an official member of the Communist party in America or something, and it somehow seems real. Some unbelievably salacious claim about one or the other presidential candidate right before the election. That seems like the kind of thing that addresses all my points from before. That does cut through the noise. It's so big that if it changed your view of this candidate, holding your political views fixed, it might really move the needle. And particularly for swing voters in a small number of battleground states, we might think that could really, really matter. And if you look back at 2016, where we had a massive October surprise, not a deep fake, but James Comey doing his own version of a deep fake, that had a huge effect, a huge effect on the outcome.

I think we have pretty good evidence for that. So I think we should prepare ourselves: it's very possible in the next week or two, something really wild might come out. I will say there are a couple of reasons I think it might not happen at this point. First of all, we're already halfway through October somehow, and it hasn't happened yet. And now a lot of Americans have already voted. I don't know if you all track this, but early voting is super popular now. So October surprises are a little harder to pull off than they used to be. You used to be able to wait, knowing everyone was going to vote on Election Day, and spring it the week or two before. Now you spring it the week or two before, and, I don't know, half of people have already voted, so we might not see one. I want to tell you about one particular kind of deep fake that I am more concerned about, I would say. I do a lot of work on election administration, and obviously that's a pretty fraught topic these days: what are the rules by which we should run our elections and count our votes? And unfortunately, election administration gets a lot harder when elections are close, because that's when you have to really get everything done as fast as possible. You've got to do recounts, et cetera. If it's a really lopsided election, you can call the election way before you've actually dotted all the i's and crossed all the t's, and it's less of a big deal. It looks like this is possibly going to be a very close election. If it's a very close election, we're going to have a lengthy delay before we know the winner.

Just like in 2020, it could be weeks before we know the winner. And in that period, obviously, people are going to make extremely strong claims about how the vote counting is going, whether it's being done accurately, whether it's being done fairly, and so forth. And I, along with colleagues here at Stanford, particularly my colleague Justin Grimmer, have done a lot of work directly engaging with people who believe that that system is rigged and who develop statistical evidence that they argue proves that our elections are rigged. And I think it's very likely we'll see those people using AI to generate evidence for their claims after election day. That's not to say citizens shouldn't scrutinize the system; by the way, I strongly support the idea that every citizen should be able to poke around our election system. It's great that we have civic engagement on studying whether our elections are working fairly or not, and we should keep an eye on that, because it's certainly not a guarantee that they always will. But we should be skeptical of snake-oil statistical efforts trying to claim that the election is fake. And I want to tell you, just quickly, a specific story about this that caught my eye, one my colleague Justin Grimmer was actually directly involved in.

So this is a picture of a self-styled election fraud expert in Nevada. Nevada recently held its primary elections, and in Washoe County this person offered a study that he claimed proved beyond a doubt that the Republican primary election in Washoe County was run fraudulently. And the way he did this was that he took election returns data from Washoe County, Nevada, put it into ChatGPT, and asked ChatGPT to look for evidence of election fraud. And ChatGPT found him evidence of election fraud. I'll tell you in a moment why it made no sense. And he circulated this, including to the election board. And on the basis of this evidence, they actually refused to certify the primary election, and there was a delay in its certification. Eventually it was certified. But it appeared that his use of ChatGPT was what made the difference and made his claim seem extra persuasive to people who don't know very much about statistics or about how these things work. And so this is a quote from his report. I just want to highlight particularly that he says he's using, quote, "the most sophisticated artificial intelligence platforms and supercomputers in the world." And that was sort of what gave the evidence its apparent strength. Now, why didn't this actually work? You don't need to read the fine print here. He actually provided, and it's sort of admirable in a way, all the logs.

ChatGPT has this feature where you can share the log of your conversation and what it did. And he shared the whole log, where he uploaded the data and then asked ChatGPT to analyze it, and so on. What he failed to notice was that in between him uploading the data and having this back and forth with ChatGPT, where he urged it more and more passionately to look harder, it stopped analyzing the data and just started making things up, completely making them up, which is called hallucination. It happens all the time with these generative AI tools. So by the end, when it reported to him, these are things ChatGPT actually said to him: contact law enforcement, given the high probability of manipulation. And on certification, ChatGPT also said the county commissioners should not certify this election as legitimate. This was after he berated it for a while, and at this point it was no longer using the data at all. He claimed that this evidence was being drawn from the data, but as you can see, it just wasn't. So I think we're going to see a lot more stuff like that, particularly if the election is close and there's this window where we're all sitting around speculating about who's going to win. I think that's going to be a very fragile period. It was obviously extremely fragile in 2020, as we all know, and that might be a period where this AI stuff becomes particularly ripe. So we should keep an eye on that.
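As an aside, this particular failure mode is checkable. Here is a minimal sketch, with hypothetical column names rather than the actual Washoe County file format, of the kind of sanity check that would have caught it: recompute the basic quantities the model cites directly from the source data, and flag any claim that doesn't match.

```python
# A minimal, hypothetical sketch: before trusting an LLM's "statistical findings,"
# recompute the quantities it cites straight from the source data.
# Column names ("candidate", "votes") and file name are illustrative assumptions.
import csv
from collections import defaultdict

def tallies_from_csv(path):
    """Recompute per-candidate vote totals directly from the returns file."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["candidate"]] += int(row["votes"])
    return dict(totals)

def check_claims(claimed, path):
    """Compare an LLM's claimed totals against the recomputed ground truth."""
    actual = tallies_from_csv(path)
    for candidate, claimed_votes in claimed.items():
        true_votes = actual.get(candidate)
        if true_votes != claimed_votes:
            print(f"MISMATCH: {candidate}: claimed {claimed_votes}, data says {true_votes}")
        else:
            print(f"ok: {candidate}: {claimed_votes}")

# Usage (made-up numbers): if the model "found" totals that don't match,
# it has drifted from the data and is likely hallucinating.
check_claims({"Candidate A": 41250, "Candidate B": 38990}, "washoe_returns.csv")
```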

Okay, I want to talk a little bit about how people are adapting, and then I want to wrap up with time for questions. So we have been planning for a while a survey to investigate how people are updating their beliefs about what's real or fake, online or just in the world, as generative AI takes off. Is that of all ages? So the question is, is this of all ages? Yes, and actually age is going to be an important thing I'm going to talk about. So we planned the survey this summer, and it was actually pretty hard to get it done and fielded. I repeatedly told my co-authors that I had to present it to my mom in October, and we got it done just in time. We got it done last week. And so we have 3,000 representative Americans, all ages and so forth. And we asked them a bunch of stuff, but the thing I'm going to show you quickly is this: we asked them to imagine they've seen a piece of content. We specified whether we're talking about a piece of audio, an image, or a video, and we specified whether they're seeing it on social media or on TV news. And then we asked them: if you saw a politician, let's say, depicted in one of those ways, image, audio, or video, on either social media or television, would that alone make you think that's a real thing that happened in the world or not?

And I think, in a way somewhat reassuringly, and contrary to what we hear in the public discourse around these things, Americans are pretty skeptical. So what we did is we had them answer on a 0 to 3 scale. 0 is not at all: you show me a video of Kamala Harris on social media, and if that's all you show me, I don't think for a moment that's necessarily real. All the way up to 3, which is very: if I see it, I'm very confident it's real. And what we can see is that across all three, Americans are quite skeptical. So this is the average of all the responses. The average confidence in audio is below "somewhat"; it's in between "I'm not very confident" and "I'm somewhat confident," which to me says: I'm not really ready to believe something just because I see it. And the levels of this across audio and photo are relatively similar. But I do think it's interesting to note you get a little bit of a premium for video. You're a little bit more willing to believe that's real, which I think is totally rational, because it's actually still much more costly and hard to make a really compelling fake video, whereas making fake audio is super, super easy now.
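For concreteness, here is a minimal sketch of the cross-tab behind numbers like these: mean confidence on the 0-to-3 scale, broken out by medium and venue. The handful of records below is made up for illustration; the real survey has about 3,000 respondents.

```python
# Mean confidence (0 = not at all, 3 = very) by medium and venue.
# The records are synthetic stand-ins for the actual survey responses.
from statistics import mean

responses = [
    {"medium": "video", "venue": "tv",     "confidence": 2},
    {"medium": "video", "venue": "social", "confidence": 1},
    {"medium": "audio", "venue": "social", "confidence": 1},
    {"medium": "image", "venue": "tv",     "confidence": 2},
    # ... one record per respondent-item in the real data
]

for medium in ("audio", "image", "video"):
    for venue in ("social", "tv"):
        cell = [r["confidence"] for r in responses
                if r["medium"] == medium and r["venue"] == venue]
        if cell:  # skip empty cells in this toy sample
            print(f"{medium:>5} / {venue:<6} mean confidence: {mean(cell):.2f}")
```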

So I'm actually surprised there's not a bigger gap here. And again, in a way that kind of makes sense, people respond that they're even less confident if it's on social media than if it's on television. And I would imagine the logic people are employing is: if it's on television, it's probably part of some broader report, and the people putting it on TV may have exerted at least some modicum of effort to figure out if it's real or not. We know they don't do a 100% good job on that. But compared to social media, where a citizen journalist can put up a video into the ether at any moment, it makes sense that you might give TV a little bit of a premium. Okay, here's the age thing that I promised. What is striking when you cut this by age is that it's actually the older and wiser segments of America that have adapted most quickly to the situation.

So what you can see is that 55-plus are the most skeptical, and 18 to 34 are still pretty skeptical, but less so. And they're less skeptical across the board, for all three types of media and for both venues. By the way, we don't know that this is adaptation. We're going to measure this repeatedly before and after the election and see if people change; for now, we don't know. This could be adaptation, but it could also just be pre-existing attitudes that differ. There's a question over there. If you'll just wait for the microphone, I'll get to you. Can you hear me? Yeah. Think about the distribution, because, I know you know where I'm going, 1 or 2 or 3% can make the whole difference in an election, having some small number of people be different. Absolutely, yeah, yeah.

You're getting at one of the very hardest things about studying elections in America today, which is that when everything is so close, every factor can matter, even if on average, across everyone, it doesn't have a big effect. And so, yeah, we would love to, and I wish more people did this, though we would need a lot more money to do it: really zoom in on the battleground states, really zoom in on the swing voters in battleground states, and understand how they think about the information environment. Yeah, we have a question here too. Hi. Is there a difference in terms of ethnicity, race, or income? We have looked at that. I'm not going to show it to you today, and by the way, this is all pretty preliminary, but we do find differences by race, income, and education. We do those separately, so we don't know about the correlations among them; we don't know which of those factors might be explaining the difference more. But it's sort of: more educated people, higher-income people, and white people are more skeptical on average. Yeah. One more question.

Oh, wait, a couple more questions. Oh boy. I'm going to take two questions now and finish, and then I'll take all the rest of the questions. I have a concern about the survey and what you're asking, and that is that trying to measure the impact of deepfakes on people by asking them whether they would believe it seems like you would get the same responses if you asked, are you likely to be scammed by somebody on a telemarketing scam? Everybody's going to say no. So my concern is that this isn't actually measuring whether they're truly going to be fooled or not. Absolutely, absolutely. I completely agree. And I think the most important thing we'll need to do, and people are doing this, is to also study it in the wild: as we see these deepfakes come out, how are they received? That's how people studied the Comey incident, by the way (I didn't do this). There were surveys in the field, not on how do you feel about this, but just on who you're going to vote for, and then we were able to do a before-and-after on the Comey press conference. So I completely agree. This I think of as just trying to tap into a baseline.

What we want to see is whether, as there are deepfakes out there, there is going to be this population-wide adaptation where people start to say: I don't even pay attention to social media content because I know it's all fake. We're trying to see whether people are engaging in that logic. And to that note, I'll say I was very surprised to see the age gradient go this way, because the Ed School here at Stanford has done quite a lot of work on online literacy, and that in some ways goes the same way: older people are more skeptical, and younger people are fooled more by online stuff. But for certain other types of behavior, younger people are much more sophisticated. So if you ask them about how people inflate the way they look or how successful they are on LinkedIn or Instagram, young people are the ones who are like: yeah, we don't pay any attention, we know that's all fake. So there are different kinds of adaptations. Yeah. Okay. I'm going to take one more question. Yep. Yes. My question has to do with the intensity of prior beliefs. In other words, if I'm already inclined in a certain direction, does the deep fake function as reinforcement, as opposed to something outside my beliefs?

Yeah. And it would seem like that would have a big influence. Yeah, I think that's right. And I think it would further depend on that. You would want to test specific deepfakes that appeal or don't appeal: one that kind of reinforces what they already believe, one that tells them the opposite of what they believe. We don't have that. I don't know about that, but that's definitely the kind of direction I think people should go with this stuff. I will say, to some extent that kind of question is something we've been studying for a while. We haven't called it deepfakes, but it's very similar to what's been done studying various kinds of misinformation online. And I will just say, a limitation of those studies is that sometimes what researchers call misinformation later turns out to be real. So that's a challenge. With deepfakes, that'd be easier, hopefully. Okay, I'm going to quickly wrap up so I can take as many questions as possible. I'll just note where I think this is heading: people are increasingly skeptical. They're learning perhaps that they shouldn't trust things just because they see them on video or hear them on audio.

That then opens up a new space of problems. In one sense it's reassuring: we want people to be on their guard. But it's also a huge problem, because then we can't know what's real. That's just a general, huge problem: if everyone becomes incredibly suspicious and doesn't think anything is real, that opens the door to all kinds of crazy beliefs. But in addition, there's this idea of what's called the liar's dividend, which is the idea that now something real could come out about a politician, for example, and the politician can just absolutely insist: that's fake, that didn't happen. And we have seen this happen already in elections in other parts of the world. There was a particularly high-profile instance in Turkey where both things happened. There was a real scandal, as far as we know today, a real scandal occurred, and the person implicated insisted it was fake. And for another candidate, there was a fake scandal that people insisted was real. So you can see how this can go both ways. And we're already seeing this to some extent in the US, though I don't think it's gone very far yet. And so this was a pretty great example, sort of just a random one.

Kamala Harris had a rally at the airport that was, I don't know, relatively well attended. I'll be honest, I don't pay a lot of attention to how much attendance there is at these things. And Trump got very upset and insisted that the pictures showing that many people at the rally were fake. So just to give you the sense that you can then use this suspicion to claim that things that are real are fake. I don't know for sure that the rally even happened, but I think it did. Okay, so, last thoughts before I wrap up and take questions. We're going to have to do something new to deal with all of these problems. And it might be that the something new we do turns out to be something old. Everything I just described to you is in some sense very new and different, because the technology is so incredible and weird and scary in some ways, but in other ways it's actually kind of a return to normal. Because if you step back and you ask, okay, what is this technology really doing? Well, one thing it's doing is that now, when you see a picture or you hear an audio recording or you see a video of a politician doing something salacious, let's say, you don't know if it's real or not.

And if you flip that around, what it also means is that I, as a random person, can no longer make factual claims to you about whether a politician did something or didn't, and directly prove it to you. And that's actually been the norm for most of human history. These last 50 years or so, where I could pull out a picture, or especially play you an audio recording from the Nixon White House, let's say, hypothetically, or show you a video, and you didn't need to know anything about me, you could just pretty much believe it, were the exception. Obviously, even that's a little bit strained; there have been fakes of those things in the past. But more or less, that would be something you'd want to see and learn from, regardless of who I am, because the technology had this odd asymmetry where it was easy to capture and hard to fake. It turns out, I think, we will look back on that as being kind of a weird and unusual period of time. And we're now returning to what was true for most of human history and most of American democratic history, which is that that's not a sufficient way to learn about what's true or false. And fortunately, prior to the development of those things, we had already been working very hard to develop other ways of figuring out what's really going on.

And those other ways included what I told you about before. Journalists in 1930 didn't prove things to you by showing you an audio recording or a video. They proved them to you through some kind of complicated process that certainly was not perfect, a process that involved them and their company developing a reputation for making claims that occasionally you could then verify yourself in the real world. Maybe you knew one of the people quoted in the article; maybe you were actually at the event the article is talking about. And over time, they would form a reputation: they actually try really hard to say things that really happened, and that's good for them economically. One thing that's clear, I think, is that we're going to need to find a way to restore that business model. And maybe, optimistically, that'll happen naturally, but more likely we'll have to do a lot of work to figure out how to do that. But that's pretty clear to me. That's the path forward when we can't just show you a video and prove to you that that's really Donald Trump saying something, and so forth.

We also don't have to just go back to previous methods. We can develop new ones, and I just want to tell you quickly about one. We've seen the growth, and I've done some work in this area, of online platforms where we attempt to figure out which things are real and which are fake, not by appealing to legacy media institutions, but by actually building new online institutions that are pretty interesting. And I just want to tell you really quickly about one that's had a surprising amount of success, which is this thing on X called Community Notes, originally developed at Meta. Later a similar project was implemented at Twitter, called Birdwatch, and Birdwatch became Community Notes. And the way it works is that a randomly recruited set of people, everyday Americans who are compensated, fact-check things that are posted on X. And it works surprisingly well.

You should go check it out. It's really quite interesting. They're like journalists: not as good as a professional journalist, but they go out and try to suss out whether something really happened, whether it's true. And they have repeatedly put fact checks on Elon Musk's posts, which is hilarious, and to his credit, he has left them up. There's a famous example of this, and it's actually in the Wikipedia article. So we'll have to develop things like that. People are also working very hard to try to establish not whether a piece of content is true, but rather whether it came from the person who's claiming it came from them. I'm not going to dwell on this, but it's a really, really interesting area. For example, we now have cameras that will cryptographically stamp the image, which then allows you to trace whether it really came from that particular physical camera or not. And fancy news organizations like the New York Times have these. They're really, really expensive, but the New York Times has them. And we're building towards a future where you'll see something online and the system will validate for you that it was physically captured in the real world by, say, the New York Times.
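To make the stamping idea concrete, here is a minimal sketch of signing and verifying a capture. The real standard in this space is C2PA, which signs a much richer provenance manifest; this stripped-down version, using the third-party cryptography package, just shows the core asymmetry: the camera's private key signs a digest of the image, and any later edit breaks verification.

```python
# A minimal sketch of cryptographic content provenance (an illustration of the
# idea, not the C2PA spec). The camera signs a hash of each capture; anyone
# with the maker's public key can later check that the bytes are untouched.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In reality this key would live in tamper-resistant hardware inside the camera.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

def sign_capture(image_bytes: bytes) -> bytes:
    """What the camera does at capture time: sign a digest of the image."""
    return camera_key.sign(hashlib.sha256(image_bytes).digest())

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """What a newsroom or platform does later: check the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

photo = b"...raw image bytes..."
sig = sign_capture(photo)
print(verify_capture(photo, sig))              # True: untouched capture
print(verify_capture(photo + b"edited", sig))  # False: any edit breaks it
```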

And I think things like that are clearly going to prove helpful. I want to end on an optimistic note: we can also use this same technology to improve democracy, and that's what I'm most excited about. So Yamil Velez is a good friend and collaborator of mine. He and Semara Sevy and Donald Green have been working on this paper, for example, where they use generative AI to ingest political parties' platforms at the local level in the US, where people typically don't know that much about their political candidates. They bring in all this information about the candidates and the parties' positions, they create these chatbots that are very well informed, and then those chatbots have conversations with voters who want to learn, say, about my local school board race or something like that.
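I don't know exactly how their pipeline works, but a toy sketch of the general grounding approach, with hypothetical platform text and a crude keyword retriever standing in for whatever they actually use, looks something like this: pull the platform passages most relevant to a voter's question and build a prompt that instructs the model to answer only from those excerpts.

```python
# A toy sketch of grounding a chatbot in platform documents (not the authors' code):
# retrieve relevant passages, then prompt the model to answer from them alone.
def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by crude word overlap with the question (illustrative only)."""
    q_words = set(question.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt for whatever LLM you are using."""
    context = "\n".join(f"- {p}" for p in retrieve(question, passages))
    return (f"Answer the voter's question using only these platform excerpts:\n"
            f"{context}\nQuestion: {question}\n"
            f"If the excerpts don't cover it, say so.")

# Hypothetical local platform text; in practice you'd ingest the full documents.
platform = [
    "The party supports increasing per-pupil school funding by 5%.",
    "The party opposes the proposed downtown rezoning measure.",
]
print(build_prompt("Where does the party stand on school funding?", platform))
```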

And they've shown that this turns out to be an incredibly powerful way to synthesize political information and give you quick summaries. And the evidence they show in this paper is that people like using these; they're significantly more informed after they use them, and they want to use them in the future. So I think there's a pretty exciting set of things to work on there. So I'm going to wrap up here, just summarize, and then take questions. It's pretty clear this is a big new disruption to our information environment. The world is changing under our feet very, very quickly. At the same time, it's also true that in some ways we've been here before, and we will adapt to this the same way we've adapted to all these previous technological disruptions to our information environment.

At the same time, those previous adaptations have not been smooth, and sometimes they've been incredibly painful, so we need to keep an eye on how that's going to go. And we should also keep a close eye, I think, on the opportunities. This is a very powerful technology that can make us all smarter, and that could be really good for our democracy. So I'm going to leave it there and I will turn to questions. Thank you. We have a couple over here. Start here. Thanks very much. I have two questions actually, and you don't have to answer the second one. The first one is: you talked about generative AI, and what I was interested in is more on the machine learning side, and how groups like Future Forward and others are doing massive generation, not of one single deepfake, and I'm not saying they're doing deepfakes, but of factual content, massive impressions over time. And I think there's research that shows if you just get overwhelmed, it does change your opinion over time with information.

So the first question is sort of about the change in democracy based on using machine learning to generate this. The second question is: why did the Red Sox trade Mookie Betts? I have a great answer to the second one, but let me start with the first one. No, actually, I've got to start with the second one. Clearly the answer is that the owner of the Red Sox no longer cares about the team. He owns like 100 other teams. It's a classic principal-agent problem. On your first question: first of all, I don't know the answer. What I would say is there are two things we've learned about politics and political advertising and those kinds of techniques over time. One is that we should always look at what happens in marketing first, because that will always come to politics next. My impression of where this is right now in marketing is: yeah, you use generative AI to make many versions of your ad, and then in an automated loop with no human in the loop, you serve those to people, see which ones are working, and start shifting. So you're combining this machine learning with the generative AI to find the most engaging versions.
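Mechanically, that loop is a multi-armed bandit. Here is a bare-bones sketch with simulated click-through rates; a real system would use live engagement data and fancier policies than epsilon-greedy.

```python
# A bare-bones sketch of the automated loop just described: several ad variants
# (here just labeled strings), with epsilon-greedy selection shifting impressions
# toward whichever variant gets the most clicks. CTRs are simulated assumptions.
import random

variants = ["ad_v1", "ad_v2", "ad_v3"]  # e.g. produced by a generative model
true_ctr = {"ad_v1": 0.02, "ad_v2": 0.05, "ad_v3": 0.03}  # unknown to the loop
shown = {v: 0 for v in variants}
clicked = {v: 0 for v in variants}

def pick(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing variant; occasionally explore."""
    if random.random() < epsilon or not any(shown.values()):
        return random.choice(variants)
    return max(variants,
               key=lambda v: clicked[v] / shown[v] if shown[v] else 0.0)

for _ in range(10_000):                          # one iteration per impression
    v = pick()
    shown[v] += 1
    clicked[v] += random.random() < true_ctr[v]  # simulated click outcome

# Traffic share per variant: most impressions should flow to the best ad.
print({v: round(shown[v] / 10_000, 3) for v in variants})
```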

I imagine we'll see that with content, with organic content generation as well as ads. And yes, I imagine, as you're alluding to, I guess it's already happening; I'm not that familiar with it. We'll see that with political ads next. The second thing I will say is there's a very long history, especially in politics, but also just in marketing as a whole, of, for obvious self-important reasons, overstating the impact that those things actually have on individuals. That's true in marketing, and it's definitely true in politics. Cambridge Analytica was a great example of this. Cambridge Analytica shopped around their fancy method to a bunch of people I know in academia before the election, and it was, no offense if someone here worked at Cambridge Analytica, not serious. Yet after the election it was covered as if it were this genius operation. And so I think, in general, I still come down on the side of: we don't have a lot of evidence this moves attitudes a lot. I agree, though, that as you put different kinds of new technology together, we should keep a very close eye on it. Yeah, there's a question behind you. With all of these information technologies you've discussed today, especially with AI, are we better off for government to take action before or after these things are established in the market, or to take no action whatsoever and let the public just handle it on their own? I don't know. That's a big meatball.

I mean, I definitely come down in the middle. I directly experienced, through my work, governments' efforts across the world around social media. I obviously have a very biased view; I mean, I'm an advisor at Meta. I personally do not think those laws have done a good job. Most countries took the rhetoric that people in America were using about misinformation and so on, and passed laws that basically are intended to allow the government to say: I don't like this content, it's unsafe. And I think that's really bad. I don't think that's a good thing. And if I look out at what people are saying when I hear about AI safety, which is an incredibly important issue, as someone who studies politics I worry tremendously about people using that rhetoric to argue for laws that are actually intended to engineer what people are allowed to say or think, which, as we've just gone through experiencing in the US, is not a successful strategy from any perspective. People don't like it, it backfires, and it does not improve the information environment.

Now, the US hasn't passed any laws about any of this stuff, at least at the federal level, so I'm referring more globally here. But I think my answer is somewhere in the middle. If we don't do anything, I think it's pretty obvious there are going to be enormous problems, not just for politics, but for all sorts of other parts of the information environment. Celebrities are already, right now, being imitated in ways that are driving them crazy, and that's just one example of a broader thing that could be terrible. Look at the sort of sexual imagery being made of other people: there are just incredibly big problems that have to be solved. So my middle-ground answer is: I think we should start with the ones that are very clear problems, and I worry a great deal about going too far. I have learned over time, when I hear things like "political misinformation," to get very nervous when governments start talking about it, because I think the history of governments working on that is a very poor one. Thank you. Terrific presentation on a critical subject. Recently the state of New Jersey mandated media literacy courses in the public schools to address this. I don't know if you're aware of that, but I was just wondering what you think. Well, first off, does Stanford University have a course like that for undergraduates? And if not, why not?

I'd just like to get your thoughts about bringing this into the education process, instead of government taking a role, starting in the public schools and in college. That's a great question. I am not super deep on Stanford's undergrad curriculum at the moment, but I will say there are a couple of people here in the Ed School who work specifically on this question of media literacy. My impression is they're making progress. It's pretty hard to teach it in a way that works. Young people tend to think it's super boring to learn about. You have to be careful to do it in a way that's not overly preachy, or they immediately turn off, and so forth. You also have to be careful with examples: the problem that often occurs is you use an example and later it turns out it was actually true. That's happened to the misinformation community over and over again, which I think says something about the perils of studying it. But I think they've made a lot of progress on that. So my impression is yes. The other thing I will say that's going on at Stanford, that I have been a little bit more involved in, is that we have rolled out an undergraduate civics education which is much broader than digital literacy. But it really, I think, is doing an excellent job of giving students the context from all different directions.

The context on why running off and making tech products without thinking about democracy is dangerous or could be bad, but also, on the other side, thinking through why we should be able to have civil discourse with people we disagree with, why we should be more open-minded, which is something universities, in my opinion, have done a terrible job with the last 20 years or so. So I think we're making a lot of progress there, and hopefully that will bleed into digital literacy as well. I liked your optimistic ending, but I would like you to talk a little bit about how we get there from our current media environment that is so bifurcated. People who watch Fox News trust that; people who read the New York Times trust that; and never the twain shall meet.

So how do we get from where we are to this lovely ending? I have no idea. Let me describe the problem; I will admire the problem and then attempt to describe a solution. The problem is the market, right? You cannot make money this way. People have tried over and over again to build TV news stations or newspapers whose business model is, you know: no nonsense, in the middle, we characterize both sides of an argument, and so forth. And it's challenging to make money that way. It's not impossible, and there are these very elite news services. I mean, I would not characterize the New York Times as being in the middle, but the New York Times, and the Wall Street Journal, maybe more in the middle, maybe on the right.

They're trying very hard to make statements that are true in their journalism. They might have their slant, but they're trying very hard not to make stuff up. And that works for them in their business model, but that's because they're extremely elite and they capture enormous amounts of subscription revenue from very rich people. And we have not figured out a way to popularize that. There is no popular media service really doing that. And so I think it's something about people having to want it, and if they want it, the market will provide it. So it may come back to the civics education. Or it may come back the way these things have historically: the solutions to them have generally not been planned, they've emerged spontaneously. One of the solutions in mid-20th-century America was that you consumed your political information in the newspaper. Most people did not buy the newspaper because they cared about politics. They bought the newspaper because it had the radio schedule in it, it had the sports news, and then they consumed politics as a plus. And that then created a market for pretty centrist news, because of the nature of the economic exchange going on.

And so it may be that something that bundles different types of content together on social media, or something like that, could be the path forward. But God only knows. Yeah, we'll leave it there. Thank you, everyone. This was great.

Artificial Intelligence, Politics, Technology, Democracy, Social Media, Elections, Stanford Graduate School Of Business