>> Hi, I'm Lisa Einstein. A tacit assumption in US elections is that American voters should decide the outcomes of American elections. But what if American voters are making decisions based on false information fed to them by a foreign actor, or by a domestic actor with a hidden agenda? What do we do about the ways in which our open information system lets people with deceptive agendas convince citizens of things that might influence their decisions at the voting booth? You don't have to hack the voting machines, you just hack the brains of the voters, convince them who to vote for or convince them not to vote. In recent history, no foreign influence campaign has attracted as much attention as the multiple Russian efforts to interfere in the 2016 US presidential election. These efforts included both a campaign of division on social media, and a hack-and-leak operation, where real e-mails from prominent Democrats were released under false pretenses with cooperating media organizations to embarrass Democrats and to draw attention away from stories that would be more favorable to Hillary Clinton. In an investigation led by one of our guests in this section, Alex Stamos, Facebook found approximately $100,000 in ad spending affiliated with Russia, from June 2015 to May 2017, connected with hundreds of inauthentic accounts and pages. What do you think was the most common topic of these ads? Black Lives Matter. Rather than focus on support for particular candidates, the ads focused on amplifying divisive social and political messages across the political spectrum, from LGBTQ matters, to race issues, to immigration, to gun rights. The ads were designed to sow division along already existing political fault lines in the United States, to further radicalize Americans, and to make Americans and the world question whether open democratic systems were really worth all the trouble. 
When Donald Trump claimed in a debate that the 2016 Democratic National Committee hack could have been Russia, or it could have been someone sitting on their bed weighing 400 pounds, he played directly into Russian President Vladimir Putin's playbook, by forcing Americans to question the credibility of our democratic process. As John Podesta, whose inbox was the high-profile target of Russia's hack-and-leak campaign, said, ''When nothing is real, everything is possible.'' Our guests today, Camille Francois and Alex Stamos, are perhaps best qualified to help us make sense of this very complicated topic. In addition to their individual contributions to the fight against the abuse of online platforms, they have been hard at work over the past few months on issues affecting the 2020 US election. They both work at the Election Integrity Partnership, which brought together four organizations: the Stanford Internet Observatory, Graphika, the Atlantic Council's Digital Forensic Research Lab, and the University of Washington Center for an Informed Public, to provide real-time detection, mitigation, and analysis of election-related misinformation and disinformation. Camille Francois is the Chief Innovation Officer at Graphika, where she leads the company's work to detect and mitigate disinformation, media manipulation, and harassment. She previously served as a Special Advisor to the Chief Technology Officer of France in the Prime Minister's Office, working on France's first Open Government roadmap. Alex Stamos is the Director of the Stanford Internet Observatory and a former Chief Security Officer at Facebook. During his time at Facebook, he led the company's investigation into manipulation of the 2016 US presidential election, and helped pioneer several successful protections against these new classes of abuse. 
>> When we look at 2016, we have to be really careful not to smush everything that happened, and even all of the actions by Russian-aligned groups, into one big disinformation campaign. When we look at Russian activity during the 2016 election, there's a variety of different things going on. There were at least two mostly offline campaigns: there was the overt state media, Russia Today, Sputnik, and the like, and there was direct outreach to members of the Trump campaign and other people in the United States, traditional human intelligence. Then in the online world, there were the direct attacks against election infrastructure, and then two totally separate disinformation campaigns. Like Lisa said, one of those disinformation campaigns was carried out by the GRU, the main Intelligence Directorate of the Russian military. They're an intelligence agency, one of the three major Russian intelligence agencies, and they're the one that reports up to the Kremlin through the uniformed members of the Russian military. The second campaign was via the Internet Research Agency. That's an interesting new actor. One of the things that got really big in 2016 is the idea that these kinds of campaigns, which have for a very long time been carried out by state actors such as intelligence agencies, could be done by private actors. The IRA, as we call it, and I know the older folks have the same problem I do: my students don't know that IRA is a term that's ever been used for any other organized group on the international stage. I'll try to say the Russian IRA, but if you hear IRA, we mean the Internet Research Agency. They are a private company belonging to an oligarch named Yevgeny Prigozhin. They acted somewhat in concert with the government, but these two campaigns were actually quite different. The GRU campaign was the hack-and-leak. 
That was breaking into John Podesta's e-mail, breaking into the DNC e-mails, taking selected e-mails and then leaking them to the media with the goal of telling a story and changing the overall narrative, mostly about Hillary Clinton and her campaign. Then the Russian Internet Research Agency campaign was all about driving division in the United States. It wasn't really as much about the campaign as a long-standing operation to try to get Americans to fight about really important topics such as Black Lives Matter, immigration, and the like. What has changed in 2020? We've seen examples of both of these now in 2020. The timing of this class is pretty incredible: just half an hour ago, the FBI attributed a bunch of e-mails that were sent out that said they were from the Proud Boys, that went to individual voters saying, ''We know how you're going to vote, we're going to hurt you if you don't vote for Donald Trump.'' Those e-mails were not sent by the Proud Boys; they were sent by Iran. This is not surprising to me. While there are some people questioning the attribution, I think the attribution is accurate. This was our guess inside of our team. I really wish the Irish betting sites took action on cyber attribution, because I'd make a couple of bucks that way. But this was our call based upon our technical analysis of the e-mails, that it was probably Iran. That's a great example of a state actor getting involved in a disinformation campaign with a little bit of a technical component. The way they did this was via hacking a server in Estonia and a server in Saudi Arabia to try to cover their tracks. Then we've seen pure disinformation around the election, but the interesting thing that's different in 2020 versus 2016 is that most of that is domestic. 
That's the big difference I think we're talking about tonight: the majority of the election and political disinformation problem has moved into the domestic sphere. It is being done by actual American actors, who are either financially motivated or are political actors trying to change the outcome of the election. >> I'll pick it up here and add a little bit to that. The other thing that strikes me as very different in 2020 than in 2016 is the entire approach of the technology sector, the technology industry, and of the government to these questions. I think it's fair to say that in 2016, most of the tech industry and most of the US government really got caught flat-footed in the face of foreign interference using social media. There's this really interesting document from ODNI in 2017 that talks about the Russian interference efforts, talks about what Alex said, the hacking side, and talks also about the role of Russia Today and Sputnik. Then you see that at that moment in time, in 2017, there is still a deep lack of understanding around the role of these foreign influence operations on social media. In that document from 2017, the Russian troll farm, the Russian IRA, is a mere footnote that is attributed to an investigative journalist. In Silicon Valley, you have a similar effect. Alex is a bit of an outlier here because of his work at Facebook, and Alex, I can say this, I can brag about your work [LAUGHTER] without your having to do it. But it's an interesting moment because back then in Silicon Valley, most of the traditional cybersecurity teams do not consider information operations to be part of their remit. It's not treated as a security issue, and more often than not, most of the major platforms don't have specific rules that target this type of activity. There's no specific violation of rules that applies to the foreign troll farms, for instance, and they don't have specific staff and teams who are in charge of detecting this activity. 
When all of this happens in 2017, you really have a moment where you see the entire tech industry having to catch up with this new threat, that is, foreign interference and information operations on social media. The very first public acknowledgment by a major technology company that this is actually happening on these major platforms is a paper by Alex and his colleagues Jen and Will, called ''Information Operations and Facebook,'' which is the first time that the tech industry acknowledges, ''Hey, this is happening on our platforms, it has a name, we can catch it, and we think that this is actually a problem.'' The good news is, when you look at the situation in 2020, we're in a radically different environment. All the major platforms have created special policies around this type of activity, they have empowered specific teams to detect it, and more often than not, they've actually published the records of the types of campaigns that they've been able to identify since 2017. If you look at this record, everything they've been sharing since 2016, it's really interesting because, of course, they have seen more Russian campaigns since, but they've also seen Iranian campaigns seeking to influence public opinion using a similar set of techniques, and they've seen Chinese campaigns. As Alex said, they've also seen domestic actors using the same techniques: making fake accounts, making fake personas, making fake websites, pretending to be news organizations, posing as journalists, and really injecting a lot of disinformation into the domestic ecosystem. What does that look like today, right ahead of the election? >> It can take many different forms, really. You have the huge impact of conspiratorial communities. That's a really interesting topic, because for a long time, conspiracy theories and conspiratorial communities were considered fringe in the information environment. 
The wisdom was, ''Well, that's just a small part of the Internet, and it doesn't really matter.'' We've really changed perspective on that. I think it's become well-documented by researchers that some of these conspiracy theories actually play a crucial and central role in the public conversation, and that this can have a very problematic impact on topics such as COVID-19, where you see these conspiracy theories inject harmful disinformation into the public conversation very quickly. You have conspiratorial communities, you have organized groups, and you have another difficult category, which is political campaigns and candidates, who also don't have clear boundaries on what is and what is not acceptable in the realm of political communication. We're now facing this disinformation environment much better equipped than in 2016, because at least we have a common vocabulary to discuss what these types of campaigns are and to evaluate their harms, and we have researchers able to detect and expose some of these campaigns. I think we've had a definition problem. I think that initially putting everything under the greater umbrella of disinformation might have been a mistake, because at the end of the day, the fact that your uncle is engaging with a harmful conspiracy theory on Facebook has really absolutely nothing to do with a sophisticated GRU campaign designed to interfere in the US election. At the end of the day, there are different types of issues with different sets of implications and different rules that apply, and they're being investigated and researched by different people too. I think the confusion was that there was this one big disinformation umbrella that needed one clear solution, and that it was either the Russians or domestic. I think that we're finally getting towards the end of that cycle, with the recognition that really, there are a lot of different issues that, of course, work together in a complicated information environment. 
But what we're experiencing with militarized social movements, the Proud Boys, the Boogaloo Boys, really, of course, once in a while, like today [LAUGHTER], intersects with foreign interference, but more often than not it really doesn't, and it belongs to a different issue set. >> I was just saying, on the Russia situation specifically, it is very important for us to discuss what Russia did in 2016. It would've been great if the United States had reacted quickly. I think part of our problem is that on Earth Two, where 100,000 votes went the other way and Hillary Clinton was president, the same evidence came out, and you could have had an aggressive American response, and then we could have moved on to these other issues. But the fact that the United States has never really punished Russia, and that there's still a political debate over whether anything happened in 2016, means we get stuck on just one little component of the overall problem. My hope is that after this election we can do a couple of things. One, we can work on election security starting in 2021. We waited until late 2019 this time around, and we can't wait till 2023. The election security issues have to be addressed when there are no candidates, when it's not political, when it feels like there could be a little bit of a bipartisan solution to it. Then the second is, we need to start talking about the overall speech and platform policy issues divorced from specific decisions. Part of the problem going on right now is that when anybody criticizes the platforms over anything, it's that ''I see speech I don't like, and I want it taken down,'' or ''speech was taken down and I want to put it back up.'' That's it. Nobody ever has a substantive discussion of how these decisions are made, or what the standards should be. It's all special pleading, and that is useless. It's useless. The sands of time will wipe all this stuff away. 
What really matters is that we have a discussion about how these decisions are made, who's in power, and how you isolate these decisions from political winds, like you were talking about, Marietje. These things have become very political. We should not want the companies to care about who's in the White House, or who's in power around the world, when they make these decisions. Just a side note: this stuff is way worse outside the United States. Especially in developing democracies, the companies make speech decisions around elections that have huge impacts on individuals in ways that are really horrible, and we just don't talk about it in the United States. Hopefully after the election, we can have discussions that aren't about specific URLs or specific stuff coming up or down, but about a better framework for how these decisions are made. >> I want to get down to some brass tacks, both from the standpoint of the partnership that you guys are both contributing to, the logo right behind you on the screens for both of you, and from the standpoint of people who work in the platforms. I'm going to give you a hypothetical about a disinformation campaign on November 3rd, call it late in the day. If you don't like my hypothetical, just make up another one that seems more plausible to you. What I want to know is: what are you guys doing in the partnership? What are the people in the platforms doing, and how do they decide to take action? Here's my hypothetical. Whether the election is going in a landslide one way or another or it's close, there's a disinformation campaign launched by, let's say, Russia, who has an interest perhaps in having Trump remain president. Trump has already begun making lots of signals about questioning the integrity of the vote count process, and so the disinformation campaign does the following: it is announced that there has been a successful hack into the vote count in several swing states, completely independent of whether that's factually the case. 
Giving, of course, the president and other people a potential justification for calling into question the election outcome. Let's say it's 9:00 PM East Coast time on November 3rd, and this disinformation campaign begins to break. What's the partnership doing? What are the platforms doing? How do they handle it? >> I'll give you a quick overview of how the partnership operates. It's four institutions: us, the Stanford Internet Observatory; Graphika; the University of Washington Center for an Informed Public; and the DFR Lab at the Atlantic Council. There are multiple tiers of people. At Stanford, what we've done is hire 35 new students. They're operating tier 1; they are in a war room, working in shifts from 4:00 AM Pacific time to midnight Pacific time, five shifts a day now. If something like that happened, that first tier would find it. Now, if it's breaking, we have pre-made searches with software that is plugged into Twitter and Facebook and a bunch of other platforms. It would hopefully find that, it would get on their dashboard, and they would file a ticket saying, ''We're seeing this claim of election hacking, we need to investigate it.'' They'll open up that ticket, and it'll get routed to second-tier analysts. This is where all of the partners are contributing. Let's say somebody from Graphika now joins, somebody from UW now joins, and they look into it, and they find, ''Okay, well, this is spreading pretty big, and we found this fact check. We found this, we found that.'' Then we would decide, according to the policies as we understand them, whether this is a violation or not: whether we knew for sure that this was not true, or it looked like it was unlikely to be true, or it was based upon facts. 
So it doesn't necessarily have to have a fact check, but if they're making a claim and they don't have any factual basis for it, then we would document that in the ticket, and we push buttons, and it would get routed to all of the companies where we found it. That would go back to our analysts, who would go and find all of the examples on all those platforms. The other thing we would do that's really important is to go look across a bunch of platforms. That's one of the things that's different between us and what's happening inside the companies. Facebook has got a big team doing this, Twitter has got a big team, YouTube has got a big team. Other companies have smaller teams, but no matter the size, they're just caring about themselves; that's their job. If we see something interesting that's starting to go big on Facebook, it will end up on TikTok, it will end up on Reddit, guaranteed. We try to preempt that by, one, notifying them, ''Hey, this thing is coming,'' but then also going and finding examples. Those companies will all get notifications as we find the stuff, and then we'll track them. Because you're talking about a claim about a specific state, we'd label it with the state and we'd notify the Election Integrity ISAC. That's an organization that has been stood up by the Department of Homeland Security to coordinate between all the state and local election officials. Through them, we could contact, let's say it's Pennsylvania, we could route, ''Hey, we found this. This is what's going on. We are taking care of the platforms, but you should know about it, because if you are a senior election official, now you might want to come out and go on TV and say, 'Hey, we heard something's going big.''' That would route through the Election Integrity ISAC to the people in Pennsylvania, and then hopefully the companies would take it down. That's an example of something that, at least for Facebook and Twitter, would definitely violate policy; for YouTube it would be fuzzier. 
I think a number of companies don't have specific policies. A lot of other companies have these really broad policies, and so we'd have to make an argument under the broader policies that it's considered election disinformation. >> Once we've made the decision, there's a manager on shift, we log on, and we say, ''Okay guys, this is serious, let's alert all the platforms.'' But we don't have to end it there. There have been a few cases in which, when we found information that we have good reason to believe is deeply untrue, and we think its dissemination is suspicious, we've also done our own investigations at the partnership, and we've published publicly and transparently the results of these investigations. The other thing that I think is really important in our efforts is, if we see something bad, we don't just sit on it and secretly whisper in the ears of the platforms. We will document it, and we will share it publicly and rapidly. I think that part of what we do is also important in an election season. >> On that one, we might go on TV. We have relationships with all the major networks. That's a situation where we could ping them and say we can put somebody on TV to talk about this right now. Because the interesting thing about election day is that the ratio of the importance of traditional versus social media actually switches: every TV in the country is going to be turned on, and every TV station is going to be talking about the election. Maybe there might be a Fear Factor rerun somewhere, but for the most part, this is what people are going to be engaged with. Trying to combat disinformation just on the social media platforms themselves is going to be less effective, and we're going to have to have that media play as well.
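[Editor's illustration] The triage flow Alex walks through above (a tier-1 war room files a ticket from pre-made searches, tier-2 analysts assess it against platform policy, and the ticket is routed to the affected platforms and, for state-specific claims, the Election Integrity ISAC) can be sketched in a few lines. This is a minimal, hypothetical model of that workflow, not the partnership's actual tooling; the class and function names, fields, and routing labels are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    claim: str                 # e.g. a claim that vote counts were hacked
    platforms: list            # platforms where tier 1 spotted the content
    state: str = None          # labeled when the claim targets a specific state
    assessment: str = "open"   # set by tier-2 analysts
    notified: list = field(default_factory=list)

def tier1_file(claim, platforms, state=None):
    # Tier 1 (war-room shift) files a ticket when a pre-made search hits.
    return Ticket(claim=claim, platforms=platforms, state=state)

def tier2_assess(ticket, has_factual_basis):
    # Tier 2 decides, under the policies as understood, whether the
    # claim is likely violating: no factual basis means likely violating.
    ticket.assessment = "no-action" if has_factual_basis else "likely-violating"
    return ticket

def route(ticket):
    # Notify every platform where the content was found; if a specific
    # state is implicated, also route to the EI-ISAC for that state so
    # election officials can respond publicly.
    if ticket.assessment == "likely-violating":
        ticket.notified.extend(ticket.platforms)
        if ticket.state:
            ticket.notified.append("EI-ISAC:" + ticket.state)
    return ticket

t = tier1_file("claim of vote-count hack", ["Facebook", "Twitter"], state="PA")
t = tier2_assess(t, has_factual_basis=False)
t = route(t)
print(t.notified)  # ['Facebook', 'Twitter', 'EI-ISAC:PA']
```

The key design point mirrored from the discussion is that routing fans out beyond the platform where the content was first seen, since content going big on one platform reliably spreads to others.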