>> Now, as we hurtle toward the end of the 2020 election season, major tech and social media companies are issuing significant updates to their policies. You may have heard about the new policy to take down conspiracy theory pages and groups, and some companies have even proposed to pause all political ads after the polls close on November 3rd. The acceleration of these decisions, and estimating their implications, can really make your head spin. It's ironic, because most companies were very reluctant to moderate such content for a very long time. That is to say, reluctant to moderate certain content, because curation driven by commercial interests, decisions not to show nudity, say, or to prioritize paid-for content, already happens every day. And then there's the ability for users to report content that they allege violates the Terms of Service. But taking down, demoting, or labeling disinformation, as well as violence, hate, lies, or copyright-protected material, was something social media companies pushed back against for a long time. I remember tech company representatives often answering such requests with a reference to the US First Amendment and the mantra that a company like Facebook did not want to be the arbiter of truth. Maria Ressa, a journalist in the Philippines, calls this an abdication of responsibility by the companies, and her observations about how speech gets weaponized are an urgent reminder of the global nature and impact of US-centric corporate decisions. In turn, we may want to ask ourselves whether democratic lawmakers and regulators have not abdicated responsibility too. What do we know about content that is taken down and no longer visible to anyone, and are these decisions being made by humans, by machines, or both? Oversight over company policies is lacking, and that does not make for better transparency or accountability. To get to the nitty-gritty of these decisions and their consequences,
we invited esteemed guests Evelyn Douek and Nick Pickles to join us for a discussion. Evelyn is a lecturer on law and an SJD candidate at Harvard Law School, where she studies the global regulation of online speech and private content moderation, and Nick Pickles is the Global Head of Public Policy Strategy and Development at Twitter. >> The question for me isn't whether collaboration in itself is a bad thing; the question is, how do you do collaboration in the right way? I guess the starting point is that companies like Twitter, Facebook, and Google are global platforms by their nature, and yet if you look at regulatory standards, there is no global standard on anything. There is no global standard on speech, no global standard on even things like terrorism. Around the things you would expect governments to agree on, there is a huge amount of politics, a huge amount of disagreement. In some countries something is illegal; in other countries the same thing is legal. Even within the European Union, you've got disparity over things like hate speech. So we're starting from a position where what's being asked of the companies differs significantly depending on where you are. The challenge, and the reason why industry collaboration is, I think, absolutely going to be a huge part of the future, is firstly, as you've highlighted, the expectation that industry move quickly against content identified as harmful by the public, by policymakers, by academics. And particularly once you look at this challenge, the internet is a vast ecosystem; we often talk about four or five companies as if they represent the whole internet. Actually, Twitter is a good example: depending on which room I'm in,
Twitter is either the largest of the small companies or the smallest of the big companies, and so we don't have the resources of the Googles and the Facebooks of this world. So the question is, in a world where more decisions are being taken by automated technology, motivated by a desire to take action faster to prevent harm, how do we make sure that those technologies aren't the preserve of a few large companies, almost becoming a competitive advantage of scale that smaller companies can't compete with? You end up in this strange situation where one of the potential downsides, for example, of regulation requiring that content be taken down within a certain period of time is that you either incentivize a lot of automated decision-making or you incentivize people to rush things: make a decision and check later whether it was correct. Now, for a large company with big AI investments, maybe that technology is more accurate. A smaller company without access to the same resources maybe runs a higher margin of error and takes down more speech unintentionally. So whether it's collaborating on sharing things like hashes, which are digital fingerprints of images that have previously been removed because they violate the terms of service, or collaborating on sharing technology, PhotoDNA, for example, a tool originally developed by Microsoft, now licensed across the industry and used to find child sexual exploitation, it goes back to the question everyone asks. The question isn't, should industry collaborate?
The question is, what are the guardrails and the frameworks that need to be in place to make sure that those collaborations can be scrutinized, that there is transparency? And I'll end by just saying that, actually, we are in the rooms and we're not playing chess. One of the things I am actually very concerned about is the pressure from policymakers to harmonize, and that pressure is often external rather than internal. The real reason I worry there is that harmonization across platforms on speech standards is bad for free expression. Having a diversity of platforms, a diversity of different rules, means that you can have content that is allowed on different surfaces, and users can make a choice about those platforms. So certainly from Twitter's point of view, we don't want that standardization. We think an open internet thrives with user choice and a competitive environment. And people are adversarial by nature; they are aware that people are looking for them, and this is definitely the case in something like child sexual exploitation. They're technically sophisticated; they're using technologies to try and mask their identity. It's no surprise to that threat model that the industry collaborates using tools like PhotoDNA; a sophisticated actor or group would have taken that into account. Actually, if you look at something as simple as video hashing, I think one of the statistics that was most striking after the Christchurch attack was the number of different versions of the same video being uploaded, and without a doubt, some of those were being manipulated to try and evade detection. So depending on the threat model, importantly, you still need rules that are public, that explain to people: these are the rules, and if you break them, here is what happens. And I think you can separate out the detection technology, rather like the government approach to sources and methods, from the rules.
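To make the hash-sharing idea concrete, here is a minimal Python sketch of exact-match fingerprinting. It is a toy under stated assumptions: real systems such as PhotoDNA use proprietary perceptual hashes that tolerate small edits, whereas the plain SHA-256 digest used here fails on even a one-byte change, which is exactly the evasion-by-manipulation problem just described.

```python
import hashlib

# Toy shared "hash database": fingerprints of files previously removed
# for violating the rules. Real industry systems (e.g. PhotoDNA) use
# perceptual hashes, not plain cryptographic hashes like this one.
SHARED_HASH_DB = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_removed(data: bytes) -> None:
    """Add a removed file's fingerprint to the shared database."""
    SHARED_HASH_DB.add(fingerprint(data))

def matches_known_content(data: bytes) -> bool:
    """Check an upload against the shared database."""
    return fingerprint(data) in SHARED_HASH_DB

register_removed(b"violating-image-bytes")
print(matches_known_content(b"violating-image-bytes"))   # exact copy is caught: True
print(matches_known_content(b"violating-image-bytes!"))  # one byte changed evades it: False
```

The second check failing is the point: an exact-match hash catches only verbatim re-uploads, which is why adversaries re-encode or lightly edit files, and why production systems rely on perceptual hashing instead.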
The other thing I'd say is that content moderation isn't just about content; it's often about behavior. You're looking at how an account behaves, and again, that's a different lens. Going forward, the challenge is that content moderation isn't just about leaving a piece of content up or taking a piece of content down. There are a lot of interventions in the middle, like the labeling I mentioned that we've been doing, or putting things behind a warning message that says, click on this first. So in some instances your moderation decision is also visible to the user, and so they're aware of it. I think you can definitely be more transparent about the standards and the rules without necessarily empowering bad actors to understand the details of your enforcement technology. >> I just wanted to underline something you said which I think is really important: the idea of not ever having stable rules. I fundamentally agree with that in a very deep sense, not just because of bad actors, but because of the nature of what speech rules are. This is something we've been arguing about for centuries and are not going to find a right answer to, so something I spend a lot of time thinking about is how we can disagree better, more productively, and more transparently, because we're going to be disagreeing about this forever. It's not just because of bad actors; it's because some of these things are hard, and some of them are fundamentally contestable, but also because culture changes, society changes, speech itself changes.
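The spectrum of interventions Nick describes, from leaving content up, through labels and click-through warnings, to removal, can be sketched as a simple decision function. Everything here is illustrative: the severity score, the thresholds, and the action names are invented for the example and are not any platform's real policy values.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "take down"
    WARN = "place behind a click-through warning"
    LABEL = "attach a contextual label"
    ALLOW = "leave up"

def moderate(severity: float) -> Action:
    """Map a hypothetical severity score in [0, 1] to an intervention.

    Thresholds are purely illustrative: the point is that enforcement
    is a spectrum, not a binary up/down decision.
    """
    if severity >= 0.9:
        return Action.REMOVE
    if severity >= 0.6:
        return Action.WARN
    if severity >= 0.3:
        return Action.LABEL
    return Action.ALLOW

print(moderate(0.95).value)  # take down
print(moderate(0.70).value)  # place behind a click-through warning
print(moderate(0.10).value)  # leave up
```

Note that the middle two actions are the ones visible to the user, which is Nick's transparency point: the intervention itself communicates the decision without revealing how it was detected.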
One of my favorite examples from the past couple of weeks is the Proud Boys hashtag, which two weeks ago was a symbol of violent white supremacy and then literally overnight was reclaimed by gay men posting photos of themselves kissing, making it a symbol of gay love. Those two words completely changed their meaning overnight, so this is just going to be a rapidly evolving process all the time. How do we make that more productive? Because I think the current process we have is so ad hoc, so responsive to public attention and public pressure. The powerful people get attention: they get their decisions reviewed and get remedies, but other people don't. It's whoever can get their name into the press that gets a remedy, and so we need to find processes to make that disagreement productive. >> Thank you. Just building on that, I would love to hear a little bit more about how to best find that context. Not only context in terms of whether something is satire or hateful, or whether somebody is impersonating or legitimate, but also the global context, the context of the reality on the ground, for example, the risk of an actual outbreak of violence, which unfortunately is no longer something Americans should only think about abroad, but closer to home as well. So there is that context, and then there are the methods, and the ability, to actually enforce. I was struck by the example of the Christchurch violence, but also by responses to the announced QAnon ban, where a lot of people reacted quite cynically and said, well, we'll have to see whether Facebook can enforce its own policies. Beyond finding the right context, what makes it so hard? Is it really possible to enforce promised policies, or are there inevitably going to be glitches? Maybe, Evelyn, if you want to start and then hand over to Nick, or are you eager to start, Nick? Evelyn, okay. We'll stick with you.
>> [LAUGHTER] I think this is another really fundamental point, the question of enforcement, because so often these platforms have pretty good policies, sometimes great policies, on paper. You look at them and you go, you know, that is a really comprehensive hate speech policy. But then when it comes to actual enforcement, it just doesn't happen. Twitter has been in the news in the last couple of weeks with this: it has a policy against death wishes, and it drew attention to it in the context of the President's illness, and a lot of people said, hold on, [LAUGHTER] I seem to remember seeing a whole bunch of death wishes in my Twitter timeline. So the question of enforcement is fundamental, because the rules don't mean anything if they aren't actually enforced in practice. I think there are two elements to this. First, we have to accept that perfect enforcement isn't ever going to be possible at the scale of these platforms. Governments never got perfect enforcement right either; speech decisions are difficult, governments never had to govern at this scale, and so much speech was simply never governable before, yet platforms use AI tools that are fundamentally blunt instruments. We have to get to a point where we accept that, but we can't allow the inevitability of error to become an excuse that excuses all error. We need to think about this at a more systemic level: what error rates are we willing to accept? That might be different in the context of some rules than in the context of others. For example, on an issue like incitement to violence, we might really want platforms to achieve an extremely low error rate.
That might be something where we think it is really important that the policies are enforced strictly and well, and we might want platforms to err on the side of over-enforcement of those rules rather than under-enforcement, because the costs of getting it wrong in that case are so much greater than the costs of getting it wrong in a different kind of case, like spam or something else. So I do think it is right that perfect enforcement isn't possible, but that has to be only the start of the conversation, not the end of it.
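Evelyn's trade-off between over- and under-enforcement can be made concrete with a toy calculation. In this hypothetical, a classifier scores each post and the enforcement threshold trades false positives (legitimate speech removed, over-enforcement) against false negatives (violations left up, under-enforcement); the scores and labels below are invented purely for illustration.

```python
# Hypothetical classifier scores paired with ground truth
# (True = the post actually violates the rules).
items = [(0.95, True), (0.80, True), (0.70, False), (0.55, True),
         (0.40, False), (0.30, False), (0.20, True), (0.10, False)]

def error_rates(threshold: float) -> tuple[int, int]:
    """Count errors if everything scoring >= threshold is removed.

    Returns (false_positives, false_negatives):
    legitimate posts removed vs. violating posts left up.
    """
    false_positives = sum(1 for score, bad in items
                          if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in items
                          if score < threshold and bad)
    return false_positives, false_negatives

# A lenient (low) threshold over-enforces; a strict (high) one under-enforces.
print(error_rates(0.25))  # -> (3, 1): 3 legitimate posts removed, 1 violation missed
print(error_rates(0.75))  # -> (0, 2): nothing wrongly removed, 2 violations missed
```

For a rule like incitement to violence, where missed violations are costliest, one might accept the first trade-off; for spam, the second. The threshold choice is exactly the "what error rates are we willing to accept" question, made per rule.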