
The Internet Broke Democracy. To Fix It, Design for Human Rights.

Author Samuel Woolley argues that a slew of new technologies will further degrade political life unless we rein them in.



In 2013, when Samuel Woolley began studying online misinformation as a graduate student at the University of Washington, hardly anyone was worried about the subject. Protests like the Arab Spring and Occupy Wall Street had demonstrated how activists could use online tools to organize for good. But for the most part, major social networks like Facebook and Twitter were still just places to post photos, RSVP to parties, and swap movie recommendations.

The Reality Game: How the Next Wave of Technology Will Break the Truth
By Samuel Woolley
PublicAffairs
$28; 272 pages

Over the next few years, everything changed. Scandals like Gamergate and Cambridge Analytica erupted. Fake news fueled ethnic violence in Myanmar and Sri Lanka. False political ads and misleading memes proliferated across platforms, further entrenching Americans in their political echo chambers. A master troll was elected president. How much worse could it get?

Significantly worse, Woolley argues in his new, ominously subtitled book, The Reality Game: How the Next Wave of Technology Will Break the Truth. Based on years of research and interviews with everyone from Google engineers to Ukrainian hackers, it’s a compelling and terrifying look at the future of political life online. Woolley—now a professor of journalism at the University of Texas at Austin—examines the current state and potential future of a slew of technologies, from political bots to deepfakes. Using the umbrella term “computational propaganda” to encompass the many ways these tools can be misused, Woolley paints a bleak, Black Mirror-esque picture. But he’s careful to point out the many ways threats have been overhyped (virtual reality, after all, has been the next big thing for a decade now). The book also devotes considerable space to solutions, arguing that breaking up the tech giants won’t be enough. Woolley spoke with the Observer about computational propaganda and the need to bake ethics into technology from the start.

What are political bots, and why are they a threat?

They’re profiles on social media that are made to look like real people and engage in political discussion. If one person can spread messages effectively on social media, imagine what 10,000 bots under a single person’s control can do. Bots can create the illusion of popularity for ideas and candidates, and then that illusion will be picked up as real by the platforms. Bots are often built to communicate directly with trending algorithms. It’s not so much that people are being tricked by these fake accounts; it’s that they are picking up on a trend bots created and a technology firm legitimized.

Bots can massively amplify attacks on journalists and marginalized communities, and they can also more effectively trick people who are not digital natives. They’re a very potent political weapon.
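
To see why the trending mechanism Woolley describes is so gameable, here is a minimal sketch (our illustration, not code from the book; the scoring rule and account names are hypothetical). If “trending” simply means that many distinct accounts are using a hashtag, one operator’s botnet produces the same signal as genuine, widespread interest:

```python
# A simplified illustration (ours, not Woolley's): naive trend detection
# that counts distinct accounts per hashtag. Account names and the scoring
# rule are invented, but the weakness is the one described above.
from collections import Counter

def trending_score(posts):
    """Score each hashtag by the number of distinct accounts using it."""
    accounts_per_tag = {}
    for account, hashtag in posts:
        accounts_per_tag.setdefault(hashtag, set()).add(account)
    return Counter({tag: len(accts) for tag, accts in accounts_per_tag.items()})

# Organic activity: 40 real users discussing a mainstream topic.
posts = [(f"user_{i}", "#LocalElection") for i in range(40)]

# Coordinated activity: one operator controlling 200 bot accounts,
# all pushing a fringe hashtag.
posts += [(f"bot_{i}", "#FringeClaim") for i in range(200)]

for tag, score in trending_score(posts).most_common():
    print(tag, score)
# "#FringeClaim" (200) outranks the organic "#LocalElection" (40):
# the manufactured trend wins.
```

A real trending system weighs more signals than this, but the core vulnerability is the same: the inputs are cheap to fabricate at scale, and once a platform surfaces the manufactured trend, real users do the rest.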

What role do bots and other forms of computational propaganda play in Texas specifically? 

During the 2016 election, the Russian Internet Research Agency built pages on Facebook specifically to target Texans. One of them was called Heart of Texas, and it was built as a secessionist page. It’s a bait and switch: The Russians or other actors will create pages with legit content, build a following, then start posting extreme stuff—in this case against Muslims. The fascinating thing about the Russian targeting of Texans in 2016 is it actually resulted in an offline protest, where Texans showed up basically at the behest of Russian agents. Earlier, Governor Greg Abbott had responded on Twitter about the Jade Helm conspiracy theory, and the CIA director later said the governor might’ve emboldened the Russians. So we’ve seen these threats very potently here.

Because Texas is such an important voting state, and because of increasing conversations about Texas turning purple, it’s been a key target of demographically oriented attacks. I think we should expect in 2020 that Texans, especially Latino and African American communities, will be core targets of people spreading propaganda.

Are platforms like Facebook and Twitter irreparably broken?

The problem with Facebook and Twitter is they weren’t designed with democracy and human rights in mind. And they certainly weren’t designed with the potential threat of disinformation and misinformation in mind. What we’re seeing with the major platforms right now is a scrambled attempt to rebuild the plane while the plane is being flown.

There’ve been laudable efforts by these companies to attempt to respond to the threats at hand, but they’re too far down the road. They’ve scaled too quickly and with profit too much in mind to be effective at combating computational propaganda. I think we’ll see a divestment from platforms like Facebook and a move toward WhatsApp, Instagram, and video apps like TikTok. Social media companies risk becoming legacy media as quickly as they became new media, because they’ve failed at addressing online disinformation.

Until we can regulate these companies, what are some shorter-term fixes?

One of the key things I see happening within Facebook, Google, and Twitter is that employees are really leading a charge. I’ve done many interviews with current and former tech employees who tell me their voices aren’t often heard. We should support efforts like Coworker.org, which is attempting to bring labor organizing to social media firms.

We also need universities and other institutions to invest in public interest technologists. There’s a massive brain drain, in which engineers and computer scientists are leaving top universities and going to tech companies because they pay so well. We need to build programs that incentivize public interest technology work in the same way that the Ford Foundation and others created public interest law in the 1950s and ’60s.

Finally, we have to support journalism. A lot of people treat journalism as though it’s broken and needs to be re-created, but it’s already doing a really great job responding to the threat at hand. A big part of the solution to the problem of computational propaganda will come from journalists. Groups like First Draft, Poynter, Nieman Lab, and the Tow Center at Columbia are all leading the charge against misinformation online. It’s great that Google News Lab gave millions to its news initiative, but we need to see more—way more—money going to independent news.

There’s been a lot of talk about the need to break up the social media giants, but you write that there are risks to that approach too.

We’re dealing with monopolies here. There’s no way we can deny that. But I’m fearful that when politicians get their acts together and start legislating, they’ll break up the companies without holding them accountable first. I hope that before any antitrust cases come about, there’ll be repercussions, serious monetary compensation, and a handing over of data; otherwise the companies will get broken up and divest themselves of responsibility.

One of your book’s recurring themes is that technology is shaped by the people behind it. You argue that we must build human rights into technology. What will that look like?

Throughout all my research, the thing that’s shown up again and again is that there are always people behind technologies. People encode their own values into bots, AI systems, and algorithms. That’s where the work of people like Safiya Noble, in Algorithms of Oppression, comes in: she shows how these technologies can absolutely be built to be racist. If you train a machine-learning algorithm using tagging from only white men, then it’s very likely that it will be biased toward white men and will leave out people of color and women.
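
That failure mode requires no one to intend it. Here is a minimal, fully synthetic sketch (ours, not from the book or from Noble’s work; the groups, features, and numbers are all invented for illustration, and it assumes NumPy and scikit-learn are available). When one group dominates the labeled training pool, a simple classifier fits that group’s patterns and degrades on everyone else:

```python
# A toy, fully synthetic demonstration (ours, not from the book):
# a classifier trained on a pool dominated by one group learns that
# group's baseline and underperforms on an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic samples for one group. Labels split roughly 50/50
    around the group's own baseline (2 * shift), so a fair model
    would need to learn a different threshold for each group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the labeled training pool; group B barely appears.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on balanced held-out samples: the single learned threshold
# sits near group A's baseline, so group B's labels look almost
# random to the model.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("accuracy, dominant group:        ", model.score(Xa_test, ya_test))
print("accuracy, underrepresented group:", model.score(Xb_test, yb_test))
```

On a typical run, accuracy on the dominant group is near-perfect while accuracy on the underrepresented group is close to a coin flip; the skew in the training data does all the damage, with no malicious line of code anywhere.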

For the next wave of computer scientists, I’d like to see training that gears people toward designing for democracy. The Zuckerbergs and Dorseys of the world espoused the idea that their tools would be saviors of democracy because they’d allow for open communication, but they didn’t consider how to promote equity and human rights. And so with Jane McGonigal, who’s a game designer and author, we designed something called the Ethical Operating System. It’s a gamified series of guidelines and prompts to make technology designers think about the problems that could come up with technology before they build it, as they build it, and as they launch it. 

Overall, the book is pretty dystopian, but you also write that all is not lost. What are some reasons for hope?

When I started this work in 2013, there wasn’t a conversation. Now people all around the world are talking about this. It wasn’t until 2016 that the social media companies started to pay attention, and now they’re paying very close attention because they’ve realized this is affecting their bottom line. Their investors are angry, and the world is angry. 

There’s also been regulation outside the United States—look to places like Germany for how we might think about responding. Several U.S. politicians are working to build sensible regulation, people like senators Mark Warner and Dianne Feinstein. Even if these laws aren’t being passed yet, we need to do the hard work of building them now. States like California and Washington are also moving toward banning the use of bots for malicious purposes and attempting to curb the effects of disinformation. So the truth is that society is fighting this problem from multiple angles, and all sorts of people are getting involved in this battle. And we’re getting a lot better at it.

This interview has been edited for length and clarity.
