Social Media Makes Us Soldiers in the War Against Ourselves


Over the past three years, America’s information ecosystem has proven easy pickings for anyone with a fistful of VPN connections and a sweatshop of kids playing World of Trollcraft. Whatever precise effects Russian interference had on the 2016 election, it finished off both social media’s innocence and traditional media’s authority.

But Americans, as of now, have nowhere else to turn. The habits of the library and the newsstand, to say nothing of pre-digital social life, are lost to us. Instead, we’re stalled in the data smog that hangs over social media and search engines. Sometimes we confront trolls, bots, phish, spam, and malware head-on; sometimes we meet trollspeak in memes parroted by real people. But the sanctity of our reason is routinely violated online.

In rolling revelations all winter, Facebook and other tech companies admitted that potentially hundreds of millions of users had been tricked by data miners and harassed by trolls, including legions at the Internet Research Agency, the Russian outfit indicted by the Justice Department in February. That sounds like a cause for condolences. But trolled people troll people. Many victims turn around and enlist as foot soldiers, passing on their cognitive injuries to others. “Computational propaganda,” as the human-machine hybrid campaigns are known, has been described as a way of “hacking people.”

This damage to our brains is overdetermined. First, the crime is in the software. As WIRED’s own Adam Rogers predicted in 2015, “Google’s search algorithm”—with zero help from bad actors—“could steal the presidency.” But digitization has also simply overwhelmed us. The journalist Craig Silverman put it this way: “Our human faculties for sense-making, and evaluating and validating information, are being challenged and in some ways destroyed.” And the information war has seasoned generals, among them Yevgeny Prigozhin (a restaurateur, b. 1961 in Leningrad) and Mikhail Bystrov (a cop, said to be in his late fifties). These two men ran the IRA and deftly exploited America’s mental vulnerabilities, flammable culture, and opportunistic software.

The weapons are hybrids too. According to reports in March, Cambridge Analytica, the data firm employed by the Trump campaign, launched disinformation scripts and bulk provokatsiya. The IRA did the same, but it also conscripted real people. Some of these people are partisans or freestyling trolls. But a smaller group willingly subjugates itself to specific infowar efforts. In January, a woman in South Carolina—a cheerful-looking phytocannabinoid seller in her mid-sixties—seems to have mobilized her #MAGA-festooned Twitter account to promote a Nunes-supporting meme: “Release the memo.” “Make this trend,” she implored. Trend it did.

The term computational propaganda was coined at the Oxford Internet Institute at Balliol College, Oxford. (Balliol was founded in 1263, the year King James I of Aragon attacked one of his era’s significant information channels by censoring Hebrew writing.) It describes the mixing of algorithms, automation, and human curation to manipulate perceptions, affect cognition, and influence behavior.

That human curation is key. People can whitewash buggy botspeak by giving it a human sheen in a retweet. Curators can also identify the cultural flash points—the NFL, Colin Kaepernick, the memo—that fire people up, so botnets can ratchet up the velocity of the most incendiary memes. The writer Jamelle Bouie points out that, in the US, these “flash points” often entail racism. It takes an American idiom and id to properly troll the electorate.
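
To make that amplification mechanic concrete, here is a minimal sketch, in Python, of how a researcher might score a meme’s “velocity” and flag bot-like repetition. Everything in it is hypothetical: the Share record, the function names, and the per-user threshold of 20 are illustrative assumptions of mine, not a method from the Oxford group or anyone quoted here.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Share:
    user: str
    meme: str            # e.g. "#releasethememo"
    timestamp: datetime

def meme_velocity(shares: list[Share], meme: str) -> float:
    """Shares per hour for one meme: the signal a botnet tries to inflate."""
    times = sorted(s.timestamp for s in shares if s.meme == meme)
    if len(times) < 2:
        return 0.0
    span_hours = (times[-1] - times[0]).total_seconds() / 3600
    return len(times) / max(span_hours, 1.0)   # floor the window at one hour

def botlike_fraction(shares: list[Share], meme: str, threshold: int = 20) -> float:
    """Fraction of shares from accounts repeating the same meme at implausibly
    high volume, a crude stand-in for automated amplification."""
    counts = Counter(s.user for s in shares if s.meme == meme)
    total = sum(counts.values())
    heavy = sum(c for c in counts.values() if c >= threshold)
    return heavy / total if total else 0.0
```

On this toy model, a meme pushed hard by a few hyperactive accounts shows both high velocity and a high bot-like fraction; the human curation described above is exactly what launders that signature into something that looks organic.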

Samantha Bradshaw, at the Oxford Internet Institute, recently documented the ways that 28 nations have used social media to shape opinion. In every case, the campaigns aimed to ape the style and habits of actual activists, and they caught on to the degree that they seemed human. The content didn’t need to be accurate or fair to be effective; it just needed to seem human, and humans with beating hearts are uniquely able to dispel the whiff of the uncanny from an automated script. Humans, of course, are also indispensable when bodies are needed to show up in physical space or pose for photos.

As Bradshaw told the British parliament in testimony about hybrid information warfare, researchers lack the corporate datasets or government subpoena power to identify the exact humans involved in these campaigns. But the IRA indictments pointed the way to some Americans implicated in the Kremlin-sponsored infowar in 2016. When CNN approached two such people, they had contrasting responses.

“What would you think? A guy calls you and you talk to him and you build up a rapport over a period of time,” said Harry Miller, who was reportedly and unwittingly paid by some of the Russian indictees to cage Hillary Clinton in effigy. “They had that beautiful website.” By contrast, Florine Gruen Goldfarb, who mobilized Trumpites to demonstrate at an IRA-organized event, refused to accept that she’d been manipulated. “I don’t go with the Russians. C’mon, give me a break,” she said.

The fact that the campaigns involve masquerade, deception, and anthropomorphism—the disguising of robots as people—is part of why the IRA is charged with fraud and not acts of war. It’s also why Americans are disinclined to see the internet and the nation as under siege. If we had swollen glands and bloody vomit, we’d accept a diagnosis of anthrax poisoning, but no one likes to see herself as cognitively vulnerable. Once, to my shame, I circulated some bot-amplified lies about antifa. (The meme was “Antifa is just as bad as neo-Nazism.”) When caught out, I started to justify myself; fortunately, seeing disinfo as aerosolized anthrax—equally hard to detect—helped restore my confidence. I corrected my mistake. My immune system rallied. “No one likes to be told they’ve been duped,” Bradshaw told me by email. But we must be “more aware of the ways in which bad actors try to infiltrate our networks to manipulate our thoughts and actions.”

To determine how we got here, we might not need to perseverate on the exotic stuff: the Kremlin, troll farms, botnets. Perhaps the fault is in our ancient, all-too-human bodies. In March, an MIT study of false news made it clear that bots have equanimity when it comes to contested stories, while humans decisively prefer to spread lies over truth. In particular, we appear to like and share the lies that shock and disgust, arousing our bodies in druglike ways.
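
The shape of the MIT finding can be illustrated, though not reproduced, with invented numbers. In the sketch below, the cascade records are fabricated for illustration only; they merely echo the direction of the published result (false stories reaching more people, faster) and are not the study’s data.

```python
from statistics import median

# Invented cascade records: (veracity, unique_users_reached, hours_to_peak).
# These numbers are made up for illustration; they are not the MIT data.
cascades = [
    ("false", 2200, 10), ("false", 1700, 12), ("false", 900, 14),
    ("true",   520, 48), ("true",   450, 52), ("true",  300, 57),
]

for label in ("true", "false"):
    sizes = [n for v, n, _ in cascades if v == label]
    hours = [h for v, _, h in cascades if v == label]
    print(f"{label:>5}: median reach {median(sizes)} users, "
          f"median {median(hours)} hours to peak")
```

Even on fake numbers, the asymmetry the study reported is easy to see: the falsehoods travel farther and peak sooner, with no bot required to explain it.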

If so, there’s no way around this problem but through it. Of course, propaganda should be marked, regulated, and debunked. But at the same time, we need to understand our fragility as animals. Poor, mortal creatures of living-dying flesh that we are, we crave sensation. Our most ancient proclivities, even more than the robots, may be our undoing.


Virginia Heffernan (@page88) is a contributor to WIRED. She wrote about how we see the world now in issue 26.04.

This article appears in the May issue.
