Reprinted with permission from AlterNet.
Friday’s federal grand jury indictment of 12 Russian intelligence officers for meddling in 2016’s presidential election underscores how easy it was to use American internet tools and technology to wage a propaganda war to disrupt U.S. elections.
But a curious dichotomy has appeared in the American political world when it comes to preventing a repeat in 2018's elections. While there has been much ado, and action, from officials to prevent hacking of the computer systems that comprise the voting process, one of Russia's 2016 tactics, there has been a corresponding absence of federal action on proactive efforts to stop online propaganda, which was Russia's other major focus.
Both activities, hacking and propaganda, were features of Russian meddling in 2016, as the Senate Intelligence Committee noted in a recent bipartisan report. Russia's intelligence services tried to help Donald Trump's campaign by hacking into local voting systems and by posting incendiary propaganda on social media, based in part on stolen Democratic Party emails and documents. Russia's goal was to undermine public confidence in the election, even though there's no evidence that voter rolls or counts were altered.
It remains to be seen if Friday's indictment, drafted by Special Counsel Robert S. Mueller, will end the absence of a federal response to Russian propagandizing. But so far, cybersecurity has been the priority, even though there has been no evidence yet of Russian attacks on American election infrastructure in 2018. In contrast, there has been some evidence this year of pro-Trump propagandizing on social media tied to Russia-connected websites.
Testing 2018’s Propaganda Waters?
Mueller’s 29-page indictment is filled with technical details of hacking, stealing emails and campaign documents, as well as creating websites and online personas to confuse or provoke Americans based on their partisan leanings. Mueller noted Russian intelligence agents created websites, such as ActBlues.com, an intentional misspelling of a major Democratic funding hub, and also fabricated social media pages, personas and hashtags to bait U.S. voters.
Those same strategies can be seen in two recent online attacks against Trump critics that experts said were Russian interventions.
The first was an online robotic, or bot, attack via Twitter on organizers of a nationwide protest that coalesced around the hashtag #FamiliesBelongTogether, documented on Medium.com by Tim Chambers, a technologist and writer. "National gatherings with over 300,000 RSVPs were planning on using this hashtag," Chambers wrote, referring to a legitimate grassroots rejection of Trump's family-separating immigration policies.
Trump allies retweeted misspellings of the protest hashtag, “thus spreading the decoy hashtag and diluting the real hashtag’s social reach,” Chambers said. “At its peak, I found 22,000 tweets a day were using the wrong hashtag… This was no accident. This was manufactured. One of our scans showed that all of the decoy hashtag tweets in just the last 72 hours had a potential reach of well over 8.9 million people.”
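The arithmetic behind Chambers's "potential reach" figure is simple: flag near-miss variants of the real hashtag, then sum the follower counts of the accounts tweeting them. The sketch below is a hypothetical illustration of that approach, not Chambers's actual tooling; it uses an edit-similarity check from Python's standard library and invented sample data.

```python
from difflib import SequenceMatcher

TARGET = "familiesbelongtogether"

# Invented sample data: (hashtag as tweeted, follower count of the tweeting account).
tweets = [
    ("familiesbelongtogether", 1200),    # the real hashtag
    ("famillesbelongtogether", 85000),   # decoy: one letter swapped
    ("familiesbelongtogther", 42000),    # decoy: one letter dropped
    ("walkaway", 300),                   # unrelated hashtag
]

def is_decoy(tag, target=TARGET, threshold=0.9):
    """Flag tags that are nearly, but not exactly, the target hashtag."""
    if tag == target:
        return False
    return SequenceMatcher(None, tag, target).ratio() >= threshold

# "Potential reach" here is simply the summed follower counts of accounts
# tweeting decoys, the same rough measure Chambers's 8.9 million figure implies.
decoy_reach = sum(followers for tag, followers in tweets if is_decoy(tag))
print(f"Potential reach of decoy-hashtag tweets: {decoy_reach:,}")  # 127,000
```

As the next paragraph notes, a figure built this way measures how many people could have seen the decoys, not how many were actually misled.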
Chambers's report may be more significant for what it reveals about the deregulated state of online political communications than for its impact on anti-Trump protests. There is a difference between "potential" and actual reach, that is, how influential the bots actually were. Presumably, anyone who wanted to attend such a protest had ample ways to stay informed, including warnings about online disinformation efforts.
But the bots targeting #FamiliesBelongTogether were not the only campaign aimed at Trump critics. In June, a campaign of purported ex-Democrats, who claimed to have left the party because other Democrats were vocally lambasting Trump's White House staff, appeared around the hashtag #WalkAway, according to the Alliance for Securing Democracy, a bipartisan project funded by the German Marshall Fund whose purpose is "tracking Russian influence operations on Twitter."
“The #walkaway movement, a campaign highlighting alleged discord among the left, got a boost this week from accounts linked to Russian influence operations,” its website said on a July 10 update. “The first use of the #walkaway hashtag from accounts monitored on Hamilton 68 [a group affiliated with the Alliance] was noted in early June (a few weeks after the grassroots campaign began), but engagement remained relatively low throughout the month. Activity spiked on July 2, when Hamilton 68 noted 73 unique tweets using #walkaway, and roughly another 50 using related campaign hashtags (e.g. #walkawaymovement).”
It concluded, “The late engagement suggests Kremlin-oriented accounts are trailers rather than leaders of the campaign, but the high-level of current engagement indicates an effort to astroturf support for the movement and hijack the narrative.”
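The pattern the Alliance describes, weeks of low baseline engagement followed by a sudden jump, is easy to flag once daily hashtag counts are tallied. The following is a minimal sketch under invented assumptions: only the July 2 figure of 73 tweets comes from the Alliance's update, and the other daily counts and the threshold are made up for illustration.

```python
# Hypothetical daily counts of monitored-account tweets using #walkaway;
# only the 2018-07-02 figure (73) comes from the Alliance's July 10 update.
daily_counts = {
    "2018-06-28": 4,
    "2018-06-29": 6,
    "2018-06-30": 5,
    "2018-07-01": 7,
    "2018-07-02": 73,
}

def flag_spikes(counts, factor=3.0):
    """Flag days whose count exceeds `factor` times the mean of all prior days."""
    spikes, history = [], []
    for day, n in sorted(counts.items()):
        if history and n > factor * (sum(history) / len(history)):
            spikes.append(day)
        history.append(n)
    return spikes

print(flag_spikes(daily_counts))  # ['2018-07-02']
```

A spike flagged this way shows only that activity jumped; as the Alliance's "trailers rather than leaders" caveat suggests, it cannot by itself say who drove the jump.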
Whether this campaign has any impact in Democratic circles, which is different from its propaganda value on right-wing websites, is an open question. As Ann Brennan, a “nurse, mom, longtime Philly dweller, [and] dog lover,” wrote on the #WalkingAway page on Twitter, “The #walkingaway crew? They are not former Democrats. They betray themselves with their rhetoric. They aren’t changing minds either. No one is buying their corny transparent shtick.”
If nothing else, these bot attacks are another tool in a deepening online political communication toolbox, a trend that began years ago as online publishing became more accessible and affordable to the public. With that communication revolution came more opportunities and pathways for political propaganda.
"Digital publishing tools have dramatically reduced the costs of producing news, and as a result a large number of new outlets have flourished," said a March report from the William and Flora Hewlett Foundation surveying the academic research on social media and political polarization. "The content they produce ranges from high-quality investigative journalism to information that is completely false and misleading, in some cases sponsored by state actors and artificially amplified by bots and other automated accounts."
“And, even more complex from a research perspective, there is a wide gray area between these two extremes, which includes clickbait stories, outlets promoting conspiracy theories, hyperpartisan sites, and websites whose business models rely on plagiarizing mainstream media stories,” it continued. “These sites often receive traffic volumes higher than traditional news sites, with social media being an important source of traffic.”
But even as there’s growing awareness about how online propaganda works, the federal response to propaganda has been faint.
Congress this April appropriated $380 million to harden the security of election computer systems. (The Senate Rules Committee held a hearing this week on those efforts.) But Congress took no steps that might rein in online propaganda. The Senate Intelligence Committee's recent bipartisan report on Russian meddling said it was stymied from looking into the propaganda's impact.
“The Committee notes that the ICA [Intelligence Community Assessment] does not comment on the potential effectiveness of this propaganda campaign, because the U.S. Intelligence Community makes no assessments on U.S. domestic political processes,” its report said.
Meanwhile, the Federal Election Commission, which held June hearings on online political ad disclosure requirements, is not poised to act until after 2018’s midterms. Nobody is suggesting censorship is warranted, but even modest disclosure faces an uphill fight at the FEC. (Moreover, what’s contemplated wouldn’t address the social media fabrications described in Mueller’s latest indictment or the propaganda tracked by investigators such as the Alliance for Securing Democracy.)
Meaningful Checks and Balances?
“The big fight at the FEC is how much disclaimer information needs to be included on the face of the ad, and whether there are any ads that could include an alternative disclaimer: the icon that you click on to get the full disclaimer information. That seems to be where the dispute is—where to draw those lines and how broad the exceptions should be,” said Brendan Fischer, Campaign Legal Center director of federal reform.
What he means by disclaimer information is revealing who is really behind political ads. At the FEC's June hearings, advertising industry lobbyists said all that was needed was an icon identifying an ad as political, which would not reveal much. Meanwhile, the FEC's purview is very narrow. A 2002 federal campaign reform law exempted all online advertising from regulation. Since then, the Supreme Court has said express advocacy, meaning political ads that explicitly urge a vote for or against candidates, can be regulated. In short, whatever FEC action might be coming will apply only to a fraction of online political communications.
The FEC isn’t expected to issue any new rules before the midterms. But it is likely to do something before the next presidential election gets underway. Public Citizen’s Craig Holman, who has lobbied for better campaign disclosure for years, finds that prospect encouraging. Any federal action, as opposed to Silicon Valley regulating itself, would be more wide-ranging and impactful, he said.
“When it comes to the FEC, I have some hope,” Holman said. “Because the Republicans on the FEC have historically, since 2006, just shot down just any [proposed] regulation of foreign influence and disclosure of money spent on the Internet. They’ve shot it down right and left, and just would glare at the Democratic commissioners when they would be proposing some sort of disclaimer requirements on the Internet.”
But such blanket opposition has now backfired, Holman said, and the FEC’s GOP-appointed commissioners know it.
“They only started realizing they really screwed up when it became evident that Russians used that dark money loophole that Republicans created to influence the elections,” he said. “They don’t like Russians meddling in our elections. They won’t go so far as to say that there was a coordinated effort between the Trump campaign and Russia, or the Trump administration and Russia, to affect the elections. But they don’t like foreign interference. They switched entirely, saying, ‘Okay, I guess we do need something on the Internet to try to uncover this foreign influence.’”
But between now and whenever the FEC issues new disclosure rules, it is an open question whether there will be a federal response to the latest instances of Russian propaganda aimed at Trump's critics. On Twitter, meanwhile, the mudslinging continues.
"Yes I was a liberal, but Liberals showed me that they are not truthful, so i left liberal land, truth always wins when anything is concerned," tweeted William Drago, who lists "truth" as his ID, making one wonder if the account is a Russian-fabricated bot.
“This fake hashtag is just funny. Keep it up Sergei,” replied Elizabeth G., who appears to be a real person from Key West, Florida.
“#walkaway people on both sides should be ashamed of themselves, but most are too busy pointing fingers at the other side,” tweeted Sakeenah Ayesaha. “Where’s an island for the people in America who just want to live happy lives without all the hate?”
Indeed. And where are the federal officials who represent them? Until Mueller’s latest indictment, they were focusing on protecting the voter registration and ballot-casting computers. They weren’t interested in addressing Russian propagandizing, even with evidence that it is continuing in 2018.
This article was produced by Voting Booth, a project of the Independent Media Institute.
Steven Rosenfeld is a senior writing fellow of the Independent Media Institute, where he covers national political issues. He is the author of several books on elections, most recently Democracy Betrayed: How Superdelegates, Redistricting, Party Insiders, and the Electoral College Rigged the 2016 Election (March 2018, Hot Books).