Weapons of Mass Persuasion

Online advertising has allowed governments and political entities to weaponize the marketplace of ideas. Is there anything we can do about it?

The shadow war of influence waged by political actors on social media appears to grow ever more complex with time.

In Russia and China, large state-sponsored agencies manage social media manipulation campaigns, developing them as components in actual combat doctrine. Swarms of bots manipulate conversation and disrupt activists in a growing number of states including Syria, Mexico, Turkey, and the United States, to name a few. "Growth hacking" tactics borrowed from the startup world are helping to promote the spread of an ISIS-developed Android app. The Israeli military is active on 30 social media platforms in six different languages, and in April, the British army created a brigade of "Facebook warriors" responsible for "non-lethal warfare," drawing together "a host of existing and developing capabilities essential to meet the challenges of modern conflict and warfare."


In April 2014, the Associated Press revealed a sham social platform launched with the intention of destabilizing the Castro regime through subtle algorithmic manipulation. These techniques complement an already robust trade in for-hire Wikipedia manipulation and fake Twitter followings.

We can read these developments as the response to an earlier wave of grassroots movements that crashed onto the scene with an aura of invincibility and inevitability in the early 2010s. As debate continues over the long-term, real-world efficacy of movements like Occupy Wall Street (2011) and the Arab Spring (2010), what is not in dispute is the power of these emergent waves of activity to quickly direct mass attention and command the national and global agenda.

For governments and other political entities with an interest in shaping social discussion, these vast upswells might be seen as a threat, capable of fomenting political dissent and organizing massive demonstrations. In contrast to an earlier era, in which a few select gatekeepers in newspaper, radio, and television were the linchpins in shaping public perception, online social platforms present an ever more decentralized landscape of influence that is radically more difficult to predict and shape. It's enough to give any rootin' tootin' Edward Bernays-loving propagandist heartburn.



But the growing sophistication of techniques deployed to manipulate social media indicates that, far from surrendering these spaces, governments and political actors worldwide have redoubled their efforts to gain control over online discussion. Just as the software and hardware of the internet have been militarized by the imperatives of a mostly secret "cyberwar," so too are online social spaces being weaponized in new and mostly hidden ways.

These operations take place in the dark, difficult for most web users to detect and even harder to counter. A new set of tools may be needed to help citizens protect themselves. In contrast to the discipline of computer security, which focuses on the integrity of machines, these tools would focus on developing a kind of "cognitive" security, aimed at maintaining the integrity of the online social environment.

It's true that there is a strong parallel between this new era of social media manipulation and the deception and psychological warfare of past decades. Whether it is leafleting, creating phony organizations, or slowing organizational processes, the practice of striking at the ideas and social behavior of opponents is a strategic approach with a lineage both distinguished and infamous.

However, what we know about the current state of research and the flow of funds from governments to researchers suggests that the coming generation of techniques will be distinct from its predecessors in important ways.


Militaries and other actors have refined, and will continue to refine, traditional public relations methods with the latest findings in quantitative social science. The result will be a form of tactical persuasion that is unique in its quantitative approach, its potential precision, and its subtlety.

Interestingly, it is the financial backbone of the internet—advertising—that has made the modern web such a fertile place for these new techniques to emerge.

Online advertising introduced a need to track and quantify user behavior. Unlike earlier forms of advertising, in print or otherwise, which were less able to target potential customers and monitor consumer responses, internet advertising has created a rich, persistent, and growing cache of data about users. While this has been a boon to researchers seeking to learn about the dynamics of large social groups, it also provides would-be influencers with an ideal laboratory to test and then precisely evaluate the effectiveness of their techniques.

Online advertising also produced a need to segment and identify audiences that could then be targeted and promoted to. It produced a financial rationale for many of the building blocks of social platforms: the "friend," the "follower," the "user profile," and so on. This data around identity also provides the raw material for messaging techniques that are tailored for ever more specific audiences. Rather than formal messages piped over radio airwaves to all, influence operations can become targeted and customized for particular subgroups, or even specific clusters of users.


Finally, online advertising drove the need to acquire large audiences of users quickly. This has produced social platforms with highly permeable environments where people can create a robust presence with little to no effort. It has also resulted in platforms that generally feature little, if any, verification. It can often be difficult to tell when a given user is real, or constructed with an agenda in mind. This opens the door to coordinated efforts that create swarms of false identities, or a few realistic ones that act as effective sock puppets for a broader campaign.

In 2011, the leak of the HBGary e-mails revealed both a government solicitation and a proposal for exactly this kind of "persona management" sockpuppeting. Where user behavior is tied to broader systems of content recommendation (as in trending topics or generated newsfeeds) this new generation of techniques can shape a conversational environment in ways that are difficult for users to detect.

Short of entirely altering the financial infrastructure of the internet, what can and should be done about the development of new weapons of mass persuasion? Any attempt to grapple with this issue confronts the important question of how these efforts by militaries should be viewed in the context of the vast ecosystem of influence that users of the social web already face.

Does an attempt to shape voting around political candidates through search engine manipulation feel more or less acceptable than the manipulation of search results around commercial products? Is the use of swarms of bots to disrupt political coordination around hashtags on Twitter more or less acceptable than, as some researchers have pointed out, the influence that platforms exert just through their existing design?


These are relevant questions in large part because they define who should be the responsible party in policing the arenas of online discussion, and the permissible tools that parties bring to that marketplace of ideas. The question of what to do about a more quantitative, more precise, and more invisible form of persuasion raises the broader question of the kind of massively participatory conversational ecosystem we want to exist online.

There is good reason not to place the development of these policies in the hands of perhaps the most obvious guardians of this space, the platforms themselves. For one, platforms, willingly or unwillingly, are susceptible to the outside influence of military and government actors. For that reason, platforms operating on their own will be unreliable stewards and architects of this kind of policy.


Nor would they likely take this role. As Tarleton Gillespie and others have observed, the long-standing rhetorical and legal position taken by user-generated platforms of all stripes has been to play the role of a purely neutral facilitator. Wading into the role of refereeing the tools of influence available to users would reverse this direction significantly and raise questions about the other responsibilities these companies may have for the content flowing across the online spaces they govern. Should a platform ban the spreading of false or misleading information? Would policing the methods of influence also make platforms responsible for the truth or accuracy of any given piece of content? How would they enforce such rules, in any case?


To date, journalists and leakers have provided a human layer of protection by exposing these campaigns to the public when they are discovered. But in the future, distributed tools may become necessary to help users navigate an ever more militarized, quantified, and sophisticated universe of behavioral nudges.

Collective blocking systems on Twitter, now used to blunt the influence of online harassers, may become potent shields for logging and neutralizing military sock puppetry. One might imagine a "radar"-style system that constantly monitors the content shared by one's social group online, scanning and quantifying the sources of information that are proving to be most influential. The unexpected emergence of an untrusted or clearly biased source may be cause for greater scrutiny on the part of the user, sort of like a more paranoid ThinkUp.
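To make the idea concrete, here is a minimal sketch in Python of what such a "radar" might compute: it tallies which domains the accounts in a user's network are linking to and flags any untrusted source whose share of the conversation suddenly spikes against a historical baseline. The post format, the trust list, and the spike threshold are all assumptions made for illustration, not features of any existing tool.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative sketch only: a "radar" that watches which sources dominate a
# user's feed and flags sudden newcomers. The post format (dicts with a
# "links" key), the trust list, and the threshold are assumptions.

TRUSTED = {"example-news.org"}   # hypothetical allow-list maintained by the user
SPIKE_RATIO = 3.0                # flag a source whose share at least triples

def domain_shares(posts):
    """Compute each linked domain's share of all links in a batch of posts."""
    domains = [
        urlparse(url).netloc.lower().removeprefix("www.")
        for post in posts
        for url in post.get("links", [])
    ]
    counts = Counter(domains)
    total = sum(counts.values()) or 1
    return {domain: n / total for domain, n in counts.items()}

def flag_sources(baseline_posts, recent_posts):
    """Return untrusted domains whose share jumped relative to the baseline."""
    baseline = domain_shares(baseline_posts)
    recent = domain_shares(recent_posts)
    flagged = []
    for domain, share in recent.items():
        prior = baseline.get(domain, 0.0)
        spiked = prior == 0.0 or share / prior >= SPIKE_RATIO
        if domain not in TRUSTED and spiked:
            flagged.append((domain, prior, share))
    return sorted(flagged, key=lambda item: item[2], reverse=True)

# Usage with made-up feed data:
# old = [{"links": ["https://example-news.org/a"]}] * 20
# new = old[:10] + [{"links": ["https://unknown-site.biz/b"]}] * 10
# print(flag_sources(old, new))
```

The interesting design question such a tool raises is where the baseline and the trust list come from: maintained by the individual user, crowdsourced across a blocking collective, or published by a third party.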

Another approach might be a user-facing tool to assess the algorithmic vulnerabilities of social media platforms. Such a tool would run a preset battery of test search queries or actions on a given platform, evaluate the results, and inform the user of how the platform might be manipulated to show the content desired by another user. Groups of Facebook users, for instance, could run structured sets of "likes" on each other's content in order to better identify how coordinated efforts might actively shape the News Feed they end up seeing.
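A rough sketch of such a probe, again under heavy assumptions, might look like the following: the user supplies a `search(query)` callable (through a platform's public API or a scripted session), the tool snapshots the top results for a fixed battery of queries before and after a coordinated burst of activity, and it reports how far each query's results drifted. The query battery, the callable, and the drift metric are all illustrative, not drawn from any particular platform.

```python
# Illustrative sketch of an "algorithmic vulnerability" probe. It assumes the
# caller can supply a search(query) -> list-of-result-ids function; the query
# battery and the drift metric below are assumptions made for illustration.

TEST_QUERIES = ["election", "vaccine", "protest"]   # hypothetical battery
TOP_K = 10

def jaccard(a, b):
    """Overlap between two result sets (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def run_battery(search):
    """Snapshot the top results for every query in the battery."""
    return {query: search(query)[:TOP_K] for query in TEST_QUERIES}

def compare_runs(before, after):
    """Report how far each query's results drifted between two snapshots."""
    return {query: 1.0 - jaccard(before[query], after.get(query, []))
            for query in before}

# Usage sketch: snapshot results, let a coordinated burst of "likes" or shares
# play out, snapshot again, and inspect which queries moved the most.
# before = run_battery(my_search_fn)
# after = run_battery(my_search_fn)
# print(compare_runs(before, after))
```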

Granting users more effective means of detecting and countering these efforts will only be the beginning. Better computer security spawns better exploits. Advertising interprets blocking as a threat and routes around it. The same may apply here: systems that give users more control over the influence environment around them will confront a rising tide of sophistication built to resist them. In that sense, the new battleground of influence may lead to an ever-expanding arms race, not just between persuasive political powers, but between them and the users of the world's social networks.

All Fronts is a series about technology and forever war. Follow along here.

Tim Hwang leads the Intelligence and Autonomy Project at Data & Society, which connects the dots between robots, algorithms and automation. He is also co-author of The Container Guide, a field guide to the integrated logistics industry. He is @timhwang and at timhwang.org.

Top: Army National Guard Sgt. Rebecca Pilmore talks to her team as driver Pfc. Lucas Graham (right) maneuvers through simulated convoy training. Photo: US Army