How Secrecy Fuels Facebook Paranoia

https://www.nytimes.com/2019/01/16/magazine/facebook-election-analytics.html


In December of last year, the Senate Intelligence Committee released two reports it had commissioned concerning Russia’s efforts to influence the 2016 election. The outline of the interference effort has always been easy to make out, but key questions about its significance — Could Russia really affect voter sentiment by posting? Did trolls really lower voter turnout in key states? — are still, in large part, matters for speculation. And the committee’s findings did little to change that.

On the subject of Facebook, for example, the reports analyzed new internal data on the campaign efforts of the Internet Research Agency, Russia’s “troll farm,” and offered some examples of its work. We know that there were more than 76 million “engagements” with content from the Russian group — likes, comments, shares and clicks — but it is not currently possible to know how many came from real people. “Since Facebook did not provide data about any sock-puppet accounts involved in the distribution of the content or the existence of ‘fake Likes’ from these accounts,” one report says, “we are operating under the assumption that this engagement was from real people and that this content was pushed into the News Feeds of their Friends as well.”

Those are two enormous leaps. We do not currently know for sure whether these engagements were carried out by humans or bots or a mixture of the two, so the researchers were left to assume the worst. It’s worth scanning other recent Facebook narratives for similar assumptions. Did Facebook cause the gilets jaunes (“yellow vest”) protests in France? Is the movement “a beast born almost entirely from Facebook,” as a recent BuzzFeed piece suggested? Or are the protesters just using Facebook as they sustain a long tradition of civil unrest in France?

The latter theory is obscured by the basic difficulty of discerning why an individual feels a certain way, much less a diverse nation of millions. The former, however, feels as though it has been hidden from us, because Facebook should, in theory, be able to shed light on this question. Its users live in a state of full surveillance, with both their behavior and everything around them subject to near-total tracking. The company is at least capable of knowing how a piece of content found its way from one user to thousands, or how a gilets jaunes group functions on its platform. Far more than any outside researchers, Facebook is capable of answering questions about the Internet Research Agency in 2016: For example, how many of the accounts that interacted with the group’s posts went on to interact with other political content? Was the audience for these posts even real in the first place, or part of the operation itself? (Both reports criticized tech companies for doing the bare minimum to assist in the committee’s efforts.)

The biggest internet platforms are businesses built on asymmetric information. They know far more about their advertising, labor and commerce marketplaces than do any of the parties participating in them. We can guess, but can’t know, why we were shown a friend’s Facebook post about a divorce, instead of another’s about a child’s birth. We can theorize, but won’t be told, why YouTube thinks we want to see a right-wing polemic about Islam in Europe after watching a video about travel destinations in France. Everything that takes place within the platform kingdoms is enabled by systems we’re told must be kept private in order to function. We’re living in worlds governed by trade secrets. No wonder they’re making us all paranoid.

The original sin of our current tech hegemons is that, in order for them to work, the rest of the world can’t know how they work. How does Google trawl the web and produce search results? That’s a secret that helped it become the dominant search engine and that sustains its business model. How does Facebook choose what comes next in your News Feed? How do YouTube recommendations work? Too much transparency, the defense goes, would expose these systems to reverse-engineering or abuse by bad actors.

The technologist Anil Dash recently described some of the internet’s biggest platform businesses as “fake markets.” These are businesses that purport to be marketplaces, making money by connecting parties — people who want rides with drivers, advertisers with eyeballs — but are not actually markets in the strict sense of the word. They’re centrally and often assertively managed and manipulated. Some hardly resemble markets at all. Dash singled out Uber: In that “market,” drivers don’t set prices, consumers don’t actually have much choice and resources are allocated by trade-secret algorithms. Our ignorance of how such things work is easier to overlook when a platform is establishing itself and sharing the benefits of its growth. This dynamic starts to bother us only after a platform wins, when there are fewer alternatives or none at all.

When platforms become entrenched and harder for users to leave, the secrets they keep are reflected back at users as resentment. On Facebook, where every user’s experience is a mystery to all others — and where real-world concepts like privacy, obscurity and serendipity have been re-created on the terms of an advertising platform — users understandably imagine that anything could be happening around them: that their peers are being indoctrinated, tricked, sheltered or misled on a host of issues. These gaps in knowledge are magnified once a platform becomes involved in real-world events. Twitter is broadly understood to be a catalytic political force, but this understanding is usually half hunch — its relationship to the Arab Spring is still in dispute; its usefulness to Donald Trump will never be fully understood; and Twitter seems to be in no hurry to help figure it out. Similarly: Is YouTube reflecting a rise in reactionary politics, enabling it or creating it? These questions are already impossible to answer in full; that everything is unfolding inside a closely guarded attention marketplace makes them difficult even to approach.

So, did Facebook cause the gilets jaunes uprising in France? Maybe, interesting theory; meanwhile, though, I don’t even know why it’s recommending I friend someone I’ve never heard of. Did Facebook swing the 2016 election? Could have, as far as we know; anyway, I can’t even guess why Instagram started showing me a bunch of photos of a certain breed of dog or why it’s suddenly serving me ads for meal kits. I know how these things make me feel, but Facebook knows how they made me behave — knowledge it won’t soon share.

It appears to be the tendency of the press, and of our imaginations in general, to extend these theories in a particular direction. After years of Facebook’s telling us how good it is at connecting people and influencing their decisions, it is tempting to say something like: Yes, O.K., then wouldn’t it have been easy to use these same tools to persuade people to vote for Donald Trump? Facebook has equivocated on this question in a telling way. It’s helpful to imagine what it would have to say to successfully combat claims that Russia used its platform to swing the election: Sure, this many users saw propaganda, but only for a moment; and besides, this content didn’t seem to affect their behavior in any way at all; sure, this amount of money was spent on ads, but those ads don’t appear to have done anything; yes, these Instagram accounts had that many followers, but more than half of them were bots themselves; O.K., 76 million people were exposed to this content in some way or another, but they mostly glossed over it, like spam.

They’ve inched tellingly in this direction, but going all the way would involve unflattering disclosure — the sort that would need to be legally compelled. Mainly, it would be tantamount to admitting that the systems we’re not allowed to know about — and the metrics we aren’t allowed to see — might not be quite as valuable, or as worthy of trade secrecy, as Facebook needs us to think they are. While it’s true that perceptions of the tech industry have shifted, they aren’t necessarily closer to reality. These companies mythologized their own omniscience when it was a boon to their business. Versions of these myths persist, but they’re no longer under their creators’ control — and they’re starting to bite back.