

The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World




Author: Max Fisher

Publisher: Little, Brown and Company


Publish Date: September 6, 2022

ISBN-10: 031670332X

Pages: 400

File Type: Epub

Language: English


Book Preface

Consequences

WALKING INTO FACEBOOK’S headquarters can feel like entering the Vatican: a center of power shrouded in secrecy and opulence that would shame a Russian oligarch. The company had spent $300 million on Building 21 alone, an airy steel-and-glass playground of gardens, patios, and everything-is-free restaurants that I visited in late 2018. Between meetings, a two-story mural on a back wall caught my eye, reminding me of a famous Chinese artist who’d recently been featured at the Guggenheim Museum. I asked the PR rep who was minding me if it had been deliberately painted in his style. She laughed politely. It was no mimicry; the artist had been flown in to do an original on Facebook’s walls. So had dozens of other artists. All around me fabulously paid programmers hustled down hallways adorned with priceless murals.

In my bag, stuffed between notepads, was my ticket in: 1,400-plus pages of internal documents, from regions across the globe, that revealed Facebook’s unseen hand in setting the bounds of acceptable politics and speech for two billion users worldwide. To the insider who’d leaked them to me, the files were evidence of the company’s sloppiness and shortcuts in trying to stem the growing global turmoil that he believed its products exacerbated, or even caused. To me, they were even more than that. They were a window into how Facebook’s leadership thought about the consequences of social media’s rise.

Like many, I had initially assumed social media’s dangers came mostly from misuse by bad actors—propagandists, foreign agents, fake-news peddlers—and that at worst the various platforms were a passive conduit for society’s preexisting problems. But virtually everywhere I traveled in my reporting, covering far-off despots, wars, and upheavals, strange and extreme events kept getting linked back to social media. A sudden riot, a radical new group, widespread belief in some oddball conspiracy—all had a common link. And though America had yet to explode into violence, the similarities to what was happening back home were undeniable. Every week there was another story of a Twitter conspiracy overtaking national politics, a Reddit subculture drifting into neo-Nazism, a YouTube addict turning to mass murder.

And Donald Trump’s unexpected victory in 2016 had been attributed, in part, to social media. Though the role of the platforms remained poorly understood, it was already clear that Trump’s rise had been abetted by strange new grassroots movements and hyperpartisan outlets that thrived online, as well as Russian agents who’d exploited social media’s reality-distorting, identity-indulging tendencies. This global pattern seemed to indicate something fundamental to the technology, but exactly what that was, why it was happening, or what it meant, nobody was able to tell me.

At the other end of the world, a young man I’ll call Jacob, a contractor with one of the vast outsourcing firms to which Silicon Valley sends its dirty work, had formed much the same suspicions as my own. He had raised every alarm he could. His bosses had listened with concern, he said, even sympathy. They’d seen the same things he had. Something in the product they oversaw was going dangerously wrong.

Jacob, slight and bookish, had grown up in love with the internet and had been tinkering with computers for years. The technologies seemed to represent the best of the United States. He’d looked up especially to web moguls like Mark Zuckerberg, Facebook’s CEO and founder, who argued that connecting the world would make it better. When Jacob landed a job with an outsourcing agency that reviewed user content for Facebook and Instagram, one of several that the company employs worldwide, it felt like becoming part of history.

Every day his team clicked through thousands of posts from around the world, flagging any that broke a rule or crossed a line. It was draining but necessary work, he felt. But over some months in 2017 and 2018 they had noticed the posts growing more hateful, more conspiratorial, and more extreme. And the more incendiary the post, they sensed, the more widely the platforms spread it. It seemed to them like a pattern, one playing out at once in the dozens of societies and languages they were tasked with overseeing.

Moreover, they believed that their ability to constrain this rising hate and incitement was hamstrung by the very thing that was supposed to help them: the dozens of secret rulebooks dictating what they were to allow on the platforms and what to remove. To Facebook’s more than two billion users, those rules are largely invisible. They are intended to keep the platforms safe and civil, articulating everything from the line between free expression and hate speech to the boundaries of permissible political movements. But as the rulebooks proved inadequate to stem harms that were often ginned up by the platform itself, and as corporate oversight of this most unglamorous part of the business drifted, the worldwide guides had sprawled to hundreds of confusing and often contradictory pages. Some of the most important, on identifying terrorist recruitment or overseeing contentious elections, were filled with typos, factual errors, and obvious loopholes. The sloppiness and the lacunae suggested a dangerous disregard for a job that Jacob saw as a matter of life and death, and at a time when the platforms were overflowing with extremism that increasingly bled into the real world. Just months earlier, in Myanmar, the United Nations had formally accused Facebook of allowing its technology to help provoke one of the worst genocides since World War II.

Jacob recorded his team’s findings and concerns to send up the chain. Months passed. The rise in online extremism only worsened. He clocked in and out, waiting at his terminal for a response from headquarters, far away in America, that never came. He had an idea. It would mean cracking the security system at work, secreting confidential files abroad, and convincing the media to broadcast his warnings for him—all in the hope of delivering them to the screen of one person: Mark Zuckerberg, founder and CEO of Facebook. Distance and bureaucracy, he was sure, kept him from reaching the people in charge. If only he could get word to them, they would want to fix things.

Jacob first reached me in early 2018. A series of stories I’d worked on, investigating social media’s role in spinning up mass violence in places like the small Asian nation of Sri Lanka, struck him as confirmation that the problems he’d observed on his screen were real—and had growing, sometimes deadly consequences. But he knew that his word alone would not be enough. He would need to siphon Facebook’s internal rulebooks and training documents from the computers at his office. It wouldn’t be easy—the machines were heavily locked down and the office closely monitored—but it was possible: a year earlier someone had managed to pass some of the files to The Guardian, and more were later leaked to Vice News. Jacob built a program to spirit the files out, encrypting and scrubbing them to remove digital fingerprints that might trace back to him or even the country where his office was located. He transferred some to me through a secure server. A few weeks later I flew out to gather the rest and to meet him.

Facebook, on learning what I’d acquired, invited me to their sleek headquarters, offering to make a dozen or so corporate policymakers available to talk. All were tough-minded professionals. Some had accrued sterling reputations in Washington, DC, in fields such as counterterrorism or cybersecurity, before joining the Silicon Valley gold rush. Others had impressive backgrounds in human rights or politics. They were hardly the basement hackers and starry-eyed dropouts who had once governed the platforms—although it would later become clear that the dorm-room ideologies and biases from Silicon Valley’s early days were still held with near-religious conviction on their campuses, and remained baked into the very technology that pushed those same ideals into the wider world.

A strange pattern emerged in my conversations at Facebook’s headquarters. An executive would walk me through the challenge that consumed their days: blocking terrorists from recruiting on the platform, outmaneuvering hostile government hackers, determining which combinations of words constituted an unacceptable incitement to violence. Nearly any question I posed, however sensitive, yielded a direct, nuanced answer. When problems remained unsolved, they acknowledged as much. No one ever had to check their notes to tell me, say, Facebook’s policy on Kurdish independence groups or its methods for distributing hate-speech rules in Tagalog.

But at some point in each interview, when I would ask about dangers that arose not from bad actors misusing the platform but from the platform itself, it was as if a mental wall went up. I found myself wondering: with such conscientious, ultra-qualified people in charge, why do the problems for which they articulate such thoughtful answers only ever seem to get worse? When rights groups warn Facebook of impending danger from its platform, why does the company so often fail to act? Why do journalists like me, who have little visibility into the platforms’ operations and an infinitesimal fraction of their staffing or budget, keep turning up Facebook-born atrocities and cults that seem to take the company by surprise?

“There’s nothing new about the types of abuse that you see,” the company’s global-policy chief said when I asked about the platform’s consequences. “What’s different here is the amplification power of something like a social media platform,” she said. “As a society, we’re still quite early in understanding all the consequences of social media,” the company’s head of cybersecurity said, suggesting the primary change wrought by the technology had simply been to reduce “friction” in communication, which allowed messages to travel faster and wider.

It was a strangely incomplete picture of how Facebook works. Many at the company seemed almost unaware that the platform’s algorithms and design deliberately shape users’ experiences and incentives, and therefore the users themselves. These elements are the core of the product, the reason that hundreds of programmers buzzed around us as we talked. It was like walking into a cigarette factory and having executives tell you they couldn’t understand why people kept complaining about the health impacts of the little cardboard boxes that they sold.

At one point, talking with two employees who oversaw crisis response, I dropped out of reporter mode to alert them to something worrying I’d seen. In countries around the world, a gruesome rumor was surfacing, apparently spontaneously, on Facebook: that mysterious outsiders were kidnapping local children to make them sex slaves and to harvest their organs. Communities exposed to this rumor were responding in increasingly dangerous ways. When it spread via Facebook and WhatsApp to a rural part of Indonesia, for instance, nine different villages had separately gathered into mobs and attacked innocent passersby. It was as if this rumor were some mysterious virus that turned normal communities into bloodthirsty swarms, and that seemed to be emerging as if from the platform itself. The two Facebookers listened and nodded. Neither asked any questions. One commented vaguely that she hoped an independent researcher might look into such things one day, and we moved on.

But versions of the rumor continued to emerge on Facebook. An American iteration, which had first appeared on the message board 4chan under the label “QAnon,” had recently hit Facebook like a match to a pool of gasoline. Later, as QAnon became a movement with tens of thousands of followers, an internal FBI report identified it as a domestic terror threat. Throughout, Facebook’s recommendation engines promoted QAnon groups to huge numbers of readers, as if this were merely another club, helping to grow the conspiracy to the size of a minor political party, for seemingly no more elaborate reason than the continued clicks the QAnon content generated.

Within Facebook’s muraled walls, though, belief in the product as a force for good seemed unshakable. The core Silicon Valley ideal that getting people to spend more and more time online will enrich their minds and better the world held especially firm among the engineers who ultimately make and shape the products. “As we have greater reach, as we have more people engaging, that raises the stakes,” a senior engineer on Facebook’s all-important news feed said. “But I also think that there’s greater opportunity for people to be exposed to new ideas.” Any risks created by the platform’s mission to maximize user engagement would be engineered out, she assured me.

I later learned that, a short time before my visit, some Facebook researchers, appointed internally to study their technology’s effects in response to growing suspicion that the site might be worsening America’s political divisions, had warned that the platform was doing exactly what the company’s executives had, in our conversations, shrugged off. “Our algorithms exploit the human brain’s attraction to divisiveness,” the researchers warned in a 2018 presentation later leaked to the Wall Street Journal. In fact, the presentation continued, Facebook’s systems were designed in a way that delivered users “more and more divisive content in an effort to gain user attention & increase time on the platform.” Executives shelved the research and largely rejected its recommendations, which called for tweaking the promotional systems that choose what users see in ways that might have reduced their time online. The question I had brought to Facebook’s corridors—what are the consequences of routing an ever-growing share of all politics, information, and human social relations through online platforms expressly designed to manipulate attention?—was plainly taboo here.

The months after my visit coincided with what was then the greatest public backlash in Silicon Valley’s history. The social media giants faced congressional hearings, foreign regulation, multibillion-dollar fines, and threats of forcible breakup. Public figures routinely referred to the companies as one of the gravest threats of our time. In response, the companies’ leaders pledged to confront the harms flowing from their services. They unveiled election-integrity war rooms and updated content-review policies. But their business model—keeping people glued to their platforms as many hours a day as possible—and the underlying technology deployed to achieve this goal remained largely unchanged. And while the problems they’d promised to solve only worsened, they made more money than ever.

The new decade brought a wave of crises. The Covid-19 pandemic, racial reckoning and backlash in the United States, the accelerating rise of a violent new far right, and the attempted destruction of American democracy itself. Each tested the platforms’ influence on our world—or revealed it, exposing ramifications that had been building for years.

In summer 2020, an independent audit of Facebook, commissioned by the company under pressure from civil rights groups, concluded that the platform was everything its executives had insisted to me it was not. Its policies permitted rampant misinformation that could undermine elections. Its algorithms and recommendation systems were “driving people toward self-reinforcing echo chambers of extremism,” training them to hate. Perhaps most damning, the report concluded that the company did not understand how its own products affected its billions of users.

But there were a handful of people who did understand and, long before many of us were prepared to listen, tried to warn us. Most began as tech-obsessed true believers, some as denizens themselves of Silicon Valley, which was precisely why they were in a better position to notice early that something was going wrong, to investigate it, and to measure the consequences. But the companies that claimed to want exactly such insights stymied their efforts, questioned their reputations, and disputed their findings—until, in many cases, the companies were forced to acknowledge, if only implicitly, that the alarm raisers had been right all along. They conducted their work, at least initially, independently of one another, pursuing very different methods toward the same question: what are the consequences of this technology? This book is about the mission to answer that question, told in part through the people who led it.

The early conventional wisdom, that social media promotes sensationalism and outrage, while accurate, turned out to drastically understate things. An ever-growing pool of evidence, gathered by dozens of academics, reporters, whistleblowers, and concerned citizens, suggests that its impact is far more profound. This technology exerts such a powerful pull on our psychology and our identity, and is so pervasive in our lives, that it changes how we think, behave, and relate to one another. The effect, multiplied across billions of users, has been to change society itself.

Silicon Valley can hardly be blamed for the psychological frailties that lead us to do harm or to act against our own interests. Nor for the deep cultural polarization, in America and elsewhere, that primed users to turn these new spaces into venues of partisan conflict, destroying any shared sense of welfare or reality. Even its biggest companies cannot be blamed for the high-tech funding model that gave rise to them, by handing multimillion-dollar investments to misfit twentysomethings and then demanding instant, exponential returns, with little concern for the warped incentives this creates. Still, these companies accrued some of the largest corporate fortunes in history by exploiting those tendencies and weaknesses, in the process ushering in a wholly new era in the human experience. The consequences—though in hindsight almost certainly foreseeable, if someone had cared to look—were obscured by an ideology that said more time online would create happier and freer souls, and by a strain of Silicon Valley capitalism that empowers a contrarian, brash, almost millenarian engineering subculture to run the companies that run our minds.

By the time those companies were pressured into behaving at least somewhat like the de facto governing institutions they had become, they found themselves at the center of political and cultural crises for which they were a partial culprit. You might charitably call the refereeing of a democracy bent on its own destruction a thankless task—if the companies had not put themselves in positions of such power, refused responsibility until it was forced on them at the point of a regulatory gun, and, nearly every step of the way, compromised the well-being of their users to keep billions of dollars in monthly revenue flowing. With little incentive for the social media giants to confront the human cost to their empires—a cost borne by everyone else, like a town downstream from a factory pumping toxic sludge into its communal well—it would be up to dozens of alarmed outsiders and Silicon Valley defectors to do it for them.

