Social Media’s “Frictionless Experience” for Terrorists

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The incentives of social media have long been perverse. But in recent weeks, platforms have become virtually unusable for people seeking accurate information.

Dangerous Incentives

“For following the war in real-time,” Elon Musk declared to his 150 million followers on X (formerly Twitter) the day after Israel declared war on Hamas, two accounts were worth checking out. He tagged them in his post, which racked up some 11 million views. Three hours later, he deleted the post; both accounts were known spreaders of disinformation, including the claim this spring that there was an explosion near the Pentagon. Musk, in his capacity as the owner of X, has personally sped up the deterioration of social media as a place to get credible information. Misinformation and violent rhetoric run rampant on X, but other platforms have also quietly rolled back their already lacking attempts at content moderation and leaned into virality, in many cases at the cost of reliability.

Social media has long encouraged the sharing of outrageous content. Posts that stoke strong reactions are rewarded with reach and amplification. But, my colleague Charlie Warzel told me, the Israel-Hamas war is also “an awful conflict that has deep roots … I am not sure that anything that’s happened in the last two weeks requires an algorithm to boost outrage.” He reminded me that social-media platforms have never been the best places to look if one’s goal is genuine understanding: “Over the past 15 years, certain people (myself included) have grown addicted to getting news live from the feed, but it’s a remarkably inefficient process if your end goal is to make sure you have a balanced and comprehensive understanding of a specific event.”

Where social media shines, Charlie said, is in showing users firsthand perspectives and real-time updates. But the design and structure of the platforms are starting to weaken even those capabilities. “In recent years, all the major social-media platforms have evolved further into algorithmically driven TikTok-style recommendation engines,” John Herrman wrote last week in New York Magazine. Now a toxic brew of bad actors and users merely trying to juice engagement has seeded social media with dubious, and at times dangerous, material that’s designed to go viral.

Musk has also introduced financial incentives for posting content that provokes massive engagement: Users who pay for a Twitter Blue subscription (in the U.S., it costs $8 a month) can in turn get paid for posting content that generates a lot of views from other subscribers, be it outrageous lies, old clips repackaged as wartime footage, or something else that might grab eyeballs. The accounts of those Twitter Blue subscribers now display a blue check mark—once an authenticator of a person’s real identity, now a symbol of fealty to Musk.

If some of the changes making social-media platforms less hospitable to accurate information are obvious to users, others are happening more quietly inside companies. Musk slashed X’s trust-and-safety team, which handled content moderation, soon after he took over last year. Caitlin Chin-Rothmann, a fellow at the Center for Strategic and International Studies, told me in an email that Meta and YouTube have also made cuts to their trust-and-safety teams as part of broader layoffs in the past year. The reduction in moderators on social-media sites, she said, leaves the platforms with “fewer employees who have the language, cultural, and geopolitical understanding to make the tough calls in a crisis.” Even before the layoffs, she added, technology platforms struggled to moderate content that was not in English. After making widely publicized investments in content moderation under intense public pressure following the 2016 presidential election, platforms have quietly dialed back their capacities. At the same time, these platforms have deprioritized surfacing legitimate news from reputable sources in their algorithms (see also: Musk’s decision to strip out the headlines that were previously displayed on X when a user shared a link to another website).

Content moderation is not a panacea. And violent videos and propaganda have been spreading beyond major platforms, on Hamas-linked Telegram channels, which are private groups that are effectively unmoderated. On mainstream sites, some of the less-than-credible posts have come directly from politicians and government officials. But experts told me that efforts to ramp up moderation—especially investments in moderators with language and cultural competencies—would improve the situation.

The extent of inaccurate information on social media in recent weeks has attracted attention from regulators, particularly in Europe, where there are different standards—both cultural and legal—regarding free speech compared with the United States. The European Union opened an inquiry into X earlier this month regarding “indications received by the Commission services of the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech.” In an earlier letter in response to questions from the EU, Linda Yaccarino, the CEO of X, wrote that X had labeled or removed “tens of thousands of pieces of content”; removed hundreds of Hamas-affiliated accounts; and was relying, in part, on “community notes,” written by eligible users who sign up as contributors, to add context to content on the site. Today, the European Commission sent letters to Meta and TikTok requesting information about how they are handling disinformation and illegal content. (X responded to my request for comment with “busy now, check back later.” A spokesperson for YouTube told me that the company had removed tens of thousands of harmful videos, adding, “Our teams are working around the clock to monitor for harmful footage and remain vigilant.” A spokesperson for TikTok directed me to a statement about how it is ramping up safety and integrity efforts, adding that the company had heard from the European Commission today and would publish its first transparency report under the European Digital Services Act next week. And a spokesperson for Meta told me, “After the terrorist attacks by Hamas on Israel, we quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation.” The spokesperson added that the company will respond to the European Commission.)

Social-media platforms were already imperfect, and during this conflict, extremist groups are making sophisticated use of their vulnerabilities. The New York Times reported that Hamas, taking advantage of X’s weak content moderation, has seeded the site with violent content such as audio of a civilian being kidnapped. Social-media platforms are providing “a near-frictionless experience for these terrorist groups,” Imran Ahmed, the CEO of the Center for Countering Digital Hate, which is currently facing a lawsuit from Twitter over its research investigating hate speech on the platform, told me. By paying Musk $8 a month, he added, “you’re able to get algorithmic privilege and amplify your content faster than the truth can put on its pajamas and try to combat it.”


Today’s News

  1. After saying he would back interim House Speaker Patrick McHenry and postpone a third vote on his own candidacy, Representative Jim Jordan now says he will push for another round of voting.
  2. Sidney Powell, a former attorney for Donald Trump, has pleaded guilty in the Georgia election case.
  3. The Russian American journalist Alsu Kurmasheva has been detained in Russia, according to her employer, for allegedly failing to register as a foreign agent.

Evening Read

Illustration by Ben Hickey

The Annoyance Economy

By Annie Lowrey

Has the American labor market ever been better? Not in my lifetime, and probably not in yours, either. The jobless rate is just 3.8 percent. Employers added a blockbuster 336,000 jobs in September. Wage growth exceeded inflation too. But people are weary and angry. A majority of adults believe we’re tipping into a recession, if we are not in one already. Consumer confidence sagged in September, and the public’s expectations about where things are heading drooped as well.

The gap between how the economy is and how people feel things are going is enormous, and arguably has never been bigger. A few well-analyzed factors seem to be at play, the dire-toned media environment and political polarization among them. To that list, I want to add one more: something I think of as the “Economic Annoyance Index.” Sometimes, people’s personal financial situations are just stressful—burdensome to manage and frustrating to think about—beyond what is happening in dollars-and-cents terms. And although economic growth is strong and unemployment is low, the Economic Annoyance Index is riding high.

Read the full article.

Culture Break

Illustration by The Atlantic. Sources: Alfred Gescheidt / Getty; Getty

Read. “Explaining Pain,” a new poem by Donald Platt:

“The way I do it is to say my body / is not my / body anymore. It is someone else’s. The pain, therefore, / is no longer / mine.”

Listen. A ground invasion of Gaza seems all but certain, Hanna Rosin notes in the latest episode of Radio Atlantic. But then what?

Play our daily crossword.


Working as a content moderator can be brutal. In 2019, Casey Newton wrote a searing account in The Verge of the lives of content moderators, who spend their days sifting through violent, hateful posts and, in many cases, work as contractors receiving relatively low pay. We Had to Remove This Post, a new novel by the Dutch writer Hanna Bervoets, follows one such “quality assurance worker,” who reviews posts on behalf of a social-media corporation. Through this character, we see one expression of the human stakes of witnessing so much horror. Both Newton and Bervoets explore the idea that, although platforms rely on content moderators’ labor, the work of keeping brutality out of users’ view can be devastating for those who do it.

— Lora

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
