All posts tagged: moderation

Meta allowed pornographic ads that break its content moderation rules

In 2024, Meta allowed more than 3300 pornographic ads – many featuring AI-generated content – on its social media platforms, including Facebook and Instagram. The findings come from a report by AI Forensics, a European non-profit organisation focused on investigating tech platform algorithms. The researchers also discovered an inconsistency in Meta’s content moderation policies by re-uploading many of the same explicit images as standard posts on Instagram and Facebook. Unlike the ads, those posts were swiftly removed for violating Meta’s Community Standards. “I’m both disappointed and not surprised by the report, given that my research has already exposed double standards in content moderation, particularly in the realms of sexual content,” says Carolina Are at Northumbria University’s Centre for Digital Citizens in the UK. The AI Forensics report focused on a small sample of ads aimed at the European Union. It found that the explicit ads allowed by Meta primarily targeted middle-aged and older men with promotions for “dubious sexual enhancement products” and “hook-up …

What Meta’s move to community moderation could mean for misinformation

Meta, the parent company of Facebook, Instagram, WhatsApp and other services, has announced it will discontinue its third-party factchecking programmes, starting in the US. Journalists and anti-hate speech activists have criticised the decision as an attempt to curry favour with the incoming US president, Donald Trump, but there could be an even more cynical reason: Meta’s strategy could be a calculated move for greater user engagement and income. This decision marks a significant shift in how the social media giant addresses misinformation on its platforms. Meta’s official rationale for ending independent factchecking in favour of crowdsourced contributions centres on promoting free expression. Chief executive Mark Zuckerberg said that the company seeks to reduce censorship and will concentrate its enforcement efforts on illegal or highly harmful content. This move aligns with broader discussions among governments, social media companies, civil society groups and the public on balancing freedom of expression and content moderation. These debates have become urgent, as there is mounting evidence of biases in content moderation. For example, a 2023 University of …

Bluesky joins Threads to court users frustrated by Meta’s moderation issues

Social networking startup Bluesky is seizing the moment. Amid ongoing moderation issues affecting X rival Instagram Threads, the decentralized X competitor has created an account on Meta’s newest platform. In doing so, the startup aims to capitalize on the discussions now taking place on Threads, where a number of users are threatening to leave for Bluesky over this latest set of problems. On Wednesday, Instagram head Adam Mosseri said the company was looking into the network’s moderation issues, but no resolution has yet come about. Nor has Instagram explained what caused people’s Threads posts to be downranked and blocked, or their accounts to be removed or falsely flagged as belonging to underage users. However, many suspect the company is relying on AI-powered moderation systems, which are likely misfiring. As conversations about leaving Threads for Bluesky ramped up, Bluesky set up an account and reached out to Threads users, cheekily writing “Heard people were talking about us … so we created an account to share some more information!” The company then clarified several key ways …

Trust and Safety Exec Talks About AI and Content Moderation

Alex Popken was a longtime trust and safety executive at Twitter focusing on content moderation before leaving in 2023. She was the first employee there dedicated to moderating Twitter’s advertising business when she started in 2013. Now, she’s vice president of trust and safety at WebPurify, a content moderation service provider that works with businesses to help ensure the content people post on their sites follows the rules. Social media platforms are not the only ones that need policing. Any consumer-facing company — from retailers to dating apps to news sites — needs someone to weed out unwanted content, whether that’s hate speech, harassment or anything illegal. Companies are increasingly using artificial intelligence in their efforts, but Popken notes that humans remain essential to the process. Popken spoke recently with The Associated Press. The conversation has been edited for clarity and length. QUESTION: How did you see content moderation change in that decade you were at Twitter? ANSWER: When I joined Twitter, content moderation was in its nascent stages. I think even trust and safety …

Reddit’s I.P.O. Is a Content Moderation Success Story

Redditors howled at these changes — and Mr. Wong’s successor as C.E.O., Ellen Pao, was chased out by a horde of angry users — but the company’s pivot to respectability was an undeniable success. Reddit’s image has gradually improved under a co-founder, Steve Huffman, who came back in 2015 to run the site as chief executive, and Reddit was able to build the ad-based business model that sustains it today. In particular, I want to single out three steps Reddit took to clean up its platform, all of which were instrumental in paving the way for the company’s public debut. First, the company took aim at bad spaces, rather than bad individuals or bad posts. Reddit, unlike other social media sites, is organized by topic; users can join “subreddits” devoted to gardening, anime or dad jokes. That meant that once the company made new rules banning hate speech, harassment and extremism, it faced an important question: Should we enforce the new rules user by user or post by post, as new violations are reported, or …

Bluesky launches Ozone, a tool that lets users create and run their own independent moderation services

Decentralized Twitter/X rival Bluesky announced today that it’s open sourcing Ozone, a tool that lets individuals and teams collaboratively review and label content on the network. The company plans to open up the ability for individuals and teams to run their own independent moderation services later this week, which means users will be able to subscribe to additional moderation services on top of Bluesky’s default moderation. In a blog post, Bluesky said the change will give users “unprecedented control” over their social media experience. The company’s vision for moderation is a stackable ecosystem of services, which is why it will start allowing users to install filters from independent moderation services on top of what Bluesky already requires. As a result, users will be able to create a customized experience tailored to their preferences. For example, someone could create a moderation service that blocks images of spiders on the network. If you’re someone who gets a jump scare when you see a spider, you could install the moderation service and have all labeled spider pictures disappear from …
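To make the “stackable” model concrete, here is a minimal TypeScript sketch of label-based moderation under stated assumptions: it is not Bluesky’s actual Ozone or atproto API, and the Post and LabelerService types, the label names, and the applyLabelers helper are all hypothetical. It illustrates only the idea the article describes: independent services emit labels on posts, and a client hides any post carrying a label the user has chosen to filter.

```typescript
// Hypothetical sketch of stackable, label-based moderation.
// Not the real Ozone/atproto API; all types and names here are invented.

interface Post {
  uri: string;
  text: string;
}

// A moderation service reviews a post and returns zero or more labels,
// e.g. "spider" from a hypothetical arachnophobia labeler.
interface LabelerService {
  id: string;
  label(post: Post): string[];
}

// The platform's default service plus any independent services the
// user has subscribed to, stacked together.
const subscribedLabelers: LabelerService[] = [
  { id: "default-moderation", label: (p) => (/\bscam\b/i.test(p.text) ? ["scam"] : []) },
  { id: "no-spiders", label: (p) => (/\bspider\b/i.test(p.text) ? ["spider"] : []) },
];

// Labels this user has chosen to hide.
const hiddenLabels = new Set(["scam", "spider"]);

// Collect labels from every subscribed service, then filter the feed:
// a post disappears if any of its labels is on the user's hide list.
function applyLabelers(feed: Post[]): Post[] {
  return feed.filter((post) => {
    const labels = subscribedLabelers.flatMap((svc) => svc.label(post));
    return !labels.some((l) => hiddenLabels.has(l));
  });
}

// Example: the spider post is hidden, the gardening post remains.
const feed: Post[] = [
  { uri: "at://example/1", text: "Look at this huge spider!" },
  { uri: "at://example/2", text: "My tomatoes are thriving." },
];
console.log(applyLabelers(feed).map((p) => p.text)); // ["My tomatoes are thriving."]
```

The design point the sketch captures is that labeling and filtering are decoupled: services only attach labels, and each user decides which labels translate into hidden content, which is what lets preferences stack on top of a shared default.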

Gemini’s Culture War, Kara Swisher Burns Us and SCOTUS Takes Up Content Moderation

Google removed the ability to generate images of people from its Gemini chatbot. We talk about why, and about the brewing culture war over artificial intelligence. Then, did Kara Swisher start “Hard Fork”? We clear up some podcast drama and ask about her new book, “Burn Book.” And finally, the legal expert Daphne Keller tells us how the U.S. Supreme Court might rule on the most important First Amendment cases of the internet era, and what Star Trek and soy boys have to do with it. Today’s guests: Kara Swisher, tech journalist and Casey Newton’s former landlord; Daphne Keller, director of the program on platform regulation at Stanford University’s Cyber Policy Center. Credits: “Hard Fork” is hosted by Kevin Roose and Casey Newton and produced by Davis Land and Rachel Cohn. The show is edited by Jen Poyant. Engineering by Alyssa Moxley and original music by Dan Powell, Marion Lozano, Diane Wong and Rowan Niemisto. Fact-checking by Caitlin Love. Special thanks to …

The Supreme Court could decide the future of content moderation — or it could punt

The Supreme Court is considering the fate of two state laws that limit how social media companies can moderate the content on their platforms. In oral arguments on Monday, the justices grappled with a thorny set of questions that could reshape the internet, from social networks like Facebook and TikTok to apps like Yelp and Etsy. In October, the Supreme Court decided to hear the two parallel cases, one from Florida (Moody v. NetChoice, LLC) and one from Texas (NetChoice, LLC v. Paxton). In both instances, a new state law signed by a Republican governor instructed social media companies to stop removing certain kinds of content. Florida’s Senate Bill 7072 prevents social media companies from banning political candidates or putting restrictions on their content. In Texas, House Bill 20 told social media companies that they could no longer remove or demonetize content based on the “viewpoint represented in the user’s expression.” In Florida, a federal appeals court mostly ruled in favor of the tech companies, but in Texas the appeals court sided with the …

Does the First Amendment apply to social media moderation? The U.S. Supreme Court will decide

The U.S. Supreme Court on Monday will consider whether the First Amendment’s freedom of speech clause applies to social media companies’ content moderation. Its decision could render a Texas law unconstitutional. The lawsuit challenges whether Texas and Florida can legally prohibit large social media companies from banning certain political posts or users. Both states passed laws in 2021 to stop what Republican state leaders considered “censorship” of conservative viewpoints. The laws came on the heels of the Jan. 6, 2021 attack on the U.S. Capitol, which led Facebook, Twitter and other social media platforms to suspend then-President Donald Trump’s accounts because his posts were thought to glorify violence. Tech industry groups then brought a lawsuit in which they argued those laws are unconstitutional because they conflict with the First Amendment, which protects against government infringement of speech. Tech trade groups NetChoice and Computer & Communications Industry Association sued Texas and …

Bluesky CEO confronts content moderation in the fediverse

The panel on stage at the Knight Foundation’s Informed event is Elon Musk’s nightmare blunt rotation: Techdirt editor Mike Masnick, Twitter’s former safety lead Yoel Roth, and Bluesky CEO Jay Graber, who have come together to discuss content moderation in the fediverse. It’s been more than a year since Musk showed up at Twitter HQ with a literal sink in tow, but many social media users are still a bit nomadic, floating among various emerging platforms. And if a user chose to leave Twitter in the Musk era, they are likely looking for a platform with actual moderation policies, which means even more pressure for leaders like Graber to strike the fragile balance between tedious over-moderation and a fully hands-off approach. “The whole philosophy has been that this needs to have a good UX and be a good experience,” Graber said of her approach to running Bluesky. “People aren’t just in it for the decentralization and abstract ideas. They’re in it for having fun and having a good time here.” And at the start, …