
Ex-Meta employee says his warnings of Instagram’s harm to teens were ignored


On the same day whistleblower Frances Haugen was testifying before Congress about the harms of Facebook and Instagram to children in the fall of 2021, a former engineering director at the social media giant who had rejoined the company as a consultant sent an alarming email to Mark Zuckerberg about the same topic.

Arturo Béjar, known for his expertise on curbing online harassment, recounted to the Meta CEO his own daughter’s troubling experiences with Instagram. But he said his concerns and warnings went unheeded. And on Tuesday, it was Béjar’s turn to testify to Congress.

“I appear before you today as a dad with first-hand experience of a child who received unwanted sexual advances on Instagram,” he told a panel of US senators.

Béjar worked as an engineering director at Facebook from 2009 to 2015, attracting wide attention for his work to combat cyberbullying. He thought things were getting better. But between leaving the company and returning in 2019 as a contractor, Béjar’s own daughter had started using Instagram.

“She and her friends began having awful experiences, including repeated unwanted sexual advances, harassment,” he testified on Tuesday. “She reported these incidents to the company and it did nothing.”

In the 2021 note, as first reported by the Wall Street Journal, Béjar outlined a “critical gap” between how the company approached harm and how the people who use its products – most notably young people – experience it.

“Two weeks ago my daughter, 16, and an experimenting creator on Instagram, made a post about cars, and someone commented ‘Get back to the kitchen.’ It was deeply upsetting to her,” he wrote. “At the same time the comment is far from being policy violating, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny. I don’t think policy/reporting or having more content review are the solutions.”

Béjar testified before a Senate subcommittee on Tuesday about social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harms Instagram was causing but chose not to make meaningful changes to address them.

He believes that Meta needs to change how it polices its platforms, with a focus on addressing harassment, unwanted sexual advances and other bad experiences, even if these problems don’t clearly violate existing policies. For instance, sending vulgar sexual messages to children doesn’t necessarily break Instagram’s rules, but Béjar said teens should have a way to tell the platform they don’t want to receive these types of messages.

“I can safely say that Meta’s executives knew the harm that teenagers were experiencing, that there were things that they could do that are very doable and that they chose not to do them,” Béjar told the Associated Press (AP). This, he said, makes it clear that “we can’t trust them with our children”.

Opening the hearing on Tuesday, Connecticut senator Richard Blumenthal, a Democrat who chairs the Senate judiciary’s privacy and technology subcommittee, introduced Béjar as an engineer “widely respected and admired in the industry” who was hired specifically to help prevent harms against children, but whose recommendations were ignored.

“What you have brought to this committee today is something every parent needs to hear,” added Missouri senator Josh Hawley, the panel’s ranking Republican.

Béjar pointed to user surveys carefully crafted by the company that show, for instance, that 13% of Instagram users aged 13 to 15 reported having received unwanted sexual advances on the platform in the previous seven days.

Béjar said he doesn’t believe the reforms he’s suggesting would significantly affect revenue or profits for Meta and its peers. They are not intended to punish the companies, he said, but to help teenagers.


“You heard the company talk about it: ‘Oh this is really complicated,’” Béjar told AP. “No, it isn’t. Just give the teen a chance to say: ‘This content is not for me’ and then use that information to train all of the other systems and get feedback that makes it better.”

The testimony comes amid a bipartisan push in Congress to adopt regulations aimed at protecting children online.

Meta, in a statement, said: “Every day countless people inside and outside of Meta are working on how to help keep young people safe online. The issues raised here regarding user perception surveys highlight one part of this effort, and surveys like these have led us to create features like anonymous notifications of potentially hurtful content and comment warnings. Working with parents and experts, we have also introduced over 30 tools to support teens and their families in having safe, positive experiences online. All of this work continues.”

Regarding unwanted material users see that does not violate Instagram’s rules, Meta points to its 2021 “content distribution guidelines” that say “problematic or low-quality” content automatically receives reduced distribution on users’ feeds. This includes clickbait, misinformation that’s been factchecked and “borderline” posts, such as a “photo of a person posing in a sexually suggestive manner, speech that includes profanity, borderline hate speech, or gory images”.

In 2022, Meta also introduced “kindness reminders” that tell users to be respectful in their direct messages – but they apply only to users who are sending message requests to a creator, not to a regular user.

Tuesday’s testimony comes just two weeks after dozens of US states sued Meta for harming young people and contributing to the youth mental health crisis. The lawsuits, filed in state and federal courts, claim that Meta knowingly and deliberately designs features on Instagram and Facebook that addict children to its platforms.

Béjar said it is “absolutely essential” that Congress passes bipartisan legislation “to help ensure that there is transparency about these harms and that teens can get help” with the support of the right experts.

“The most effective way to regulate social media companies is to require them to develop metrics that will allow both the company and outsiders to evaluate and track instances of harm, as experienced by users. This plays to the strengths of what these companies can do, because data for them is everything,” he wrote in his prepared testimony.
