All posts tagged: Tech companies

It’s Time to Give Up on Ending Social Media’s Misinformation Problem

If you don’t trust social media, you should know you’re not alone. Most people surveyed around the world feel the same—in fact, they’ve been saying so for a decade. There is clearly a problem with misinformation and hazardous speech on platforms such as Facebook and X. And before the end of its term this year, the Supreme Court may redefine how that problem is treated. Over the past few weeks, the Court has heard arguments in three cases that deal with controlling political speech and misinformation online. In the first two, heard last month, lawmakers in Texas and Florida claim that platforms such as Facebook are selectively removing political content that their moderators deem harmful or otherwise against their terms of service; tech companies have argued that they have the right to curate what their users see. Meanwhile, some policy makers believe that content moderation hasn’t gone far enough, and that misinformation still flows too easily through social networks; whether (and how) government officials can directly communicate with tech platforms about removing such content is …

AI for the People, Courtesy of … Elon Musk?

Yesterday afternoon, Elon Musk fired the latest shot in his feud with OpenAI: His new AI venture, xAI, now allows anyone to download and use the computer code for its flagship software. No fees, no restrictions, just Grok, a large language model that Musk has positioned against OpenAI’s GPT-4, the model powering the most advanced version of ChatGPT. Sharing Grok’s code is a thinly veiled provocation. Musk was one of OpenAI’s original backers. He left in 2018 and recently sued for breach of contract, arguing that the start-up and its CEO, Sam Altman, have betrayed the organization’s founding principles in pursuit of profit—transforming a utopian vision of technology that “benefits all of humanity” into yet another opaque corporation. Musk has spent the past few weeks calling the secretive firm “ClosedAI.” It’s a mediocre zinger at best, but he does have a point. OpenAI does not share much about its inner workings, it added a “capped-profit” subsidiary in 2019 that expanded the company’s remit beyond the public interest, and it’s valued at $80 billion or more. …

We’re Already Living in the Post-Truth Era

This is Atlantic Intelligence, a limited-run series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here. For years, experts have worried that artificial intelligence will produce a new disinformation crisis on the internet. Image-, audio-, and video-generating tools allow people to rapidly create high-quality fakes to spread on social media, potentially tricking people into believing fiction is fact. But as my colleague Charlie Warzel writes, the mere existence of this technology has a corrosive effect on reality: It doesn’t take a shocking, specific incident for AI to plant doubt in countless hearts and minds. Charlie’s article offers a perspective on the dustup over an edited photograph of Kate Middleton and her children, released by Kensington Palace on Sunday. The image was immediately flagged by observers—and, shortly thereafter, by wire services such as the Associated Press—as suspicious, becoming the latest bit of “evidence” in a conspiratorial online discourse about Middleton’s prolonged absence from the public eye. There’s no reason to suspect that …

Are Social-Media Companies Ready for Another January 6?

In January, Donald Trump laid out in stark terms what consequences await America if charges against him for conspiring to overturn the 2020 election wind up interfering with his presidential victory in 2024. “It’ll be bedlam in the country,” he told reporters after an appeals-court hearing. Just before a reporter began asking if he would rule out violence from his supporters, Trump walked away. This would be a shocking display from a presidential candidate—except the presidential candidate was Donald Trump. In the three years since the January 6 insurrection, when Trump supporters went to the U.S. Capitol armed with zip ties, tasers, and guns, echoing his false claims that the 2020 election had been stolen, Trump has repeatedly hinted at the possibility of further political violence. He has also come to embrace the rioters. In tandem, there has been a rise in threats against public officials. In August, Reuters reported that political violence in the United States is seeing its biggest and most sustained rise since the 1970s. And a January report from the nonpartisan …

Please Upgrade to 1TB If You Want to See Your Baby Again

Here I was, going about my day and minding my own business, when a notification popped up on my phone that made my blood run cold: Your iCloud storage is full. I am, as I’ve written before, a digital hoarder whose trinkets, tchotchkes, and stacks of yellowing newspapers (read: old pixelated memes) are distributed across an unknown number of cloud servers around the globe. On Apple’s, I’ve managed to blow through 200 gigabytes of storage, an amount of data that, not even a decade ago, felt almost infinite: my own little Library of Congress, or that warehouse from the end of Raiders of the Lost Ark, filled with screenshots of bad tweets. The overwhelming majority of this space is dedicated to 31,013 photos and 1,742 videos I’ve personally taken. The rest is likely brain-cell-destroying junk that others have sent me in texts and group chats. Running out of iCloud storage is obviously not an unusual circumstance. But I have also recently been forced to upgrade my Google storage from 100 to 200 GB. I started …

This Is Where the AI Race Goes Next

Artificial intelligence can appear to be many different things—a whole host of programs with seemingly little common ground. Sometimes AI is a conversation partner, an illustrator, a math tutor, a facial-recognition tool. But in every incarnation, it is always, always a machine, demanding almost unfathomable amounts of data and energy to function. AI systems such as ChatGPT operate out of buildings stuffed with silicon computer chips. To build bigger machines—as Microsoft, Google, Meta, Amazon, and other tech companies would like to do—you need more resources. And our planet is running out of them. The computational power needed to train top AI programs has doubled every six months over the past decade and may soon become untenable. According to a recent study, AI programs could consume roughly as much electricity as Sweden by 2027. GPT-4, the most powerful model currently offered to consumers by OpenAI, was by one estimate 100 times more demanding to train than GPT-3, which was released just four years ago. Google recently introduced generative AI into its search feature, and may have …

Generative AI Might Finally Bend Copyright Past the Breaking Point

It took Ralph Ellison seven years to write Invisible Man. It took J. D. Salinger about 10 years to write The Catcher in the Rye. J. K. Rowling spent at least five years on the first Harry Potter book. Writing with the hope of publishing is always a leap of faith. Will you finish the project? Will it find an audience? Whether authors realize it or not, the gamble is justified to a great extent by copyright. Who would spend all that time and emotional energy writing a book if anyone could rip the thing off without consequence? This is the sentiment behind at least nine recent copyright-infringement lawsuits against companies that are using tens of thousands of copyrighted books—at least—to train generative-AI systems. One of the suits alleges “systematic theft on a mass scale,” and AI companies are potentially liable for hundreds of millions of dollars, if not more. In response, companies such as OpenAI and Meta have argued that their language models “learn” from books and produce “transformative” original work, just like humans. Therefore, …

The Deeper Problem With Google’s Racially Diverse Nazis

Generative AI is not built to honestly mirror reality, no matter what its creators say. Is there a right way for Google’s generative AI to create fake images of Nazis? Apparently so, according to the company. Gemini, Google’s answer to ChatGPT, was shown last week to generate an absurd range of racially and gender-diverse German soldiers styled in Wehrmacht garb. It was, understandably, ridiculed for not generating any images of Nazis who were actually white. Prodded further, it seemed to actively resist generating images of white people altogether. The company ultimately apologized for “inaccuracies in some historical image generation depictions” and paused Gemini’s ability to generate images featuring people. The situation was played for laughs on the cover of the New York Post and elsewhere, and Google, which did not respond to a request for comment, said it was endeavoring to fix the problem. Google Senior Vice President Prabhakar Raghavan explained in a blog post that the company …

The Case That Has Some Liberals Defending Big Tech

As a progressive legal scholar and activist, I never would have expected to end up on the same side as Greg Abbott, the conservative governor of Texas, in a Supreme Court dispute. But a pair of cases being argued next week has scrambled traditional ideological alliances. The arguments concern laws in Texas and Florida, passed in 2021, that, if allowed to go into effect, would largely prevent the biggest social-media platforms, including Facebook, Instagram, YouTube, X (formerly Twitter), and TikTok, from moderating their content. The tech companies have challenged those laws—which stem from Republican complaints about “shadowbanning” and “censorship”—under the First Amendment, arguing that they have a constitutional right to allow, or not allow, whatever content they want. Because the laws would limit the platforms’ ability to police hate speech, conspiracy theories, and vaccine misinformation, many liberal organizations and Democratic officials have lined up to defend giant corporations that they otherwise tend to vilify. On the flip side, many conservative groups have taken a break from dismantling the administrative state to support the government’s power …