All posts tagged: AI doomerism

What Happens If AI Learns to Teach Itself?

ChatGPT exploded into the world in the fall of 2022, sparking a race toward ever more advanced artificial intelligence: GPT-4, Anthropic’s Claude, Google Gemini, and so many others. But with every passing month, tech corporations appear increasingly stuck, competing over millimeters of progress. The most advanced and attention-grabbing AI models, having consumed most of the text and images available on the internet, are running out of training data, their most precious resource. This, along with the costly and slow process of using human evaluators to develop these systems, has stymied the technology’s growth, leading to iterative updates rather than massive paradigm shifts. As researchers are left trying to wring water from a stone, they are exploring a new avenue to advance their products: They’re using machines to train machines. Over the past few months, Google DeepMind, Microsoft, Amazon, Meta, Apple, OpenAI, and various academic labs have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements. Numerous tech executives have heralded …

Don’t be fooled by the AI apocalypse

A guide to understanding which fears are real and which aren’t

November 16, 2023

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here. Executive action, summits, big-time legislation—governments around the world are beginning to take seriously the threats AI could pose to society. As they do, two visions of the technology are jostling for the attention of world leaders, business magnates, media, and the public. One sounds like science fiction, in which rogue robots extinguish humanity or terrorists use AI to accomplish the same. You aren’t alone if you fear the coming of Skynet: The executives at the helm of the very companies developing this supposedly terrifying technology—at OpenAI, Google, Microsoft, and elsewhere—are the ones sounding the alarm that their products could end the world, and efforts to regulate AI in the U.S. and the U.K. are already parroting those prophecies. But many advocates and academics say …

Focus on the Problems Artificial Intelligence Is Causing Today

Much of the time, discussions about artificial intelligence are far removed from the realities of how it’s used in today’s world. Earlier this year, executives at Anthropic, Google DeepMind, OpenAI, and other AI companies declared in a joint letter that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” In the lead-up to the AI summit that he recently convened, British Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” Existential risks—or x-risks, as they’re sometimes known in AI circles—evoke blockbuster science-fiction movies and play to many people’s deepest fears. But AI already poses economic and physical threats—ones that disproportionately harm society’s most vulnerable people. Some individuals have been incorrectly denied health-care coverage, or kept in custody based on algorithms that purport to predict criminality. Human life is explicitly at stake in certain applications of artificial intelligence, such as AI-enabled target-selection systems like those the Israeli military has used in Gaza. In …

AI’s Present Matters More Than Its Imagined Future

Last month, I found myself in a peculiar seat. A few places to my left was Elon Musk. Down the table to my right sat Bill Gates. Across the room sat Satya Nadella, Microsoft’s CEO, and not too far to his left was Eric Schmidt, the former CEO of Google. At the other end of the table sat Sam Altman, the head of OpenAI, the company responsible for ChatGPT. We had all arrived that morning for the inaugural meeting of Senate Majority Leader Chuck Schumer’s AI Insight Forum—the first of a set of events with an ambitious objective: to accelerate a bipartisan path toward meaningful artificial-intelligence legislation. The crowd included senators, tech executives, civil-society representatives, and me—a UC Berkeley computer-science researcher tasked with bringing years of academic findings on AI accountability to the table. I’m still unsure of what was achieved in that room. So much of the discussion was focused on concerns and promises on the outer periphery—the most extreme dangers and benefits of AI—rather than on adopting a clear-eyed understanding of the here and now. …