
Don’t be fooled by the AI apocalypse

A guide to understanding which fears are real and which aren’t

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

Executive action, summits, big-time legislation—governments around the world are beginning to take seriously the threats AI could pose to society. As they do, two visions of the technology are jostling for the attention of world leaders, business magnates, media, and the public. One sounds like science fiction, in which rogue robots extinguish humanity or terrorists use AI to accomplish the same. You aren’t alone if you fear the coming of Skynet: The executives at the helm of the very companies developing this supposedly terrifying technology—at OpenAI, Google, Microsoft, and elsewhere—are the ones sounding the alarm that their products could end the world, and efforts to regulate AI in the U.S. and the U.K. are already parroting those prophecies.

But many advocates and academics say that the doomsday narrative distracts from all of the more quotidian ways AI upends lives while allowing corporations to cast themselves as responsible stewards of dangerous technologies. The competing vision of AI’s harms is concrete, drawing from years of research on how workers are exploited and data are stolen to train AI, and how the resulting algorithms exacerbate biased policing, discriminatory hiring practices, emergency-room errors, misinformation, and more, as Amba Kak and Sarah Myers West, the executive director and the managing director of the AI Now Institute, respectively, wrote in an article last week.

Debates over whom AI is harming in the present, how, and what to do about it will directly shape the technology’s future. The four stories below offer a guide to which fears are real and which aren’t, why these two divergent AI narratives have come about, and the path that each might lead us down.


What to Read

  • The AI debate is happening in a cocoon: While politicians discuss how AI might end the world, algorithms continue to wreak havoc every day, Amba Kak and Sarah Myers West write.
  • AI doomerism is a decoy: One of Big Tech’s earliest warnings of an AI apocalypse was fantastic PR, aggrandizing its products and diverting attention from AI’s existing harms, I wrote in June.
  • AI’s present matters more than its imagined future: After attending a closed-door Senate forum on the future of AI, Inioluwa Deborah Raji wrote that the public must stop daydreaming about the technology and listen to the people AI is already harming.
  • America already has an AI underclass: Even the most advanced chatbots rely on a massive, poorly compensated, and neglected workforce of people both at home and abroad, I wrote last July.

P.S.

Misinformation is not the only way that AI can distort reality. Many AI-generated images of people are weirdly, abnormally hot, and in a recent article, Caroline Mimbs Nyce tried to figure out what’s going on.

— Matteo


