
AI’s ‘fog of war’ – The Atlantic

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

Earlier this year, The Atlantic published a story by Gary Marcus, a well-known AI expert who has agitated for the technology to be regulated, both in his Substack newsletter and before the Senate. (Marcus, a cognitive scientist and an entrepreneur, has founded AI companies himself and has explored launching another.) Marcus argued that “this is a moment of immense peril,” and that we are teetering toward an “information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots.”

I was interested in following up with Marcus given recent events. In the past six weeks, we’ve seen an executive order from the Biden administration focused on AI oversight; chaos at the influential company OpenAI; and this Wednesday, the release of Gemini, a GPT competitor from Google. What we have not seen, yet, is total catastrophe of the sort Marcus and others have warned about. Perhaps it looms on the horizon—some experts have fretted over the destructive role AI might play in the 2024 election, while others believe we are close to developing advanced AI models that could acquire “unexpected and dangerous capabilities,” as my colleague Karen Hao has described. But perhaps fears of existential risk have become their own kind of AI hype, understandable yet unlikely to materialize. My own opinions seem to shift by the day.

Marcus and I talked earlier this week about all of the above. Read our conversation, edited for length and clarity, below.

Damon Beres, senior editor


“No Idea What’s Going On”

Damon Beres: Your story for The Atlantic was published in March, which feels like an extremely long time ago. How has it aged? How has your thinking changed?

Gary Marcus: The core issues that I was concerned about when I wrote that article are still very serious problems. Large language models have this “hallucination” problem. Even today, I get emails from people describing the hallucinations they observe in the latest models. If you produce something from these systems, you just never know what you’re going to get. That’s one issue that really hasn’t changed.

I was very worried then that bad actors would get a hold of these systems and deliberately create misinformation, because these systems aren’t smart enough to know when they’re being abused. And one of the largest concerns of the article is that 2024 elections might be impacted. That’s still a very reasonable expectation.

Beres: How do you feel about the executive order on AI?

Marcus: They did the best they could within some constraints. The executive branch doesn’t make law. The order doesn’t really have teeth.

There have been some good proposals: calling for a kind of “preflight” check or something like an FDA approval process to make sure AI is safe before it’s deployed at a very large scale, and then auditing it afterwards. These are critical things that are not yet required. Another thing that I would really like to see is independent scientists as part of the loop here, in a kind of peer-review way, to make sure things are done on the up-and-up.

You can think of the metaphor of Pandora’s box. There are Pandora’s boxes, plural. One of those boxes is already open. There are other boxes that people are messing around with and might accidentally open. Part of this is about how to contain the stuff that’s already out there, and part of this is about what’s to come. GPT-4 is a dress rehearsal of future forms of AI that might be much more sophisticated. GPT-4 is actually not that reliable; we’re going to get to other forms of AI that are going to be able to reason and understand the world. We need to have our act together before those things come out, not after. Patience is not a great strategy here.

Beres: At the same time, you wrote on the occasion of Gemini’s release that there’s a possibility the model is plateauing—that despite an obvious, strong desire for there to be a GPT-5, it hasn’t emerged yet. What change do you realistically think is coming?

Marcus: Generative AI is not all of AI. It’s the stuff that’s popular right now. It could be that generative AI has plateaued, or is close to plateauing. Google had arbitrary amounts of money to spend, and Gemini is not arbitrarily better than GPT-4. That’s interesting. Why didn’t they crush it? It’s probably because they can’t. Google could have spent $40 billion to blow OpenAI away, but I think they didn’t know what they could do with $40 billion that would be so much better.

However, that doesn’t mean there won’t be other advances. It means we don’t know how to do it right now. Science can go in what Stephen Jay Gould called “punctuated equilibria,” fits and starts. AI is not close to its logical limits. Fifteen years from now, we’ll look at 2023 technology the way I look at Motorola flip phones.

Beres: How do you create a law to protect people when we don’t even know what the technology looks like from here?

Marcus: One thing that I favor is having both national and global AI agencies that can move faster than legislators can. The Senate was not structured to distinguish between GPT-4 and GPT-5 when it comes out. You don’t want to go through a whole process of having the House and Senate agree on something to cope with that. We need a national agency with some power to adjust things over time.

Is there some criterion by which you can distinguish the most dangerous models, regulate them the most, and not do that on less dangerous models? Whatever that criterion is, it’s probably going to change over time. You really want a group of scientists to work that out and update it periodically; you don’t want a group of senators to work that out—no offense. They just don’t have the training or the process to do that.

AI is going to become as important as any other Cabinet-level office, because it is so pervasive. There should be a Cabinet-level AI office. It was hard to stand up other agencies, like Homeland Security. I don’t think Washington, from the many meetings I’ve had there, has the appetite for it. But they really need to do that.

At the global level, whether it’s part of the UN or independent, we need something that looks at issues ranging from equity to security. We need to build procedures for countries to share information, incident databases, things like that.

Beres: There have been harmful AI products for years and years now, before the generative-AI boom. Social-media algorithms promote bad content; there are facial-recognition products that feel unethical or are misused by law enforcement. Is there a major difference between the potential dangers of generative AI and of the AI that already exists?

Marcus: The intellectual community has a real problem right now. You have people arguing about short-term versus long-term risks as if one is more important than the other. Actually, they’re all important. Imagine if people who worked on car accidents got into a fight with people trying to cure cancer.

Generative AI actually makes a lot of the short-term problems worse, and makes some of the long-term problems that might not otherwise exist possible. The biggest problem with generative AI is that it’s a black box. Some older techniques were black boxes, but a lot of them weren’t, so you could actually figure out what the technology was doing, or make some kind of educated guess about whether it was biased, for example. With generative AI, nobody really knows what’s going to come out at any point, or why it’s going to come out. So from an engineering perspective, it’s very unstable. And from a perspective of trying to mitigate risks, it’s hard.

That exacerbates a lot of the problems that already exist, like bias. It’s a mess. The companies that make these things are not rushing to share that data. And so it becomes this fog of war. We really have no idea what’s going on. And that just can’t be good.

P.S.

This week, The Atlantic’s David Sims named Oppenheimer the best film of the year. That film’s director, Christopher Nolan, recently sat down with another one of our writers, Ross Andersen, to discuss his views on technology—and why he hasn’t made a film about AI … yet.

— Damon


