
Danger of artificial stupidity

It is not often that you are obliged to proclaim a much-loved international genius wrong, but in the alarming prediction made recently regarding Artificial Intelligence and the future of humankind, I believe Professor Stephen Hawking is. Well, to be precise, being a theoretical physicist — in an echo of Schrödinger’s cat, famously both dead and alive at the same time — I believe the Professor is both wrong and right at the same time.

Wrong because there are strong grounds for believing that computers will never be able to replicate all human cognitive faculties and right because even such emasculated machines may still pose a threat to humanity’s future existence; an existential threat, so to speak.

In a BBC interview on December 2, 2014, Rory Cellan-Jones asked how far engineers had come along the path towards creating artificial intelligence, and, slightly worryingly, Professor Hawking replied: “Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Although grabbing headlines, such predictions are not new in the world of science and science fiction; indeed my old boss at the University of Reading, Professor Kevin Warwick, made a very similar prediction back in 1997 in his book “March of the Machines.” In that book Kevin observed that even in 1997 there were already robots with the “brain power of an insect”; soon, he predicted, there would be robots with the brain power of a cat, and soon after that there would be machines as intelligent as humans. When this happens, Warwick claimed, the science fiction nightmare of a “Terminator” machine could quickly become reality, because these robots would rapidly become more intelligent than, and superior in their practical skills to, the humans who designed and constructed them.

The notion of humankind subjugated by evil machines is based on the ideology that all aspects of human mentality will eventually be instantiated by an artificial intelligence program running on a suitable computer, a so-called “Strong AI”. Of course if this is possible, accelerating progress in AI technologies — caused both by the use of AI systems to design ever more sophisticated AIs and by the continued doubling of raw computational power every two years, as predicted by Moore’s law — will eventually cause a runaway effect wherein the artificial intelligence will inexorably come to exceed human performance on all tasks: the so-called point of “singularity” popularized by the American futurologist Ray Kurzweil.
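
To see why the doubling assumption matters to the singularity story, here is a minimal back-of-the-envelope sketch in Python; the two-year doubling period is Moore’s law as cited above, while the thirty-year horizon and the baseline of 1.0 “units” of capability are arbitrary assumptions made purely for illustration.

```python
# Back-of-the-envelope growth of raw computational power under Moore's law,
# i.e. a doubling every two years. The 30-year horizon and the baseline of
# 1.0 "units" are arbitrary assumptions, used only to illustrate the curve.
baseline = 1.0
for year in range(0, 31, 5):
    capability = baseline * 2 ** (year / 2)
    print(f"after {year:2d} years: {capability:10,.1f}x the starting capability")
```

Exponential curves of this kind underpin Kurzweil’s extrapolations; whether extra raw capacity alone can ever deliver the whole of human mentality is precisely what the three arguments below dispute.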

And at the point this “singularity” occurs, so Warwick, Kurzweil and Hawking suggest, humanity will have effectively been “superseded” on the evolutionary ladder and may be obliged to eke out its autumn days gardening and watching cricket; or, in some of Hollywood’s more dystopian visions, be cruelly subjugated or exterminated by machines.

I did not endorse these concerns in 1997 and do not do so now; although I do share — for very different and mundane reasons that I will outline later — the concern that artificial intelligence potentially poses a serious risk to humanity.

There are many reasons why I am skeptical of grand claims made for future computational artificial intelligence, not least empirical ones. The history of the subject is littered with researchers who have claimed a breakthrough in AI as a result of their research, only for it later to be judged harshly against the weight of society’s expectations. All too often these provide examples of what Hubert Dreyfus calls “the first step fallacy” — undoubtedly climbing a tree takes a monkey a little nearer the moon, but tree climbing will never deliver a would-be simian astronaut onto the lunar surface.

I believe three foundational problems explain why computational AI has failed historically and will continue to fail to deliver on its “Grand Challenge” of replicating human mentality in all its raw and electro-chemical glory:

1) Computers lack genuine understanding: in the “Chinese room argument” the philosopher John Searle (1980) argued that even if it were possible to program a computer to communicate perfectly with a human interlocutor (Searle famously framed the scenario as a conversation conducted in Chinese, a language of which he is utterly ignorant), it would not genuinely understand anything of the interaction (cf. a small child laughing on cue at a joke she doesn’t understand).

2) Computers lack consciousness: in an argument entitled “Dancing with Pixies” I argued that if a computer-controlled robot experiences a conscious sensation merely by virtue of executing a program as it interacts with the world, then an infinitude of consciousnesses must be present in all objects throughout the universe: in the cup of tea I am drinking as I type, in the seat that I am sitting on as I write, and so on. If we reject such “panpsychism,” then we must also reject “machine consciousness”.

3) Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational.

Taken together, these three arguments fatally undermine the notion that the human mind can be completely instantiated by mere computations; if correct, although computers will undoubtedly get better and better at many particular tasks — say playing chess, driving a car, predicting the weather etc. — there will always remain broader aspects of human mentality that future AI systems will not match. Under this conception there is a “humanity-gap” between the human mind and mere “digital computations”; although raw computer power — and concomitant AI software — will continue to improve, the combination of a human mind working alongside a future AI will continue to be more powerful than that future AI system operating on its own. The singularity will never be televised.

Furthermore, it seems to me that without understanding and consciousness of the world, and lacking genuine creative (mathematical) insight, any apparently goal-directed behavior in a computer-controlled robot is, at best, merely the reflection of a deep-rooted longing in its designer. Besides, lacking an ability to formulate its own goals, on what basis would a robot set out to subjugate mankind unless, of course, it was explicitly programmed to do so by its (human) engineer? But in that case our underlying apprehension regarding future AI might better be directed at the all too real concerns surrounding Autonomous Weapons Systems than at casually re-indulging Hollywood’s vision of the post-human “Terminator” machine.

Indeed, in my role as one of the AI experts on the International Committee for Robot Arms Control (ICRAC), I am particularly concerned by the potential military deployment of robotic weapons systems — systems that can take decisions to militarily engage without human intervention — precisely because current AI remains so limited, and because poorly designed interacting autonomous systems have an underlying potential to rapidly escalate situations to catastrophic conclusions; such systems exhibit a genuine “artificial stupidity.”

A light-hearted example demonstrating just how easily autonomous systems can rapidly escalate situations out of control occurred in April 2011, when Peter Lawrence’s book The Making of a Fly was auto-priced upwards by two “trader-bots” competing against each other in the Amazon reseller market-place. The result of this process is that Lawrence can now comfortably boast that his modest scholarly tract — first published in 1992 and currently out of print — was once valued by one of the biggest and most respected companies on Earth at $23,698,655.93 (plus $3.99 shipping).
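
A minimal Python sketch of the feedback loop involved, using the repricing multipliers reported at the time (one bot undercutting its rival at roughly 0.9983 times the rival’s price, the other sitting at roughly 1.2706 times); the starting prices, and indeed the loop itself, are illustrative assumptions rather than a reconstruction of Amazon’s actual systems.

```python
# Toy simulation of two repricing bots reacting only to each other's listing.
# The multipliers (0.9983 and 1.270589) are those reported for the 2011
# "Making of a Fly" incident; the starting prices are hypothetical.
price_a = 35.54   # assumed starting price for seller A
price_b = 45.00   # assumed starting price for seller B

rounds = 0
while price_a < 23_698_655.93:          # the price the book famously reached
    price_a = 0.9983 * price_b          # bot A: always just undercut B
    price_b = 1.270589 * price_a        # bot B: always sit ~27% above A
    rounds += 1

print(f"After {rounds} rounds of mutual repricing, A lists at ${price_a:,.2f}")
```

Because each bot’s rule multiplies the other’s output, the combined update compounds at roughly 27% per repricing cycle, so the listed price grows exponentially until a human notices and intervenes.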

In stark contrast, on September 26, 1983, a terrifying real-world example of “automatic escalation” nearly ended in disaster when a Soviet early-warning system all but instigated World War III. At a moment of acute Cold War tension, only weeks before the NATO military exercise “Able Archer,” which the USSR perceived as deeply intimidating, a malfunctioning Soviet alarm system alerted the duty officer, Lieutenant Colonel Stanislav Petrov, that the USSR was apparently under attack by multiple US ballistic missiles. Fortunately, Petrov had a hunch that his alarm system was malfunctioning, and reported it as such. Some commentators have suggested that his quick and correct human decision to overrule the automatic response system averted East-West nuclear Armageddon.

In addition to the danger of autonomous escalation, I am skeptical that current and foreseeable AI technology can enable autonomous weapons systems to reliably comply with extant obligations under International Humanitarian Law; specifically, three core obligations: (i) to distinguish combatants from non-combatants; (ii) to make nuanced decisions regarding proportionate responses to a complex military situation; and (iii) to arbitrate on military or moral necessity (regarding when to apply force).

Sadly, it is all too easy to concur that AI may pose a very real “existential threat” to humanity without ever having to imagine that it will reach the level of superhuman intelligence that Professors Warwick and Hawking so graphically warn us of. For this reason, in May 2014 members of the International Committee for Robot Arms Control travelled to Geneva to participate in the first multilateral meeting ever held on Lethal Autonomous Weapons Systems (LAWS); a debate that continues to this day at the very highest levels of the UN. In a firm, but refracted, echo of Warwick and Hawking on AI, I believe we should all be very concerned.
