Posted by Richard Willett – memes and headline comments by David Icke. Posted on 21 July 2023

The Military Dangers of AI Are Not Hallucinations

I give myself credit for being significantly ahead of my time. I first came across artificial intelligence (AI) in 1968 when I was just 24 years old and, from the beginning, I sensed its deep dangers. Imagine that.

Much as I’d like to brag about it, though, I was anything but alone. I was, in fact, undoubtedly one of millions of people who saw the movie 2001: A Space Odyssey, directed by Stanley Kubrick from a script written with Arthur C. Clarke (inspired by a short story, “The Sentinel,” that famed science-fiction writer Clarke had produced in – yes! – 1948). AI then had an actual name, HAL 9000 (but call “him” Hal).

And no, the first imagined AI in my world did not act well, which should have been (but didn’t prove to be) a lesson for us all. Embedded in a spaceship heading for Jupiter, he killed four of the five astronauts on it and did his best to do in the last of them before being shut down.

It should, of course, have been a warning to us all about a world we would indeed enter in this century. Unfortunately, as with so many things that are worrying on planet Earth, it seems that we couldn’t help ourselves. HAL was destined to become a reality – or rather endlessly multiplying realities – in this world of ours. In that context, TomDispatch regular Michael Klare, who has been warning for years about a “human” future in which “robot generals” could end up running armed forces globally, considers wars to come, what it might mean for AI to replace human intelligence in major militaries globally, and just where that might lead us. I’m not sure that either Stanley Kubrick or Arthur C. Clarke would be surprised. ~ Tom Engelhardt

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes. But an even more dangerous scenario is imaginable amid the proliferation of super-intelligent machines: those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The Terminator movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

