After his interview at MIT’s Centennial Symposium in 2014, technocrat-extraordinaire Elon Musk opened himself up to the audience for questions. One audience member asked for his thoughts on Artificial Intelligence, and that’s when Musk’s expression changed. He suddenly became very serious.
I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. There should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish.
Musk then went on to warn us about AI, using some notably esoteric language to describe what he believes to be “our greatest existential threat”:
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, ‘Yeah, he’s sure he can control the demon.’ Doesn’t work out.
Most people interpreted Musk’s words as a simple analogy for the possible dangers of creating an advanced AI, but perhaps his choice of words deserves more careful scrutiny.
In fact, that is what this essay is about, for I believe that Musk chose his words very carefully. In other words, he might be right in the sense that you can’t “create” (artificial) intelligence, but you can “summon” it.
Artificial Intelligence vs Conscious Computers
“Artificial Intelligence” (AI) has become a popular buzzword these days, so first I want to distinguish between what’s often referred to as “AI” and the concept of a conscious computer.
“AI” is often used interchangeably with the term “machine learning” to denote a computer system that has the ability to “learn” how to perform certain tasks with extreme accuracy, far exceeding what a human could do.
These systems are based on complex algorithms that ingest large amounts of “training data”, and as this training data is fed into the system, it updates its internal parameters to achieve greater and greater levels of predictive accuracy. In other words, the system optimises towards a specific outcome or goal.
One of the most popular machine learning models (especially for image analysis) is called an “Artificial Neural Network” (ANN). Don’t let the name impress you too much because although some people claim that ANNs attempt to replicate the functioning of the human brain, this isn’t exactly correct.
These models are called “artificial neural networks” because they consist of multiple interconnected nodes, arranged in layers (similar to how the brain consists of billions of interconnected neurons). A data point is fed into the model, and once it has been propagated through the network, a certain result is output. Then, through a process called “backpropagation”, the ANN adjusts its weights in order to achieve a better result the next time round. This process is repeated until the neural net has “learned” to output extremely accurate results.
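To make the loop described above concrete, here is a deliberately minimal sketch in plain Python: a single sigmoid “neuron” rather than a full multi-layer network, trained with a backpropagation-style gradient update. The choice of the OR function as training data, the learning rate, and the epoch count are all illustrative assumptions, not details from the text.

```python
import math
import random

random.seed(0)

# Training data for the OR function: a tiny, linearly separable
# problem that a single neuron can learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One "neuron": two weights plus a bias, initialised to small random values.
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = random.uniform(-0.5, 0.5)
lr = 1.0  # learning rate

def loss():
    return sum((sigmoid(w[0]*x[0] + w[1]*x[1] + b) - y) ** 2 for x, y in data)

initial = loss()

for epoch in range(2000):
    for x, y in data:
        # Forward pass: propagate the data point through the "network".
        out = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        # Backpropagation (single-neuron case): gradient of the squared
        # error through the sigmoid, d/dz (out - y)^2 = 2*(out-y)*out*(1-out).
        grad = 2 * (out - y) * out * (1 - out)
        # Adjust the weights to do better on the next pass.
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

final = loss()
print(final < initial)  # the total error shrinks as the weights are adjusted
```

Repeating this forward-pass/adjust cycle is exactly the “learning” the text describes; a real ANN does the same thing across many layers of such units at once.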
Again, despite the name, ANNs are actually quite simple. In fact, using an open source library (like Python’s scikit-learn, or equivalent packages in R), you can write your very own ANN in less than 10 lines of code! Technically, anyone has access to this technology, although the most powerful models are proprietary and far more complex than what most people could create. A good example of this is Google DeepMind’s chess-playing program “AlphaZero”, which can wipe the floor with any grandmaster.
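As an illustration of that “less than 10 lines” claim, here is a small neural network built with scikit-learn, assuming the library is installed; the dataset, hidden-layer size, and iteration count are illustrative choices rather than anything prescribed by the text.

```python
# A tiny "artificial neural network" in a handful of lines of scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)  # a small, classic flower dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# One hidden layer of 10 nodes; the weights are adjusted via backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data
```

The point is not that this toy model is powerful, but that the basic machinery is freely available to anyone; the gap between this and something like AlphaZero lies in scale, data, and engineering, not in secret mathematics.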