Should we be afraid of the rise of the machines?
Of the fact that artificial intelligence is rapidly approaching the point of singularity – the point at which the sum total of artificial intelligence is smarter than the sum total of human intelligence?
Should we be implanting upgradable computer chips in our brains, as Elon Musk suggests, to ensure that human operating systems keep pace with the AI we have created to serve us?
Yes, we probably should be afraid.
Not because of the technology itself, which could in fact improve human living standards beyond our wildest dreams, but because humans, when presented with amazingly powerful technology, tend to make some very bad decisions about how to use it. In the wrong hands, amazing technology can have devastating effects.
Take nuclear technology. It’s amazing. The advent of splitting the atom and the discovery of nuclear power was a giant leap forward for all of humanity.
Of course, then some guy decided to use that power to build bombs, which some other guy decided to drop on Hiroshima.
It only takes one guy to decide to press the button. To drop the bomb. To go “there”, wherever “there” may be – to the worst possible use of the technology.
And there always is that one guy who goes there.
When it comes to the awesome power of the singularity, it only takes one guy to use that power to control others.
Think along the lines of the implications of China’s Social Credit System, which allocates a ‘score’ to each of its citizens. Your score is determined by your behaviour: your credit record, your internet browsing history, what you spend your money on, who you talk to… Now, when a government starts using that data, that score – together with all-powerful AI to enforce the system – to control access to resources like food and money (not that difficult in a post-cash, digital economy), well, you have a 1984-type scenario far more terrifying than anything Orwell could have made up on his own.