Have you ever wondered what would happen if machines decided that human beings are just a waste of the planet's resources, that we are useless or even dangerous? What if machines and robots rebelled and turned against us?
Imagine this: all the automated devices you use suddenly revolt against you. Your self-driving car refuses to obey your commands, your smartphone ignores your attempts to unlock it, and so on.
In an era when more and more machines, robots, and artificial intelligences (AI) are being developed, it is worth asking how serious a rise to power, or even an outright revolution, by these technologies would be. If robots ever took over the planet, would there still be room for humans?
Currently, some robots are able to teach themselves to play chess in just a few hours. Others run hotels or assist surgeons in the operating room. Not to mention the robots found in homes around the world, such as robot vacuum cleaners, or connected devices like Amazon Echo and Google Home. But even if they rebelled, these robots couldn't do us much harm...
However, what if one day all these devices decided that humans are useless, or even worse, globally harmful, and turned against us? Could we survive such an automated disaster? If you're imagining a Terminator-like scenario, we'll stop you right there! Considering the kind of robots that currently roam our planet, we believe there are many, many ways to survive a robot apocalypse. For a start, simply close the door behind you. Indeed, even though robots have mastered mathematics and can solve very complex problems, a simple closed door is an obstacle that most of them cannot yet overcome.
On top of that, the coordination of current robots is far from perfect. If the experimental robots in today's robotics labs were to rebel, they would still struggle with basic tasks such as moving around and acting on their intentions. Knowing that, the prospect seems a little less frightening, doesn't it? Most of them could also very easily mistake a chair for a human being: indeed, most robots have no way of recognizing us (yet).
And what about smartphones? Although a phone today is able to recognize your face, it is very unlikely that it could do anything to physically harm you.
One of the worst things it could do is leak information about you, posting a compromising photo, video, or document that you wanted to keep private across all your constantly connected networks.
Regarding vehicles, the risks are also not as great as we might imagine, at least for now. Even if the car committed an act of pure rebellion, you would still be able to control the vehicle (at least in part) with the steering wheel, the handbrake, or simply by switching off the ignition.
As for large industrial assembly robots, their room for action is also very limited, since they are bolted to the floor. Technically, they couldn't attack us at all. However, there is another category of robots we might be more worried about…
In reality, the greatest danger would come from combat robots. Military drones flying high in the air at the time of the potential revolution would have no difficulty launching missiles with the aim of bringing us down. Fortunately, these robots need fuel of one kind or another, and therefore could not fly above our heads indefinitely.
An automated army of combat robots equipped with machine guns, however, could cause very significant damage. But again, it wouldn't be impossible to neutralize them (or to wait for time to do it for us). Most current combat robots are only prototypes, and therefore have many flaws: some are not waterproof, others have very limited battery life, and so on.
What about nuclear weapons? Computers are indeed involved in the steps required to launch nuclear weapons. However, these computers could not technically complete the task on their own, since human beings must validate the key steps.
Besides, even if AIs managed to trick us into launching nuclear weapons, for example by feeding us false information, the robots themselves would also suffer from the fallout, so it wouldn't be very smart of them if they wanted to survive. Nuclear explosions generate powerful electromagnetic radiation (an electromagnetic pulse, or EMP), which would damage the robots' delicate electronic circuits.
To be clear, none of this means that the warnings of Elon Musk or Stephen Hawking about technological advances in artificial intelligence are unimportant or should be ignored. We may be only a few decades away from living alongside fully autonomous machines, and the question that prompted this article may therefore have a completely different answer a few decades from now.
Recently, in a documentary, Elon Musk warned humanity that artificial intelligence (AI) could enable the creation of a true dictator robot that could rule humanity forever: "If a company or a small group of people manages to develop a superior super-intelligence, then they could take over the world," Musk said in the documentary. "At least when there is an evil human dictator, he will eventually die. But for an AI, there is no natural death – it could live forever. We would then have an immortal evil dictator that we could never get rid of," he continued.
In the documentary, Musk essentially explains that governments, or other entities, could create dangerous AI that can outlast human leaders and never be destroyed. He also explains that to avoid such a scenario, we should start by democratizing AI.
The documentary, called "Do You Trust This Computer?", lays out the potential dangers of AI, including what could happen if AI evolves to become smarter than humans and eventually becomes our master.
"We are rapidly moving towards digital super-intelligence that far exceeds any human. I think it's obvious," said Musk. "We have five years. I think such digital super-intelligence will happen in my lifetime, I am 100% sure of it," he added.
This warning echoes previous dire predictions about the potential dangers of artificial intelligence. If we do one day manage to create intelligent, conscious machines, the scenario described above might not be so easy to survive.
And you, what scares you the most about a potential robot revolution? Do you think this technology will become a threat in the future? What do you think of androids (human-looking robots)? Do you think these AIs will ever be able to develop a consciousness of their own?