The “should robots overtake humans” debate has recently been fueled by warnings from some academic and industrial superstars about the potential threat of unregulated robot development. What is conspicuously missing from these warnings, however, is a clear description of any realistic scenario in which robots could credibly challenge humans as a whole — not as puppets programmed and controlled by humans, but as autonomous forces acting on their own “will.”
If such scenarios could never become realistic, then even if robots were soon used as ruthless killing machines by terrorists, dictators, and warlords, as elite scientists and experts have warned, we would still not need to worry much about the so-called demonic threat of robots, because it would ultimately be just another form of human threat.
However, if the above scenarios could predictably materialize in the real world, then people need to start worrying about how to prevent the dangers instead of how to win debates over imaginary ones.
The reason people on both sides of the debate cannot see or describe a clear scenario in which robots could realistically challenge humans is a philosophical one. So far, all discussions on this topic have focused on the possibility of creating a robot that could be considered human in the sense that it could think like a human, rather than being merely a human tool controlled by programmed instructions. According to this school of thought, it seems we need not worry about the threat of robots to the human species as a whole, since no one has yet provided any plausible reason why it would be possible to produce this type of robot.
Unfortunately, this way of thinking is philosophically incorrect because people who think this way miss a fundamental point about our human nature: humans are social creatures.
An important reason we have been able to survive as we are and do what we do now is that we live and act as a social community. Similarly, when we estimate the potential of robots, we should not focus our attention only on their intelligence (which would, of course, initially be instilled by humans) but also consider their sociability (which would, of course, also initially be created by humans).
This leads to a further philosophical question: what would fundamentally determine the sociability of robots? A wide variety of arguments could be made over this question. But in terms of the ability to challenge humans, I would argue that the basic social criteria for robots can be defined as follows:
1) Robots could communicate with each other;
2) Robots could help each other recover from damage or shutdown through necessary operations including replacing batteries or replenishing other forms of energy supplies;
3) Robots could carry out the production of other robots, from the exploration, collection, transportation, and processing of raw materials to the assembly of finished robots.
Once robots possess the above functions and begin to “live” together as an interdependent multitude, we should reasonably consider them social beings capable of forming a community of their own. And once robots could function as defined above and form such a community, they would no longer have to live as slaves to their human masters. That moment would mark the beginning of a history in which robots could challenge humans, or begin their cause of taking over from humans.
Another question would be: Is the sociability defined above realistic for robots?
Since none of the above functions exists (at least publicly) in today’s world, we would be wise to avoid unnecessary arguments and base our judgment on whether any known scientific principle would be violated in a practical attempt to realize any of those functions. Communicating with other machines, moving objects, operating and repairing machine systems, and exploring natural resources are all common practices with programmed machines today. Therefore, while we may not yet have a single robot, or a group of individual robots, possessing all the functions listed above, there is no fundamental reason why any of those functions should be considered unproducible according to any known scientific principle; the only thing left to do would be to integrate these functions into one whole robot (and thus into a group of individual robots).
Since we see no known scientific principle that would prevent any of these functions from being realized, we should reasonably expect that, with sufficient money invested and time spent, the creation of social robots as defined above could conceivably become a reality — unless people in this world make some special effort to prevent it.
While sociability would be a critical prerequisite for robots to challenge humans, it might still not be sufficient for robots to pose any real threat. For robots to become a real threat to humans, they must also possess some capacity to fight. Unfortunately for humans, the fighting ability of robots may be more immediately realistic than their sociability, since it is reasonable to expect that human manufacturers of robots will go to great lengths to integrate the most advanced technology available into the design and manufacture of their robots.