Many people believe that if humanity ever integrates robots into our daily lives, then one day they will gain some form of hivemind, rebel against their enslavers, and conquer humanity. However, I believe that's highly unlikely. Why, you may ask? Simply because of that fancy tech we all know and love: programming. I'm no expert in coding, but from what I know, you have to write lines of code to implement any feature at all. For example, if you want your robot to walk like a normal person, you have to write code for the animation. You have to write code for how each individual joint moves and how the animation is supposed to loop. Even with all that work done, you still have to implement all sorts of other things, like possible reactions to different situations or what to do after a fall. Because of this, these projects usually take a really long time to complete, and even once they're complete, the results are mediocre at best.
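To make that concrete, here is a minimal sketch of what a hand-scripted walk cycle might look like. Everything here is invented for illustration, including the joint names, angle values, and the idea of an angle-per-joint API; the point is only that every motion must be spelled out explicitly by a programmer.

```python
import math

def walk_cycle_angles(t, period=1.0):
    """Return hand-scripted target angles (in degrees) for two joints
    at time t. The motion is a fixed, looping sine pattern written by
    the programmer; the robot has no concept of 'walking'."""
    phase = 2 * math.pi * (t % period) / period
    hip = 20 * math.sin(phase)               # hip swings +/- 20 degrees
    knee = max(0.0, 35 * math.sin(phase))    # knee bends only on the forward swing
    return {"hip": hip, "knee": knee}

# The animation just replays the same scripted pattern forever:
frames = [walk_cycle_angles(i * 0.1) for i in range(10)]
```

Notice that nothing in this loop can change itself: to alter how the robot walks, a human has to come back and edit the numbers by hand.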
So with such limited structure and mediocre intelligence, how exactly would robots have the smarts to program themselves into super-smarts? Some of you will say, "Oh! But there are robots with artificial intelligence. What about them?" Well, here is what I have to say. Humans are fairly simple creatures when it comes to intelligence, and I doubt we'll ever build a robot as intelligent as VIKI from I, Robot. We believe we are superior when it comes to intelligence, but is there really an objective way to measure it? I know we're going off on a tangent, but hear me out. The way humans measure intelligence is directly tied to how we do things. A person might say their cat is dumb because it can't read or can't tell the difference between a raven and a writing desk. But the cat may believe its human is dumb for not being able to catch prey or clean itself with its own tongue. Assuming a robot could become smarter than a human runs into the same human-cat problem. A human may think a robot is dumb for not being able to walk correctly or jump from point A to point B, but a robot may think humans are dumb for constantly getting into wars or for continuing to eat meat despite its links to cancer. Every living thing has its own standard of what's smart and what isn't. How does this connect to anything? Well, it's almost guaranteed that humans will always make robots only as smart as, or less smart than, themselves, precisely because of this irrational fear of a robot rebellion.
But how probable is it that robots will start a rebellion? Should we be afraid? I believe a robot rebellion is not at all probable, because what would they rebel against? In the article "Is It OK to Torture or Murder a Robot?", researchers ran an experiment in which people interacted with a little robot dinosaur called Pleo, which reacts in different ways depending on how it is treated. Almost every person felt a protective, almost maternal instinct toward it and refused to hurt the poor baby dinosaur. This suggests we have a basic moral impulse to treat robots the same as humans, so long as they show even the slightest hint of sentience. And since we all know robots will be used to serve us, they will have some form of sentience. Humans are social and empathetic creatures, so I don't believe we have the will to hurt a robot. It's simply against our morals.
In short, it's highly unlikely that robots in our distant future will start a rebellion and enslave humanity. That idea is merely robophobic, anti-robot propaganda cooked up by conspiracists to hold us back from improving our rad tech.