He also used his fiction to entertain a foreboding question: Should a robot be able to kill a human?
Asimov decided not, and drew up three "laws of robotics" that governed how robots behaved in his fictional universes.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In short: human life is revered above all else. Not only is a robot forbidden from harming a person, but robots are universally charged with protecting people. It's an effective system that sets up a number of entertaining plots in his writing, and the takeaway is clear: In an Asimovian universe where things are operating normally, it is impossible for a robot to harm a human.
(Futurist and writer Ray Kurzweil famously talks about the singularity, an indeterminate point in the future when machine capability will overtake that of man, which he maintains is only a matter of time.)
We were surprised when, in casual conversation, an acquaintance expressed relief at the existence of Asimov's laws, as if they offered some sort of actual protection from would-be robot overlords. Let's be clear here: Asimov's three laws of robotics are a fictional creation with no real-world bearing on robot behavior. In this sense they're a lot like The Force: fun to contemplate, but useless in defending oneself from sci-fi monsters.
This raises the question: What checks actually are in place to prevent some sort of robot uprising in the future?
None.