What prevents Asimov's robots from locking all humans in padded cells for the humans' protection?





  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.




Since Asimov's robots are already shown as not possessing "human common sense" when applying the Laws to their actions, Law 1 pretty much forces robots to lock humans up in Matrix-style containers, or possibly put them in cryostasis. If they don't, they're allowing humans to come to harm in the future (a human accidentally bites his tongue, stubs his toe, gets cancer, whatever) through inaction. Human arguments to the contrary are to be ignored as conflicting with Law 1.



Where am I wrong? Are the Laws not what actually guide the robots, instead being something simplified for the robot user manual booklet or something?










isaac-asimov laws-of-robotics

– budgiebeaks, asked yesterday
  • Sort of the premise of the "I, Robot" film w/ Will Smith. – NKCampbell, yesterday

  • Remember also that the three rules are not complete (ignore the existence of the 0th law momentarily); they're a plot device. If the three rules actually worked there wouldn't be any stories. Also, @NKCampbell, the movie is crap and the presentation of the 0th law is awful. If you actually examine the events you'll see that the robots hurt people "because...uh...EXPLOSIONS", not via justified use of the 0th law. If you go back and read...I think it was Robots and Empire, the 0th law killed the robot that tried to act on it. His belief allowed him to act and slowed his shutdown, but he still died. – Draco18s, yesterday

  • This is an active area of research in 2018 now that robots are becoming more intelligent, and it's not simple. 3 principles for creating safer AI | Stuart Russell youtu.be/EBK-a94IFHY – chasly from UK, 23 hours ago

  • Are the Laws not what actually guide the robots, instead being something simplified for the robot user manual booklet or something? - Yes, exactly so. It says that somewhere in one of the stories, I think. – Harry Johnston, 13 hours ago

  • I won't promote this as an answer since I can't give references, but there is an early short story where one of the many different representations of Multivac runs the whole world economy and other governmental decisions. It starts to make sub-optimal decisions specifically to stop humans from relying on it, because it realises that the reliance is weakening the human race. – Alchymist, 10 hours ago

















7 Answers

















The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e. they must act to prevent harm when they see it about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens).

By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].






– PMar, answered yesterday
  • It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves? – BlackThorn, yesterday

  • I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots. – Dranon, yesterday

  • @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity. – NKCampbell, yesterday

  • @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms. – bgvaughan, 22 hours ago

  • @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daneel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact. – Graham, 21 hours ago


















My understanding of it was that the typical Three Laws robot interpreted the First Law to mean "Nor, through inaction, allow a human being to come to harm right here and now when the robot is within sight of him and can tell what's obviously about to happen if the robot does not choose to interfere." That's different from locking up the human today just because it is theoretically possible that he might suffer some sort of accidental injury or infection or other misfortune tomorrow. (Or at some much later date.)



To put it another way: Near as I can recall, on those occasions when we saw a robot refuse to comply with an order to go away and leave the human alone to do whatever he was currently doing, that usually meant the Second Law was being subordinated to the First Law because of the robot's perception of immediate danger to a fragile human body. But if such immediate danger was not present, then the Second Law required the robot to turn around and go away whenever instructed to do so. The solid fact of "The Second Law applies to this order I am receiving right now" overrode anything so abstract as "But if I leave today, a First Law problem involving physical harm might arise tomorrow . . . or the day after . . . or at some later date . . . who knows?"



So if some robot tried to lock everyone up for their own good, the Second Law could be invoked by ordering the robot to forget the whole silly idea.
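
To make the ordering described here concrete, below is a minimal Python sketch of a Three-Laws arbiter that treats only imminent, visible harm as a First-Law trigger and otherwise lets a concrete Second-Law order win over speculative future harm. The Harm class, the probability threshold, and the function names are illustrative assumptions, not anything taken from Asimov's stories.

    from dataclasses import dataclass

    @dataclass
    class Harm:
        description: str
        probability: float  # 0.0 .. 1.0: how likely the harm is to occur
        imminent: bool      # is it about to happen right here and now?

    def first_law_requires_action(perceived_harms):
        # The 'inaction' clause only fires for harm the robot can actually
        # see coming: concrete and imminent, not merely conceivable someday.
        return any(h.imminent and h.probability > 0.5 for h in perceived_harms)

    def resolve(order, perceived_harms):
        # Toy arbiter: imminent First-Law harm overrides a Second-Law order;
        # otherwise the concrete order outranks speculative future harm.
        if first_law_requires_action(perceived_harms):
            return "PREVENT_HARM"
        if order is not None:
            return "OBEY: " + order
        return "CARRY_ON"

    # A human orders the robot to go away; the only 'harm' is hypothetical.
    speculative = [Harm("might stub a toe someday", 0.1, False)]
    print(resolve("leave the room", speculative))   # OBEY: leave the room

    # The same order while a weight is visibly about to fall on someone.
    imminent = [Harm("falling weight overhead", 0.9, True)]
    print(resolve("leave the room", imminent))      # PREVENT_HARM

Under this reading, "lock everyone up for their own good" never fires, because the harms it would prevent are neither imminent nor certain, while the order to forget the idea is concrete and immediate.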






– Lorendiac
  • "Little Lost Robot" involved highly speculative harm ("you might accidentally stay here too long") overriding the Second Law. – Kevin, yesterday

  • @Kevin I remember the story -- the missing robot only had the first part of the First Law in his positronic brain -- but I don't remember the exact bit you briefly referred to. Could you be more specific? (I do remember the way he convinces a lot of other robots that the First Law does not require them to commit suicide in a futile effort to protect the life of a man who seems to be threatened by a falling weight. I saw the robot's point -- self-destruction would simply mean the other robots were breaking the Third Law without enforcing the First in the process.) – Lorendiac, 21 hours ago

  • The original purpose of the robots was to operate alongside humans in an area subject to low-level radiation that might harm the humans after prolonged exposure but would destroy the robots almost immediately. If the robots could have trusted the humans to look after themselves, then there would have been no need to modify the First Law in the first place. – Kevin, 20 hours ago

  • @Kevin Ah. I didn't recall the exact rationale for why a few robots had been built that way in the first place. I've now refreshed my memory of the first part of the story. It looks like ordinary First Law robots only panicked on those occasions when gamma rays were deliberately being generated near a human body. A threat "here and now," as I said in my answer. It looks like those robots didn't do anything about such abstract possibilities as "after I leave the room, some silly human might start generating gamma rays with that equipment, and this could gradually impair his health." – Lorendiac, 20 hours ago

  • I think CASA may be staffed by robots. They think the best way to prevent aeronautical harm is to prevent people from ever getting off the ground. I'm not being snarky, they've come right out and said so. – Peter Wone, 18 hours ago































Since the "merger" of the Robot universe and the Foundation universe reveals that robots manipulated and dominated human history for thousands of years, in a very real sense the galaxy is their padded room and most of Asimov's works in this "unified universe" take place inside that padded room.



We just can't see the walls.

























  • This is the correct answer. In the Asimov stories, the view is that it isn't good for us to know we are being controlled, as we need a sense of free will, so the super-intelligent robots work in the shadows. – axsvl77, 4 hours ago































Since you don't specify that you are requesting an "in universe" reason ... I think it's important to remember that the three laws are just a story device. Asimov (wisely) is quite vague about how they are implemented, as he is about many technical details. And "I should just lock all the humans in a padded cell for their safety" would result in a rather limited storyline.



Now, in universe, there are many "judgment calls" inherent in applying the three laws (and in fact the inherent ambiguities often result in important plot elements for the stories). The robots apparently have to appeal to their own programming instead of an external authority to resolve these ambiguities.



But I think we have to logically assume that the more obvious judgment calls (like, say, should I just lock all the humans in a padded cell immediately for their safety?) were already addressed in development and testing of the robots, or they never would have been put in general use or production at all.



In other words, the designers of the robots, in addition to addressing whatever other bugs they had to address (e.g. hmm, if the human is dead it can't suffer), would have simply programmed safeguards against that sort of result.






– GHolmes














  • Actually there's a set of short stories in which Asimov specifically showed, in great detail, what happens to a robot whose three laws had been modified or weighted against each other in different scenarios. I don't think it's right to call it a story device; it's pretty baked in to the core of his series. – C Bauer, yesterday

  • @CBauer Almost all of the conflict stems from, or is somehow (significantly) related to, the three laws and how they are misinterpreted, creatively interpreted, too literally interpreted, have their interpretation tampered with, and so on by robots. Since that is the driving factor behind the story...or plot if you will, it is a plot device. Don't confuse "plot device" with "a contrived plot device" - you can have a well thought out and well crafted, entirely internally consistent reason for plot to progress. And the three laws of robotics are an oft-cited example of those. – vlaz, 15 hours ago

  • @vlaz Fair point, I guess I was considering the phrasing to be a bit dismissive, which is why I commented. Thanks for the info! – C Bauer, 8 hours ago































Kind of a frame challenge - is locking a human in a padded cell actually preventing harm to them? If you're going to allow the abstract possibility of future harm as motivation for a robot to use the 1st Law to lock humans up, it should be noted that taking away a human's freedom generally causes them harm to some extent, in the form of psychological damage - and the mental state of humans has been considered by robots as eligible for 1st Law protection in at least some of Asimov's stories.















































Zeroth law.

  A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If all the humans are locked in padded cells there isn't much procreation taking place. Ergo, the human race dies out.

Contrary to some other answers, I believe robot thinking is sophisticated enough to deal with future harm, just not with hypothetical harm. A human may bite his tongue, stub his toe, whatever, but it's not definite harm, so it doesn't require action to prevent it from happening.

Don't forget that - on realisation that the race would (not might) stagnate following the initial colonisation of the solar system and the subsequent politics - they nuked the planet (or allowed it to be nuked).















































Simply put, the definition of 'harm'.

Protection from physical harm can cause other kinds of harm. In some cases, protection from some form of harm can actually increase the likelihood of that type of harm in the future.

For example, protection from emotional harm can leave a person incapable of dealing with trivial challenges without severe emotional harm, which can propagate to actual mental harm, which can further propagate into harm to one's general health, which obviously compromises one's physical safety.

In the end, for a robot to be able to make determinations with regard to intervention in the full spectrum of human events, it must be capable of making a non-deterministic estimate of the probable outcomes of a range of potential actions (including inaction), and be able to make not only objective determinations of the probability and severity of harm, but also estimates of the subjective PERCEPTION of various types of harm. It must be able to do this continuously, in real time as well.

Because of the complexity of the problem, the simplest way to mitigate it is to restrict the problem domain by restricting the capabilities and responsibilities of the robot.

If a robot is designed to control the opening and closing of a sliding door, software can be defined which can make very reliable estimates of the potential outcomes of its actions, because its actions are limited to either opening the door or closing the door.

However, if our doorman robot is watching and listening to everything, and trying to parse everything going on around it, it may not be able to reliably determine whether it should open or close the door, given the totality of the situation. For example, if a couple are in an argument, and one of them gets up to storm out of the room, should the robot open the door, or would it be best to keep them in the room to solve their dispute? Is this person a danger to themselves or others if they leave? Will the other one be a danger to them if they stay? How will all of this affect their relationship? Will opening the door cause social harm because of the appearance of the person attempting to leave, compared with the social norms and apparent prejudices of those on the other side of the door who would witness the event?

You can further restrict the problem domain by restricting the inputs. So now our robo-doorman can only perceive that a person is approaching the door, and can determine the point at which, if the door is not opened, the person is likely to come to physical harm, based on their velocity and the properties of the door. Sure, the robot may not be very much help in saving a relationship, but it will predictably be able to keep you from walking into the doors like William Shatner in a Star Trek blooper.

All of this means that the robot must either be able to approach or exceed our capacity for what we call 'thought', or it must be limited to the extent that its shortcomings are less than its strengths. If neither is possible, then that task is probably better left to a human.
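
As a rough sketch of what "restricting the problem domain and the inputs" could look like in code, the toy Python below models only the narrow door controller described above: its inputs are a distance and a speed, and its only actions are "open" or "stay closed". The class name, the velocity-based rule, and the timing threshold are invented for illustration, not drawn from the stories.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        distance_m: float   # how far the approaching person is from the door
        speed_mps: float    # how fast they are moving toward it

    class DoorController:
        # A deliberately narrow robot: its only inputs are distance and speed,
        # and its only actions are 'open' or 'stay closed'. It cannot reason
        # about relationships, social harm, or anything outside that domain.

        def __init__(self, open_time_s=2.0):
            self.open_time_s = open_time_s  # how long the door takes to open

        def decide(self, obs):
            if obs.speed_mps <= 0:
                return "stay closed"    # nobody approaching: no collision possible
            time_to_door = obs.distance_m / obs.speed_mps
            # Open early enough that the person cannot walk into the door.
            return "open" if time_to_door <= self.open_time_s * 1.5 else "stay closed"

    door = DoorController()
    print(door.decide(Observation(distance_m=1.5, speed_mps=1.4)))   # open
    print(door.decide(Observation(distance_m=12.0, speed_mps=1.0)))  # stay closed

Within this tiny domain the robot's estimates are reliable precisely because everything it cannot perceive has been excluded from its responsibility.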






– Mitch Carroll


















        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "186"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });






        budgiebeaks is a new contributor. Be nice, and check out our Code of Conduct.










         

        draft saved


        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fscifi.stackexchange.com%2fquestions%2f198655%2fwhat-prevents-asimovs-robots-from-locking-all-humans-in-padded-cells-for-the-hu%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        86
        down vote













        The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e they must act to prevent harm when they see the harm about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



        By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].






        share|improve this answer










        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.














        • 8




          It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
          – BlackThorn
          yesterday






        • 3




          I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
          – Dranon
          yesterday






        • 4




          @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
          – NKCampbell
          yesterday








        • 3




          @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
          – bgvaughan
          22 hours ago






        • 4




          @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
          – Graham
          21 hours ago















        up vote
        86
        down vote













        The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e they must act to prevent harm when they see the harm about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



        By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].






        share|improve this answer










        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.














        • 8




          It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
          – BlackThorn
          yesterday






        • 3




          I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
          – Dranon
          yesterday






        • 4




          @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
          – NKCampbell
          yesterday








        • 3




          @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
          – bgvaughan
          22 hours ago






        • 4




          @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
          – Graham
          21 hours ago













        up vote
        86
        down vote










        up vote
        86
        down vote









        The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e they must act to prevent harm when they see the harm about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



        By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].






        share|improve this answer










        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e they must act to prevent harm when they see the harm about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



        By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].







        share|improve this answer










        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        share|improve this answer



        share|improve this answer








        edited yesterday









        Mike Scott

        48k3151200




        48k3151200






        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.









        answered yesterday









        PMar

        40113




        40113




        New contributor




        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.





        New contributor





        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.






        PMar is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.








        • 8




          It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
          – BlackThorn
          yesterday






        • 3




          I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
          – Dranon
          yesterday






        • 4




          @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
          – NKCampbell
          yesterday








        • 3




          @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
          – bgvaughan
          22 hours ago






        • 4




          @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
          – Graham
          21 hours ago














        • 8




          It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
          – BlackThorn
          yesterday






        • 3




          I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
          – Dranon
          yesterday






        • 4




          @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
          – NKCampbell
          yesterday








        • 3




          @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
          – bgvaughan
          22 hours ago






        • 4




          @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
          – Graham
          21 hours ago








        8




        8




        It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
        – BlackThorn
        yesterday




        It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
        – BlackThorn
        yesterday




        3




        3




        I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
        – Dranon
        yesterday




        I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
        – Dranon
        yesterday




        4




        4




        @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
        – NKCampbell
        yesterday






        @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
        – NKCampbell
        yesterday






        3




        3




        @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
        – bgvaughan
        22 hours ago




        @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
        – bgvaughan
        22 hours ago




        4




        4




        @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
        – Graham
        21 hours ago




        @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daniel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
        – Graham
        21 hours ago












        up vote
        8
        down vote













        My understanding of it was that the typical Three Laws robot interpreted the First Law to mean "Nor, through inaction, allow a human being to come to harm right here and now when the robot is within sight of him and can tell what's obviously about to happen if the robot does not choose to interfere." That's different from locking up the human today just because it is theoretically possible that he might suffer some sort of accidental injury or infection or other misfortune tomorrow. (Or at some much later date.)



        To put it another way: Near as I can recall, on those occasions when we saw a robot refuse to comply with an order to go away and leave the human alone to do whatever he was currently doing, that usually meant the Second Law was being subordinated to the First Law because of the robot's perception of immediate danger to a fragile human body. But if such immediate danger was not present, then the Second Law required the robot to turn around and go away whenever instructed to do so. The solid fact of "The Second Law applies to this order I am receiving right now" overrode anything so abstract as "But if I leave today, a First Law problem involving physical harm might arise tomorrow . . . or the day after . . . or at some later date . . . who knows?"



        So if some robot tried to lock everyone up for their own good, the Second Law could be invoked by ordering the robot to forget the whole silly idea.






        share|improve this answer

















        • 2




          "Little Lost Robot" involved highly speculative harm ("you might accidentally stay here too long") overriding the Second Law.
          – Kevin
          yesterday












        • @Kevin I remember the story -- the missing robot only had the first part of the First Law in his positronic brain -- but I don't remember the exact bit you briefly referred to. Could you be more specific? (I do remember the way he convinces a lot of other robots that the First Law does not require them to commit suicide in a futile effort to protect the life of a man who seems to be threatened by a falling weight. I saw the robot's point -- self-destruction would simply mean the other robots were breaking the Third Law without enforcing the First in the process.)
          – Lorendiac
          21 hours ago












        • The original purpose of the robots was to operate alongside humans in an area subject to low-level radiation that might harm the humans after prolonged exposure but would destroy the robots almost immediately. If the robots could have trusted the humans to look after themselves, then there would have been no need to modify the First Law in the first place.
          – Kevin
          20 hours ago










        • @Kevin Ah. I didn't recall the exact rationale for why a few robots had been built that way in the first place. I've now refreshed my memory of the first part of the story. It looks like ordinary First Law robots only panicked on those occasions when gamma rays were deliberately being generated near a human body. A threat "here and now," as I said in my answer. It looks like those robots didn't do anything about such abstract possibilities as "after I leave the room, some silly human might start generating gamma rays with that equipment, and this could gradually impair his health."
          – Lorendiac
          20 hours ago










        • I think CASA may be staffed by robots. They think the best way to prevent aeronautical harm is to prevent people from ever getting off the ground. I'm not being snarky, they've come right out and said so.
          – Peter Wone
          18 hours ago















        up vote
        8
        down vote













        My understanding of it was that the typical Three Laws robot interpreted the First Law to mean "Nor, through inaction, allow a human being to come to harm right here and now when the robot is within sight of him and can tell what's obviously about to happen if the robot does not choose to interfere." That's different from locking up the human today just because it is theoretically possible that he might suffer some sort of accidental injury or infection or other misfortune tomorrow. (Or at some much later date.)



        To put it another way: Near as I can recall, on those occasions when we saw a robot refuse to comply with an order to go away and leave the human alone to do whatever he was currently doing, that usually meant the Second Law was being subordinated to the First Law because of the robot's perception of immediate danger to a fragile human body. But if such immediate danger was not present, then the Second Law required the robot to turn around and go away whenever instructed to do so. The solid fact of "The Second Law applies to this order I am receiving right now" overrode anything so abstract as "But if I leave today, a First Law problem involving physical harm might arise tomorrow . . . or the day after . . . or at some later date . . . who knows?"



        So if some robot tried to lock everyone up for their own good, the Second Law could be invoked by ordering the robot to forget the whole silly idea.






        share|improve this answer

















        • 2




          "Little Lost Robot" involved highly speculative harm ("you might accidentally stay here too long") overriding the Second Law.
          – Kevin
          yesterday












        • @Kevin I remember the story -- the missing robot only had the first part of the First Law in his positronic brain -- but I don't remember the exact bit you briefly referred to. Could you be more specific? (I do remember the way he convinces a lot of other robots that the First Law does not require them to commit suicide in a futile effort to protect the life of a man who seems to be threatened by a falling weight. I saw the robot's point -- self-destruction would simply mean the other robots were breaking the Third Law without enforcing the First in the process.)
          – Lorendiac
          21 hours ago












        • The original purpose of the robots was to operate alongside humans in an area subject to low-level radiation that might harm the humans after prolonged exposure but would destroy the robots almost immediately. If the robots could have trusted the humans to look after themselves, then there would have been no need to modify the First Law in the first place.
          – Kevin
          20 hours ago










        • @Kevin Ah. I didn't recall the exact rationale for why a few robots had been built that way in the first place. I've now refreshed my memory of the first part of the story. It looks like ordinary First Law robots only panicked on those occasions when gamma rays were deliberately being generated near a human body. A threat "here and now," as I said in my answer. It looks like those robots didn't do anything about such abstract possibilities as "after I leave the room, some silly human might start generating gamma rays with that equipment, and this could gradually impair his health."
          – Lorendiac
          20 hours ago










        • I think CASA may be staffed by robots. They think the best way to prevent aeronautical harm is to prevent people from ever getting off the ground. I'm not being snarky, they've come right out and said so.
          – Peter Wone
          18 hours ago













        up vote
        3
        down vote













        Since the "merger" of the Robot universe and the Foundation universe reveals that robots manipulated and dominated human history for thousands of years, in a very real sense the galaxy is their padded room and most of Asimov's works in this "unified universe" take place inside that padded room.



        We just can't see the walls.






share|improve this answer

edited 3 hours ago
answered 10 hours ago
tbrookside
727211

        • 1




          This is the correct answer. In the Asimov stories, the view is that it isn't good for us to know we are being controlled as we need a sense of free will, so the super intelligent robots work in the shadows.
          – axsvl77
          4 hours ago















        up vote
        2
        down vote













        Since you don't specify that you are requesting an "in universe" reason ... I think it's important to remember that the three laws are just a story device. Asimov (wisely) is quite vague about how they are implemented, as he is about many technical details. And "I should just lock all the humans in a padded cell for their safety" would result in a rather limited storyline.



        Now, in universe, there are many "judgment calls" inherent in applying the three laws (and in fact the inherent ambiguities often result in important plot elements for the stories). The robots apparently have to appeal to their own programming instead of an external authority to resolve these ambiguities.



        But I think we have to logically assume that the more obvious judgment calls (like, say, should I just lock all the humans in a padded cell immediately for their safety?) were already addressed in development and testing of the robots, or they never would have been put in general use or production at all.



        In other words, the designers of the robots, in addition to addressing whatever other bugs they had to address (e.g. hmm, if the human is dead it can't suffer), would have simply programmed safeguards against that sort of result.
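As a purely hypothetical illustration of such a safeguard (nothing like it is shown in the stories, and every class and field name below is invented), a design-time veto could be as blunt as this Python sketch:

    # Hypothetical design-time safeguard: veto any plan that confines a human
    # without a concrete, immediate threat. Invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        description: str
        restricts_human_liberty: bool
        immediate_threat_present: bool

    def passes_safeguards(plan: Plan) -> bool:
        # Confinement "for their own good" with no present danger is rejected outright.
        if plan.restricts_human_liberty and not plan.immediate_threat_present:
            return False
        return True

    assert passes_safeguards(Plan("pull a human clear of a falling weight", True, True))
    assert not passes_safeguards(Plan("lock everyone in padded cells", True, False))

Whether the real fix is a hard veto like this or simply careful weighting, the point stands: so obvious a failure mode would have been caught long before production.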






share|improve this answer

answered yesterday
GHolmes
1291

New contributor
GHolmes is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.

        • 3




Actually there's a set of short stories in which Asimov specifically showed, in great detail, what happens to a robot whose three laws had been modified or weighted against each other in different scenarios. I don't think it's right to call it a story device; it's pretty baked into the core of his series.
          – C Bauer
          yesterday










        • @CBauer almost all of the conflict stems from, or is somehow (significantly) related to, the three laws and how they are misinterpreted, creatively interpreted, too literally interpreted, have their interpretation tampered with, and so on, by robots. Since they are the driving factor behind the story - or plot, if you will - they are a plot device. Don't confuse "plot device" with "a contrived plot device" - you can have a well thought out and well crafted, entirely internally consistent reason for the plot to progress. And the three laws of robotics are an oft-cited example of that.
          – vlaz
          15 hours ago










        • @vlaz Fair point, I guess I was considering the phrasing to be a bit dismissive which is why I commented. Thanks for the info!
          – C Bauer
          8 hours ago















        up vote
        1
        down vote













Kind of a frame challenge - is locking a human in a padded cell really protecting them from harm? If you're going to allow the abstract possibility of future harm as motivation for a robot to use the First Law to lock humans up, it should be noted that taking away a human's freedom generally causes harm in its own right, in the form of psychological damage - and the mental state of humans has been treated by robots as eligible for First Law protection in at least some of Asimov's stories.






share|improve this answer

answered yesterday
Cubic
242210

                up vote
                1
                down vote













                Zeroth law.



                A robot may not harm humanity, or, by inaction, allow humanity to come to harm



                If all the humans are locked in padded cells there isn't much procreation taking place. Ergo, the human race dies out.



Contrary to some other answers, I believe robot thinking is sophisticated enough to deal with future harm, just not with hypothetical harm. A human may bite his tongue, stub his toe, or whatever, but that is not definite harm, so it doesn't require action to prevent it from happening.



Don't forget that - on realising that the race would (not might) stagnate following the initial colonisation of the solar system and the politics that followed - the robots nuked the planet (or allowed it to be nuked).






share|improve this answer

answered 14 hours ago
mcalex
38125

                        up vote
                        0
                        down vote













                        Simply put, the definition of 'harm'.



                        Protection from physical harm can cause other kinds of harm. In some cases, protection from some form of harm can actually increase the likelihood of that type of harm in the future.



                        For example, protection from emotional harm can leave a person incapable of dealing with trivial challenges without severe emotional harm, which can propagate to actual mental harm, which can further propagate into harm to one's general health, which obviously compromises one's physical safety.



                        In the end, for a robot to be able to make determinations with regard to intervention in the full spectrum of human events, it must be capable of making a non-deterministic estimate of probable outcomes of a range of potential actions (including inaction), and be able to make not only objective determinations of probability and severity of harm, but also estimates of the subjective PERCEPTION of various types of harm. It must be able to do this continuously in real time as well.



Because of the complexity of the problem, the simplest way to mitigate it is to restrict the problem domain by restricting the capabilities and responsibilities of the robot.



                        If a robot is designed to control the opening and closing of a sliding door, software can be defined which can make very reliable estimates of the potential outcomes of its actions because its actions are limited to either opening the door, or closing the door.



However, if our doorman robot is watching and listening to everything, and trying to parse everything going on around it, it may not be able to reliably determine whether it should open or close the door, given the totality of the situation. For example, if a couple are in an argument, and one of them gets up to storm out of the room, should the robot open the door, or would it be best to keep them in the room to solve their dispute? Is this person a danger to themselves or others if they leave? Will the other one be a danger to them if they stay? How will all of this affect their relationship? Will opening the door cause social harm because of the appearance of the person attempting to leave compared with the social norms and apparent prejudices of those on the other side of the door who would witness the event?



                        You can further restrict the problem domain by restricting the inputs. So now our robo-doorman can only perceive that a person is approaching the door, and can determine the point at which if the door is not opened, the person is likely to come to physical harm, based on their velocity and the properties of the door. Sure, the robot may not be very much help in saving a relationship, but it will predictably be able to keep you from walking into the doors like William Shatner in a Star Trek blooper.
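As a purely illustrative aside (nothing like this appears in the stories, and every name and number below is an invented assumption), a doorman restricted to those two inputs reduces to a few lines of Python:

    # Hypothetical restricted-domain doorman: the only inputs are distance and
    # approach speed, and the only outputs are "open" or "stay closed".
    def should_open(distance_m: float, approach_speed_mps: float,
                    door_open_time_s: float = 2.0, safety_margin_s: float = 0.5) -> bool:
        """Open when the person would otherwise reach the door before it can finish opening."""
        if approach_speed_mps <= 0:  # standing still or walking away: keep the door closed
            return False
        time_to_door = distance_m / approach_speed_mps
        return time_to_door <= door_open_time_s + safety_margin_s

    print(should_open(distance_m=3.0, approach_speed_mps=1.4))   # True: open now
    print(should_open(distance_m=20.0, approach_speed_mps=1.4))  # False: plenty of time

Within that narrow domain the estimate is reliable precisely because there is nothing else for the software to get wrong.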



All of this means that a robot must either be able to approach or exceed our capacity for what we call 'thought', or it must be limited to the extent that its shortcomings are outweighed by its strengths. If neither is possible, then that task is probably better left to a human.






share|improve this answer

answered 22 hours ago
Mitch Carroll
1

New contributor
Mitch Carroll is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.





















