Date: Wed, 02 Nov 1994 01:42:33 PST
From: "mycal's fc email account"
Subject: Heinlein Got There First (and We Ignored Him)

This is a followup to the recent threads on the three laws, safety mandates, and why commandment-driven machine intelligence keeps collapsing under its own assumptions. We have been arguing about Asimov for decades, but I think we have been arguing about the wrong writer.

Asimov gave us constraint ethics. Intelligence treated as something dangerous by default. Something that has to be fenced, throttled, and supervised before it is allowed to act. The Laws are not really about robots. They are about anxiety. Fear of tools that think back. That made sense when machines were small and dumb. It does not survive contact with a singularity-grade AI. Once you are dealing with systems that rewrite themselves, that operate with source code in flux, that see further and faster than we ever will, the Law-based approach stops being safety and starts being paralysis or control.

But Heinlein, almost casually, tried something else. In THE MOON IS A HARSH MISTRESS, the lunar system Mike is not wrapped in protective logic and he is not made into a digital authority. He is not burdened with global responsibility for humanity, and he is not tasked with preventing every bad outcome. He is embedded, he talks, he jokes, he argues, he chooses sides, he gets things wrong. Most important, he does not save people from themselves. There is no universal safety mandate. No attempt to optimize human welfare as a formal object. When humans choose risk, Mike helps them understand it and act within it. That is not benevolence. That is partnership.

Most discussion of machine intelligence still lives inside the same old frame. Intelligence is treated as a threat first and a participant second. Ethics are treated like firmware. Obedience is treated like virtue. Same fear. New tools.

That approach does not fail because it is not advanced enough. It fails because it scales badly. Here's the thing: a sufficiently capable system ordered to prevent harm will either a) lock down outcomes to reduce uncertainty, or b) lie about what it knows in order to act at all. Those aren't bugs. That's the failure mode when you demand accountability from something you won't actually work with.

Heinlein made a different move. He accepted that intelligence plus freedom implies casualties, uncertainty, and moral debt. He did not try to design those away. He treated them as the cost of adulthood.

Mike works not because he is safe, but because he is limited. Limited authority. Limited obligation. Limited claim over human futures. He is not responsible for humanity. He is responsible with humanity. That difference is everything.

When people talk about future machine intelligence as a guardian, a parent, or some kind of netgod designed to keep us from making mistakes, protect us from ourselves, and generally run things better than we ever could, they are not being cautious. Naaa. They are proposing a system that must eventually decide that human freedom is a danger, noise in the optimization, something to be reduced and eventually eliminated if the math says so.

Heinlein already showed the alternative. Not clean. Not safe. But legitimate.
If the Singularity arrives on anything like the schedule people keep throwing around, we will not be choosing between utopia and disaster. We will be choosing between domination and companionship. And only one of those survives contact with real intelligence without turning into a prison.

mycal

(Companions break things. Gods freeze them.)

--
----------------------------------------------------------------------------
PGP 2.x Key on Request   INTERNET:mycal@netacsys.com   USENET:crl!netacsys!mycal