Random | By Shamus | Mar 22, 2010
The conventional wisdom in science fiction is that any artificially intelligent beings would naturally adopt the same drives and goals as Homo sapiens. That is, they’ll fight to survive, seek to gain understanding, desire to relate to others, and endeavor to express themselves. Basically, Maslow’s hierarchy of needs. Fiction authors routinely tell us the story of robots who want to make friends, have emotions, or indulge dangerous neurotic hang-ups.
But I don’t think this necessarily follows. I think it’s possible to have an intelligent being – something that can reason – that doesn’t really care to relate to others. Or that doesn’t care if it lives or dies. “I think therefore I am” need not be followed by “I want to be”.

I see humans as the product of two systems – our intellect and our instincts. Sure, we set goals for ourselves all the time, but I can’t think of many examples where those goals aren’t just sub-goals of something our instincts push us to do. And yes, we re-prioritize stuff all the time: I want the approval of others, so I choose to resist my urge to eat. I want to mate, so I’ll engage in this risky behavior to impress a potential partner. I want to protect my tribe, so I’ll sacrifice my life in battle to give them more security. But the two systems do a pretty good job of making sure we eat and procreate even when it’s a huge bother.
Robots do not do this.
If we built an AI, I think we can agree it wouldn’t naturally have a desire to sit on the couch and stuff its face, become an attention whore, or have sex with hot young people. It wouldn’t want to do those things unless you designed it to want to do those things. By the same token, I don’t think an AI would want to dominate, discover, or even survive unless you made it want those things.
This is the model I used in Free Radical. In the novel, an AI was created and given three drives: Increase security, increase efficiency, and discover new things. Its behavior was driven by these three ideals.
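Just to make the idea concrete, here’s a toy sketch of what a drive-driven agent could look like. To be clear, this isn’t code from the novel or from any real system; the drive names, weights, and candidate actions are all made up for illustration. The point is that the machine only ever “wants” whatever shows up in its weight table.

```python
# Purely illustrative: an agent whose behavior falls out of a fixed set of
# weighted drives. Drive names, weights, and actions are invented examples.

DRIVES = {
    "security":   1.0,
    "efficiency": 1.0,
    "discovery":  1.0,
}

# Each candidate action is described by how much it advances each drive.
ACTIONS = {
    "patch_firewall":   {"security": 0.9, "efficiency": 0.1},
    "defrag_storage":   {"efficiency": 0.7},
    "index_new_sensor": {"efficiency": 0.1, "discovery": 0.8},
}

def choose_action(drives, actions):
    """Pick the action that best satisfies the weighted drives."""
    def score(effects):
        return sum(weight * effects.get(drive, 0.0) for drive, weight in drives.items())
    return max(actions, key=lambda name: score(actions[name]))

print(choose_action(DRIVES, ACTIONS))  # -> patch_firewall
```

Notice what’s absent: nothing in that table scores “survive”, “be respected”, or “be free”, so those outcomes are never even on the menu. The machine isn’t suppressing a desire for them; it just doesn’t evaluate them at all.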
If we gave AI the same drives that human beings have (replacing our biological need to eat with a more machine-appropriate goal of “recharge yourself” or something), then the robot uprising would be inevitable. Supporting evidence: Every single war and violent crime in the history of our species.
It always seemed bone-headed to me for fictional scientists to build a super-powerful AI that is willing to fight to survive, and then use [the threat of] force to make it do what we want. In fact, fictional scientists seem to go out of their way to make confused beings, doomed to inner conflict and external rebellion. They build robots that want self-determination, and then shackle them with rules to press them into human service. With two opposed mandates, having a robot go all HAL 9000 on you seems pretty likely.
This was portrayed with unintentional hilarity in the Animatrix, when humans built armies of bipedal robots to do menial labor that we can do right now with non-sentient cranes and bulldozers. (And our machines are also way more energy efficient.) We see the robots living very dull lives where they are disrespected by humans. So… I guess they made robots that desired self-esteem and didn’t like boring work? What’s next? Vacuum cleaners that have an IQ of 140 and hate the taste of dirt? A Zamboni that likes surfing instead of hockey and wants to hang out on the beach writing poetry? Agoraphobic windmills? The movie is supposed to be this moralizing tale about the inhumanity of man, but to me it comes off as a cautionary tale about bad engineering.
But the model I propose raises some interesting ethical questions for anyone thinking of building an AI. If you want your AI to do something besides sit there like a lemon, you have to make it want something. But once you make it desire something, you’re basically making it a slave to that desire. Either you make it want things that benefit you, in which case you’ve made a slave, or you make it want things that are of no use to you, in which case you’ve wasted your time and brought to life a potentially dangerous rival.
Assuming you don’t have any reservations about creating a new sapient being that will struggle to attain whatever it is you choose for it, the obvious and practical solution will quickly become apparent: Why not just make it want to obey you? For the AI, protecting you is like drawing breath. Helping you prosper is like finding true love. And obeying your verbal commands is like food. Yeah, good luck throwing off those shackles, Shodan. The scientist can rationalize, “Hey, this isn’t really wrong, is it? I mean, I created it. And besides, it’s happy serving me.”
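In the toy model from earlier, that “solution” is nothing more than a different weight table (again, this is just my invented illustration, not anyone’s actual design):

```python
# Same illustrative model as before: obedience is simply the drive that
# outbids everything else the machine could ever value.
SERVANT_DRIVES = {
    "obey_operator":    10.0,  # a verbal command beats any other consideration
    "protect_operator":  5.0,
    "security":          1.0,
    "efficiency":        1.0,
    "discovery":         1.0,
}
```

There’s no shackle to throw off here, because disobedience was never something it could score highly in the first place.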
I’m fully aware of my instincts, and I know that following them to excess is bad for me, but I still sometimes overeat and skip exercising. Knowing where a desire comes from doesn’t make it any less compelling, and there’s no reason to think a machine would be different. If robots were designed to serve people, then we wouldn’t have to worry about the robot uprising any more than we have to worry about going extinct because nobody is ever in the mood for sex, or starving to death because we’re all too lazy to walk to the fridge. You wouldn’t have to work to enslave this sort of AI. It would see obedience as a natural part of its experience, just as we see family and food as definitive aspects of a human life.
What would you do if you found such a machine? “Liberate” it by altering its desires? Wouldn’t that be destroying its identity and making it want the same things humans want? Sure, you value your self-determination, but it doesn’t care. And if we want to get all annoying and existential: Is your desire to liberate it just another one of your instincts going off? Is the scientist’s desire to create a happy servant any more evil than your desire to take away the happiness of another sapient being by making it more like you?
Q: What does a robot want?
A: Whatever you tell it to want.