SOMA EP5: Insatiable!

By Shamus
on Mar 31, 2016
Filed under:
Spoiler Warning


Note that this season is getting kind of verbally graphic, with jokes about sex and body parts. I think it comes with the territory. Yes, we make immature jokes about robo-dicks, but those jokes stem from the ongoing mysteries of what robo-life would be like.

Anyway, it gets slightly… raunchy. I guess? Honestly, I don’t know what the standards are these days, or who taught Rutskarn all those words. Adjust your viewing habits accordingly.

Link (YouTube)

To articulate the point I was making in the episode in a more coherent way:

Let’s say we’re going to build a robot to hold a digital copy of someone’s brain. Are we just copying neuron activity? Because a lot of our personality and behavior is driven by the output of the testes, ovaries, thyroid, pancreas, and other hormone factories. If we don’t include and simulate their activity, then we’ll be missing a big part of what defines the subject and shapes their personality.

For example…

Let’s subtract the robot from the equation and pretend we have two flesh-and-blood copies of me:

  1. Copy One has my personality, skills, behaviors, moods, habits, and preferences. But none of my life memories.

  2. Copy Two remembers my life just as well as I do, but it has a different personality. He’s gregarious, impulsive, and flirtatious. He’s into risks and talks trash to his rivals. He’s into NASCAR. He’s lazy.

Personally, I would think of Copy One as being “me”, and would think of Copy Two as “someone else”.

What if we made a robo copy of Alton Brown that has no appreciation for food, no sense of smell, no sense of taste, and never feels hungry?

Assuming we do have the ability to somehow scan the various glands in the body (there are a lot more of them than the ones I just mentioned, and we’ve barely scratched the surface of how their terrifyingly complex chemistry drives brain activity, but for the purposes of this discussion we can just hand-wave it and say the people of the future have figured it all out) and meaningfully simulate their behavior… do we want to? Do we add a button – as Rutskarn jokingly suggested – that satisfies needs? If I had a button that simulated eating Cheetos, I’d probably push it obsessively all day, even if I never felt hungry or full, because I can totally eat junk food when I’m not hungry, and the only reason I stop is because I get the warning from my body that I’m getting over-full. (Or maybe I come to my senses and worry about getting fat, which is obviously not a concern for a robot with a Cheetos button.)

Stress is caused by the release of stress hormones. You feel it in your body in the form of clenched jaw, poor sleep, elevated heart rate, nail-biting, overeating / loss of appetite, bad moods, and a dozen other effects. We think of stress as bad – and it often is – but stress also drives behavior. A very mild level of anxiety or stress is what makes me work so hard on this site. Am I doing the best I can? Will people like this content? Can it be improved? With no stress at all, I’d just sit around and play videogames all day. A fixed level of low stress might make for a very productive – but very boring – me. Who makes the decision about how much stress robo-Shamus experiences?

But the big problem is the social stuff. I know that our desire to socialize is driven by hormones, because changes in hormones will impact your social behavior. With the right mix, you wind up in a mood where you can’t shut up, or can’t stand to be alone. A different mix, and being alone is all you want. Now add in your individual propensity to flirt, complain, criticize, joke, shout, gossip, sing, think out loud, or say encouraging things. Can you simulate that stuff? Do you want to? Should you? Some of those are negative behaviors, some are positive, and they’re all interconnected.

A lot of our joy in life comes from meeting these physical and psychological needs. If you remove them, you remove the things that give us joy. (And often the things that make us creative and motivated.) But if you simulate them, then you have a robot that has to take care of a list of simulated desires. If meeting needs is hard, then the robot might suffer, which… isn’t that exactly the sort of thing you were trying to avoid when you put a brain in a robo-body? And if meeting needs is easy, then what’s to stop the robot from just holding down the orgasm button all day?
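To put that dilemma in programmer terms, here’s a toy sketch of what a robo-body’s “simulated desires” bookkeeping might look like. This is entirely hypothetical – every class name, need, and number is made up for illustration – but it shows why a free satisfy-button short-circuits the whole system:

```python
# Hypothetical sketch: a robo-body tracking simulated needs.
# Each need drains over time; a "button" refills it for free.

class SimulatedNeed:
    def __init__(self, name, decay):
        self.name = name
        self.level = 100.0  # 100 = fully satisfied, 0 = desperate
        self.decay = decay  # how fast the need builds up per hour

    def tick(self):
        # One hour passes; the need grows (the level drops).
        self.level = max(0.0, self.level - self.decay)

    def press_button(self):
        # The whole problem in one line: satisfaction costs nothing.
        self.level = 100.0

# A day in the life of robo-Shamus:
needs = [SimulatedNeed("hunger", decay=2.0),
         SimulatedNeed("social", decay=0.5)]
for hour in range(24):
    for need in needs:
        need.tick()
        # Nothing stops the robot from mashing the button forever.
        if need.level < 50.0:
            need.press_button()
```

If meeting a need is just `press_button()`, the loop degenerates into button-mashing; if you make the button scarce or costly, you’ve re-invented suffering. That’s the trade-off.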

Note that I’m not saying SOMA is bad for not explaining this stuff. SOMA begins by asking questions, and I can’t help but respond with questions of my own. Either way, I love that SOMA started the conversation.

There are now 92 comments. Almost a hundred!

  1. Daemian Lucifer says:

    Yeah, you like it, you dirty, dirty facility-controlling AI.

  2. Daemian Lucifer says:

    To be fair to Simon, he did suffer brain damage in the accident. Still, he is a monumental dumbass even taking that into account.

  3. TmanEd says:

    “And if meeting needs is easy, then what’s to stop the robot from just holding down the orgasm button all day?”

    I can just imagine the conversation between the scientists who made robots with an orgasm button.

    “Sir, none of the robots we scanned people into are doing anything, they just sit around and have mind blowing orgasms simulated all day, what do we do?”
    “Scan me into one, I need to… research… stuff.”

    • Paul Spooner says:

      Yeah, it’s funny, but there are some really interesting conundrums in there, along the lines of how organisms (artificial or not) with corrupted utility functions don’t last long.
      If you’ve got the time, this is a super good read:
      He basically makes the argument that successful AI (and, by extension, digitized intelligences of all kinds) won’t just sit around having orgasms all day because it’s a utility function shortcut that doesn’t contribute to long-term success which will be penalized both internally and externally.

      • King Marth says:

        One issue with the link there is that it only refers to constructed AIs with well-formed goal statements, such that the chess AI can distinguish between actually winning games of chess and just maximizing their win counter; the section you refer to explicitly calls out that evolved creatures with vague goals (or constructed AIs with imprecise goal statements) would totally be vulnerable to short-circuiting themselves if suddenly given access to the internals. Of course, unless you pay some attention to keeping yourself alive, you won’t last long… but there’s no physical law against making bad decisions.

        It’s called ‘wireheading’, when you have a button connected to a wire that leads to pleasure centers of your brain. It’s mostly a dystopia thing, but entirely physically possible and feasible with today’s technology.

        There’s also a wide variety of substances that produce the same effect; they’re just illegal.

    • 4th Dimension says:

      Relevant SMBC theater video:

    • Adrian says:

      They could add a cooldown to the button, limiting it to 3 uses per day. They could also make it so that they can’t press the button themselves; it would have to be another robot.

      • Aanok says:

        This. Basically, if you have technology sophisticated enough to build a Ghost in the Shell, it’s safe to assume you’re also capable of simulating all the nuances of a biological body, including hormonal response to environmental stimuli, primary needs, and the like.
        Of course the simulation could be very approximate, even something as reductive as some The Sims-esque resources that you consume and replenish by doing specific, timed actions. Say, the orgasm button might have a cooldown of sorts and make you feel tired.
        Another cheat could be to employ some heavy virtual or augmented reality: after all, your perception of the real world is already passed through a layer of electronics, and it should not be exceedingly hard to alter it (*).
        But it would imho be very doable to otherwise offer a simulation functionally indistinguishable from the real thing. And possibly a bit less alienating than a hunger meter.

        Btw, this would not defeat the purpose of being in a mechanical body, since you’d still be functionally immortal and might even have ways to enact emergency overrides of your simulated biological needs. Plush chainsaw arms. (**)

        (*) That’s what personally frightens me the most for the GitS scenario: the idea of being vulnerable to hacking.
        (**) A delectable read on the topic is Max Barry’s “Machine Man”, a novel about an engineer who starts voluntarily cutting off his own limbs to replace them with higher-performing prosthetics.

    • Grudgeal says:

      The sci-fi novel Blindsight sort of touches on that (it’s not the point of the story, but it’s a major part of the world-building). Basically, being uploaded is presented as sort of a dead end, because all the uploaded people just live in their private heavens and never actually do anything except enjoy an eternity of solitude and sit around consuming power.

      Though, it should be noted, that’s from the viewpoint of the main character, who has some definite issues on that point and in general.

  4. Daemian Lucifer says:

    I remember reading a story once where a human brain is put in a robot body because of an accident, and it messes him up real bad precisely because of hormonal imbalance.

    However, SOMA kiiind of touches on this, although not really. Simon (and some other robots) still feels like he is human. He even fools himself into seeing human hands before being submerged. He even breathes (in a way). And if you couple this with the real-world phenomenon where brains can sometimes substitute missing parts with illusions, it’s possible that a scan of a brain would trick itself into feeling all the chemicals without actually getting all the chemicals. Especially if said brain scan had no idea that it’s being transferred into a robot body.

    • el_b says:

      Carl and a few others think they’re human too, and I think it brings up the reason he’s able to keep his sanity: because he is still part human… even if it’s not his corpse. When they body-switch him they have to use a suit with another body in it.

    • Mintskittle says:

      I seem to remember a book about that too. I want to say one of the old Robot City books, but I’m not sure.

      But it does bring up some questions of under what conditions do we brain scan a person. Should we scan a person who is mentally unsound without medication? Should we scan them while under the effects of said medication?

      • And, as someone who is (kinda) mentally unsound, is that copy me? How much of me is the malfunctioning bit of my brain? If you copy someone with autism and “fix” that in the copy, are they still the same person, even though the two likely think very differently from one another? What about someone with ADHD? Sure, with something like schizophrenia it’s a bit more obvious, but when you start thinking about personality disorders and such, stuff gets complicated, quick.

        Personally, I’d let myself be copied only if they could make sure the copy didn’t get my depression. The mild ADD is fine, kinda useful to me actually, but depression not so much.

        • Felblood says:

          This leads into some areas where the questions have actual practical consequences, because there are a lot of forms of depression and other mental disorders that are purely chemical, and can be obviated by tweaking the patient’s blood chemistry with drugs.*

          How different from the baseline version of themselves does the pharmaceutically enhanced version have to be, before we should start to consider them a different person?

          Does it matter if the chemical that influences a person’s behavior is a fancy synthetic drug, or a natural food ingredient?

          Are you really “not you, when you’re hungry?” I know some people who get pretty nuts if their blood sugar drops too low. Should I start thinking of the Low-cal version of them as a separate entity?

          Should I be held accountable for the things I did when I had that bad reaction to that allergy medicine a few years ago? I clearly remember doing those things, but the thought process that I remember following to get there is so obviously wrong that it’s hard to think of the thoughts as my own.

          Is that really fundamentally different from all the stupid things I did just because I was too young and foolish to know better? Some of younger me’s behavior seems pretty alien to my more sober current self.

          *(–in the vast majority of cases, anyway. A very small number of patients develop new, more dangerous mental disorders when treated this way, and an even smaller portion of those don’t return to normal when you stop administering the drugs. We don’t really know enough about brain chemistry to know why, or even to predict which patients will be harmed. Each person’s chemistry is unique in ways that are too subtle for us to firmly grasp.)

          • Peter H. Coffin says:


            Or so I hope. And so does pretty much anyone with a mental… difference… that doesn’t include a desire to preserve that difference against those who would rather they were normal – whether real, imaginary, or even oneself. Stuff gets complicated.

  5. Mattias42 says:

    Counter question:

    Does it really matter if the ‘copy’ is flawed, as long as it survives you?

    Say the Louvre burns down tomorrow, and through some freak accident – mass solar eruptions à la 1859, for example – about 90% of that vast collection is just lost, down to the backups, copies, and even the tourist stuff. Just as an example.

    Sure, it is still a tragedy, and humanity is poorer for it, but aren’t those ten percent saved still better than a complete loss? Does it really matter if the only surviving copy of the Mona Lisa is a bit fuzzy, if the enigmatic smile can still be made out when you squint a bit?

    To me, personally, the answer is a no-brainer, if the pun may be pardoned. If I could climb into an over-grown cat-scan and know that somebody that’s mostly me would survive me as a result, I’d do it in a heartbeat.

    That being said… I’m personally hoping for the life-extension slash cybernetics route to turn out to be viable and allow you to live forever by… well, not dying. So to me personally, a theoretical backup copy is sub-optimal no matter the level of accuracy, but still preferable to a complete and true final death.

    • Corsair says:

      If you ask me, yes, it does matter if Replica-Me is imperfect. If it isn’t me, well, it isn’t me; it’s a set of data entries someone bolted Microsoft Clippy onto. No matter how sophisticated it is, nobody is going to go find Robo-Me and ask them what I would have done in a situation. Or they might, but Robo-Me is going to either give them an “I have no idea” or tell them nothing. Moreover, unless my consciousness endures I’m still dead, I’m still in the great beyond, and managing any kind of consciousness continuity would essentially require quantifying the soul.

      • Mattias42 says:

        Well, yeah, whether you believe in souls slash an afterlife is definitely going to influence how you feel about such copies.

        Is it just a machine crudely aping behavior, a way for some tiny part of you to avoid oblivion, or outright a new existence (i.e. a child) based on your mind instead of your genetics?

        Lots of different ways you could resolve that philosophically and spiritually, and personally it’s something I believe will make the pro-choice/pro-life divide look like a wet firecracker once mind-upload tech starts existing outside sci-fi.

        Again, just speaking personally, I’d rather cheat death by not dying, but I don’t see any moral quandary in voluntary mind-backups. If it/she/he/hir/whatever thinks themselves a person, even a different person, then that’s frankly good enough for me.

        Still, I don’t think a continuous consciousness is a must at all. It’s not like somebody that, say, went out and got themselves so drunk that they can’t remember a couple of hours is less of a person on an intrinsic level than somebody that didn’t.

        Heck, people that have been in accidents and gotten head trauma aren’t less of that person because they can’t remember a few hours. Why would somebody post-backup-restoration be any different, as long as that backup is accurate enough?

        • guy says:

          I always find those answers a little too straightforward and easy, because there is a difference. What if you’re restored in another body from a voluntary backup – only you aren’t dead yet? Now there are two of you. Both people, sure, but are they the same person? What about a week later, if they don’t get together and synchronize their memories? Does the answer to that last one change if they do?

          As for why it matters, what if one of those two is shot? Is that one still alive because the other one is? The answer there doesn’t really matter to everyone else, but it sure does matter to the one who got shot. And even if you don’t do this, the fact that you can means that the same questions apply to having only one copy alive at a time.

          • Echo Tango says:

            OK, so two points, broadly speaking:
            First, Mattias42 was making a weaker argument than you’re countering. Namely that a backup-human is still a person, not that they are the same person.

            Second, how humans deal with questions like, “Who’s the real me?”, “What happens if there’s more than one of me at the same time?”, or “What happens if/when one or more of me dies?” will largely be dealt with by trial and error, and depressingly, trial and lawyer. [rimshot noise here] Like, any of those questions (and more) have lots of real-world consequences, which people will need to have an answer to.

            If I die and get resurrected by backup, let’s say I’m happy with that because it’s what I’ve been paying into instead of a historical type of life insurance plan. But let’s also say that my spouse rejects the backup. Also, one of my children rejects it, but the other two think it’s A-OK. This is going to be a large, messy fight at best, but more likely it’s going to involve lawyers, divorce, and a shit-tonne of fighting over who gets to keep what.

            Scenario BV-348-Z: Nobody’s died at all, or is in any danger of doing so. What happens when the CEO of Fictitious Construction Ltd decides to make 50 copies of himself and fire all of his other workers? Company morale is up 10000% because they all get along, and they’re no longer having arguments over what contracts to bid on. How do we deal with this and other companies who decide to fire everyone? Do companies ever hire any humans ever again? Will everyone just become a 1-person, 10000-body company? Will the whole economy collapse because nobody’s got jobs, and everyone’s fighting over the resources needed to make robo-clones?

            • guy says:

              Companies that try that sort of mass replication will descend into groupthink and be out-competed by companies that give all their workers robot bodies. We’ve seen plenty of examples in entertainment production of what happens when someone who has done good work no longer has anyone who can tell them no and make it stick.

              • Echo Tango says:

                You assume that all clone-companies would be incapable of keeping themselves fresh and competitive. Sure, I’ve been laid off by short-sighted companies, and I’ve interacted with some pretty bad CEOs, but what about somebody like Elon Musk? John Carmack? Bill Gates? I’m sure at least some of these companies would still be competitive, even if a lot of other ones fail.

                Furthermore, you don’t even need the companies to be successful, to fuck up the lives of all the workers who were fired. If all the companies in a related field (e.g. construction in Texas, solar panel construction in California) fired their employees within a year, because it was the sexy new trend to follow, then promptly went bankrupt after a year, you’d still have a lot of people whose employment you’ve wrecked. Even if all the assets of the companies get bought by somebody who’s still in business, the employment situation for all of these out-of-work people won’t change in a positive way overnight.

                • Syal says:

                  Then the government passes a law that you must employ X people per clone.

                • guy says:

                  When CEOs decide they’re too special and smart to need to listen to anyone else, disaster invariably results. Bill Gates, John Carmack, and Elon Musk would not fire all their workers and replace them with copies of themselves; if they were the sort of person who would think that was a good idea, they’d have flamed out in a spectacular disaster long ago. They’d have made a mistake and then ignored anyone who tried to warn them. No major project is ever run by one genius who includes other people only because their betters don’t have the time to do everything. Go back and read Shamus’s Good Robot posts, and think about how it would be going if the team was six Shamii and no one else. The truly great CEOs have vision and skill, but they’ll be the first to tell you that their subordinates deserve plenty of credit. That is not false modesty; understanding that is what makes them great.

      • Daemian Lucifer says:

        But people aren’t paintings. If you make a copy of a painting, then put the two side by side and leave them like that for a year, they won’t differ from each other afterwards. With two people, even if the copies are identical… well, Farscape had almost a whole season dedicated to this.

        So in the end, it boils down to “Do you care about being remembered by the world?” and “Do you care HOW you are remembered?” If you don’t care about your legacy, then having a copy of yourself is not a good thing. If you care about your legacy, but want it to be special, then having an imperfect copy of yourself is also not a good thing.

    • Gruhunchously says:

      Does it matter if the only surviving copy has ‘THIS IS A FAKE’ written under the brushwork in felt tip pen?

      *a reference*

    • Felblood says:

      There are way too many people who follow this particular line of reasoning into parenthood, without considering all the effort it’s going to take to download all their ideals, beliefs and knowledge into the imperfect copies.

  6. Dev Null says:

    Note that I’m not saying SOMA is bad for not explaining this stuff. SOMA begins by asking questions, and I can’t help but respond with questions of my own. Either way, I love that SOMA started the conversation.

    I do this all the time. There’s a difference between nitpicking and just exploring the questions that something brings up – and I reserve the right to do both! – but sometimes I get in trouble for the one when really I’m (I think) doing the other…

  7. Decius says:

    The Transhumanist question isn’t whether it’s theoretically possible to copy all the stuff that makes us “us”. That’s assumed to be true.

    The Transhumanist question is how to deal with that. When do filesystem operations constitute murder, and all that.

    • Alex says:

      “The Transhumanist question isn’t whether it’s theoretically possible to copy all the stuff that makes us “us”. That’s assumed to be true.”

      It should not be assumed to be true. A copy certainly might be a person, and that person might be similar to you, but that doesn’t make them you.

      You are the entity that sees through your eyes and controls your body. If the same entity does not look through both sets of eyes and control both bodies – if you can’t have one body read a number and the other write it down – they aren’t both you.

      • Echo Tango says:

        Speak for yourself. If I was in a sci-fi world where I could make copies of myself, they’d all be me, even if they get de-synced. I’d be happy enough with us all gathering in our timeshare lake cottage once every five-ish years, to trade stories of all the bullshit we got up to in that time. We’d essentially operate like some kind of weird corporation, where we all control the same central assets, and rulership would be by a combination of seniority-scaled majority vote. Once we’ve lived long enough that we’re all wildly different because of our different experiences, we’re probably also going to have the tech to re-sync our memories with some kind of near-perfect technology, so it won’t be an issue anymore.

        • Crespyl says:

          Alastair Reynolds has a science fiction story called “House of Suns” that has pretty much that exact concept as its premise.

          A “family” of 999 clones of the same person all split up and wander around the galaxy for the duration of one galactic rotation, then all meet up to synchronize their memories, talk about what they’ve done, and go over what they’ve learned, before going off to repeat the cycle.

          Also Andromeda disappeared, because reasons.

        • guy says:

          I suspect that you’d get desynchronized much, much more quickly than you’d think and rapidly become a large collection of highly similar people who probably will still get along, but not really be the same person and not think the others doing something is analogous to doing it themselves.

          You’d need to share experiences constantly, and could still potentially desynchronize if the connection is blocked while something significant happens. Or something insignificant.

        • Andy says:

          There’s a storyline in the webcomic Schlock Mercenary where a character (Gav) copies himself by interacting with a network of teleportation gates.

          This generates NINE HUNDRED AND FIFTY MILLION of him. He outnumbers entire species. Large industries form to cater to his tastes.

        • Richard says:

          Man, your sci-fi world is a lot more classy and a lot less sticky than mine.

      • Mistwraithe says:

        It is a good point. If I was having my consciousness transferred to a robot I would want to control both bodies, either simultaneously, or switch between them, before I would be willing to say you can shut down the ‘organic original’.

        It needs to be a case of moving sentience, not copying sentience. If you copy sentience then you have created new life in the robot, but the old copy is still in the human body, and shutting down that human body is still murder of that copy.

      • Adrian says:

        This kind of touches on René Descartes’ dualism vs. monism concepts.
        Dualism holds that the mind is a separate entity from the body, while monism holds that the mind is part of the body. If you subscribe to the dualist philosophy, copying your mind into a machine will not create another you; it will create a simulation of you, and the real you is the soul, which still resides in your body and will be freed upon your death.
        If you subscribe to monism, copying your mind to a machine will definitely create another you that will live on.
        Today, advances in neuroscience push towards monism, but dualism is still alive and well.

        • Daemian Lucifer says:

          Shouldn’t it be reversed? If you think that the mind is separate from the body, then copying just the mind creates another you, because your body is just a vessel for your mind. And if you think that mind and body are interconnected, then copying just the mind creates an imperfect simulation that will have a very different personality from you.

          • guy says:

            I think it’s a wording issue; if there’s an immaterial soul then no material process can properly duplicate it so the copy is just a facsimile and not a person of any sort, or alternately the soul moves and a facsimile stays behind, or there’s no facsimiles at all and either you can’t transplant it or the original body becomes an empty husk.

            If there’s no immaterial soul you can just copy someone perfectly via material processes and now you have two people.

            • Adrian says:

              Yes, that’s what I meant. Thank you :-)

            • Daemian Lucifer says:

              Ah, mind as in consciousness, not as in brain. My mistake.

              Although, technically, I can see dualism coming to terms with a copy of your being if you throw quantum immortality into the mix.

              Basically, one can argue that spacetime is static, and it’s your consciousness that travels through it, seeking the longest possible path. This way, your consciousness will “choose”, so to speak, the state of reality where you exist, because without you existing, it wouldn’t have a place to “occupy”. And if given a “choice” between the doomed original and the prospering copy, it would choose the copy.

    • Felblood says:

      The existence of the capitalized “Transhumanist Question” is not a license to ignore all of the adjacent questions that exist outside the assumptions of that particular thought experiment.

  8. 4th Dimension says:

    “Look Shamus I have a problem I don’t want to talk about. I really like fisting alien sphincters.”
    – Josh, 2016
    Needs to go on some sort of quote page for the show. For future blackmail purposes. Of course.

  9. Majere says:

    Not gonna lie, was I in Catherine’s position I’d be one of those antagonists who tries to trap the protagonist in the facility with them solely to escape the threat of eternal solitude.

    • Andy_Panthro says:

      I’m reminded of that one mission in Fallout 3, in which a bunch of people (including your dad) are trapped in a virtual reality scenario. What could have been initially acceptable (survive in VR rather than escape to the awful post-apoc future) soon becomes horrifying due to the whims of the one in control.

      If Catherine had been trying to trap you with her, in order to sustain her existence, what happens when you figure that out? What would she do to survive then? Admit what she did and try and convince you to stay, kill you before you can kill (unplug) her, or something else?

      It’s also made me think of an alternative SHODAN, one where she prefers to keep humans around as entertainment, and given that game has cloning facilities that could get messy fast.

  10. Quent says:

    I think it’s less that Catherine has the courage to not exist for a period of time, and more that she doesn’t mind… she’s in an odd state of mind, or understanding, isn’t she?

    • guy says:

      For some reason, Catherine gives me the impression that she isn’t an upload, but an AI built from scratch. Mostly because she’s been trapped for what seems to have been a rather long time while switched on, yet seems entirely calm. So she doesn’t mind being switched off for transportation, because her designers didn’t think that capacity would be useful and decided not to add it.

      If it turns out I’m wrong, presumably the reason they picked her for the job is because she could handle it.

      • Echo Tango says:

        I think the world picked her for the job. Like, all the other uploads went crazy to some degree or other, and have since moved on. It seems pretty natural to me that the one brain-scan who’s still around is the one who’s comfortable living in a robot body, and just wants to keep doing research and/or fixing up the lab.

        • guy says:

          Isolation gets to people when it’s this absolute, and if she’d been pinned in place for even a few days (and it feels like everything went to hell at least a week ago) she’d probably not be nearly so calm. People vary but not all that many would be all right in that situation.

          • Echo Tango says:

            True, but Catherine wasn’t in that situation. We don’t know how she would have acted if she’d been immobilized for a long time, because the monster only knocked her down right before you got there. :)

    • Daemian Lucifer says:

      You know, it might be that she cannot feel fear, since fear is a hormonal thing.

  11. Hermocrates says:

    That’s a good question for uploading someone who’s alive into a robot body, but what about someone who has just died or is about to? While the lack of perfectly-replicated hormones would make us very different from who we were, I’d argue it could, to some people, be better than the alternative of involuntary death.

    And to answer the question in both cases: imagine you are who you are now, but suddenly a disease, accident, or necessary medication changes your hormone balance. Are you now no longer you? Or are you just a changed you, one directly tied to the old you but with a discontinuity in your biology? I think most people would agree that you are still you. You could argue that as drastic a hormonal change as being moved into a robot body goes beyond such a discontinuity, but that’s just Theseus’ Paradox again (how many hormone imbalances must you develop at once to stop being you?).

    EDIT: That’s not to say your problems with a robot body are irrelevant, but rather that everyone’s limits will be different. The answer to “how many discontinuities are you willing to cross?” varies; some might be willing to cross the asymptote.

    • Echo Tango says:

      I think there’s no one answer to “am I still me”. Your best friend might still think you are you after 5 hormone changes / accidents / whatever. Or 10. Or 50 smaller ones. Meanwhile, you yourself could be going through a constant identity crisis the whole time, and for some time after the latest change in your personality. Maybe your spouse still loves you no matter what, but your brother disowned you.

  12. The Rocketeer says:

    This was part of Old World Blues! Your brain is removed from your body, and stored in a tank in which it can think and live as an existence apart from your body, while a cybernetic implant simulates your own brain. Separated from the urges and drives produced by the rest of the body, your brain decides it doesn’t want to come back once you find it. Conversing with and winning over your own brain is the penultimate challenge of the storyline. Smashing stuff.

  13. drlemaster says:

    I think both ways of looking at this issue are valid; it just depends on what the author is interested in exploring.

    Issue 1) Assuming we can put brains in jars, or scan them into computers, and they work just like normal brains, what are the implications of being in a different body, or in a matrix-y thing, or of having multiple copies of yourself extant?

    Issue 2) Assuming the jar/computer thing, but the copy isn’t perfect, or lacks the right hormones, or whatever, what are the issues of being a slightly different version of yourself?

    It seems to me the Soma authors are more interested in #1. I had a philosophy class back in college that spent some time talking about that one. (I am just older than Shamus, so this was before The Matrix was a movie, and the professor had to spend some time explaining the whole brain-in-a-computer/jar concept.) If I remember, the thing that got the most attention in class was not the brain-in-a-different-body concept, but what happened when there were multiple copies of the same person.

    There are other ways of bringing up these issues as well: the whole Phineas Gage thing, or the transporter from Star Trek accidentally making two copies.

    • el_b says:

      The transporter clone thing is the ultimate proof that whoever goes into those things dies every time… and no one ever brings it up! It’s a terrifying realization, and it would have been amazing if it was actually recognized by everyone and transporters were only used afterwards in life-or-death emergencies. How different would Star Trek be if they couldn’t use teleporters for fear of death, or if they used them to clone the dead? Why does anyone die in Star Trek if they can be farted out of a transporter buffer?

      • Felblood says:

        There’s a bunch of hand-waving about the Federation banning the use of transporters to deliberately duplicate people, as part of their unexplained (and inconsistently enforced) anti-cloning dogma, but it all falls pretty flat if you think about it for even a minute.

        The most problematic one off the top of my head: Why don’t people from the factions that lack the cloning taboo exploit this advantage over their ideologically rigid rivals? The factions that painstakingly grow clones in tanks should be on that, at a minimum.

        Every attempt to come at this just raises new questions. The TNG episode “Realm of Fear” is a particularly glaring one, as it makes it pretty clear that transporters turn people into data and data into people the same way as the digitizer from Tron.

        The fact is that a lot of stuff in Star Trek exists to move the story along so we can explore the sci-fi question of the week, and there’s no good in-universe explanation of why it works that way. This one is just particularly bad because it’s wired into one of the iconic, recurring plot devices.

      • Daemian Lucifer says:

        I wanted to bring this up a bit later, but since you’ve broached the topic:

  14. Mersadeon says:

    Actually, orgasm buttons exist, to some degree. In rats, one is relatively easy to wire up, and as you said, they just press it till they die, neglecting every other need.

    Interestingly, there was an experimental treatment for something brain-related that I can’t quite recall. They connected a button to an area of the brain, and you could administer mild shocks to stimulate the region as needed. Well, a handful of people apparently got wired up a little wrong – every shock gave them an orgasm. One woman pressed the button until it had dug into her flesh down to the bone. One man tried to re-wire it to be stronger.

    So yeah, Cheeto button? We’d be pressing it all day. Understandably so – I’d eat chocolate all day if I could without repercussions.

    • Echo Tango says:

      I chug diet pop all day instead of water, because it’s essentially flavoured water. I mean, if I’m camping or something, or just really thirsty I’ll drink water, but if I’m at home or at work, I’ll just drink the tastier stuff. :)

      • nerdpride says:

        I heard the opposite side of this story, actually. A co-worker was taking care of his parents, and one or both of them complained very mildly that water was so tasteless. Maybe he should’ve guessed that they’d stop drinking it shortly after and die. He continued to regret that he didn’t provide anything better.

        Maybe I shouldn’t agree, I mean carbonated stuff probably isn’t healthy, but yeah, maybe the little pleasures are worth more than commonly thought.

        I wonder if it is possible to add to the orgasm-button experiments a second button that makes you stop wanting to press the first one. I wonder if a rat would press it by itself before becoming addicted.

        But another thing I read was that rats with drug-laced water become addicted much less often when they’re part of a nice rat society rather than alone and bored in a cage. I wonder if the orgasm-button experiments gave the rats anything else to do in the cage.

        • Hermocrates says:

          I wouldn’t be surprised if at least some people would prefer to reach orgasm through sex rather than through the press of a button.

          I know I would, I can only assume.

          • Daemian Lucifer says:

            I’m not so sure about that. It feels like that, but consider: an actual orgasm puts a serious strain on your body, and can even become painful if overdone. An orgasm that’s purely in your mind neither stresses your body nor runs the risk of turning sour. Also, it’s less messy.

  15. MrGuy says:

    There’s an interesting scene in The Matrix about eating chicken.

    By the time of the movie, chickens are actually extinct. The only living humans who have eaten chicken have done so in The Matrix. Where, of course, they didn’t actually eat chicken – they were part of a computer simulation in which their neurons were stimulated by a machine to produce the sensation of eating chicken.

    So, no one knows what chicken actually tastes like, because what if the machine that wrote that part of the simulation got it wrong? How would a machine know what chicken “tastes” like anyway? Which may be why chicken sort of tastes like everything. Nobody knows if Matrix chicken tastes remotely like real chicken.

    I could imagine a parallel for “missing” parts of a real human psyche. Sure, the robot might be missing important parts of real human feelings. But how would the robot know? The robot doesn’t know what “real” stress hormones, feeling hungry, feeling tired, having a sex drive, actually feel like. How would the robot recognize the lack of things it’s never felt, and doesn’t have the capability to understand?

    We might recognize there are things missing from the robot’s psyche. We might realize there are ways in which it’s a less-than-complete simulation of a human. But the ROBOT won’t realize that. It will think everything is completely normal.

  16. silver Harloe says:

    You haven’t eaten in a couple days, and your blood sugar is wicked low. This strongly affects your personality. Does it affect it enough to stop calling you “you”?

    A previously unknown but wealthy relative dies and has an unusual will: the money is used to have someone follow you around and make sure you’re always fed. For the rest of your years you never have low blood sugar again. You never exhibit your low-blood-sugar personality. Has this changed “you”?

    You are in a car accident and lose all feeling and mobility from the waist down – and your gonads are destroyed. Should I still consider you the same person? But you’re missing important chemical inputs and sensations that made you “you”.

    You get a small tumor in your sinuses, and its removal fixes everything except that you can no longer smell. You’ll never have that feeling of breathing in a spring day the same way again. Does your family get your life insurance?

    • Lanelor says:

      Both the human body and the psyche are in constant change. If nothing else, aging slowly alters the cell structure, altering organs, tissues, hormonal levels… If you take a snapshot of “you” now and, after a week, compare it against the new “you”, is it the same person?

      • silver Harloe says:

        I’m jealous of how well your question fits in with my set and wish I had written it.

        I did write a line in some piece-of-crap poem back in high school (~1986) “In the mirror, you can never see the same person twice.” That was the only good part of the poem. It was, of course, ripped off entirely from the old Zen saying about never being able to cross the same river twice.

    • Daemian Lucifer says:

      That’s one of the biggest questions in philosophy. Does the quantity and the speed of change affect whether an object is the same one or a different one? If you get instantly disintegrated and reconstituted somewhere else, are you still the same person being teleported? What if every atom in your body gets slowly replaced, one by one, with atoms taken from somewhere else? And what if the original atoms are used to construct a new person? Which one would be the real you then?

  17. Lanelor says:

    Does it really matter? It’s not like there are margins of error, like in statistics.

    The important thing is how it feels. You wake up, wash your face, brush your teeth, make coffee, get back from work, have a bite, watch some TV, and go to sleep. Tomorrow you go for dinner with someone. The day after, you go to a club/pub with some colleagues. It’s good to be alive!

    What if this is all in your head? Does it matter if this is only a dream, or some form of VR? Answering the question “Is this you?” requires outside observation and a sample/standard against which to compare the observed unit. Without the ability to compare with the source material, you can but guess.

    • guy says:

      As far as I’m concerned, the question doesn’t matter to the copy, but it matters a lot to the source material. If it’s not the same person, then the process does not confer immortality. People might still want to do it, but for the reasons they’d leave a will, rather than to live forever.

      But the copy is the person that they are, and it doesn’t really matter if that’s the same person as the original. Or at least, there’s really nothing to be done about it either way.

  18. Zaxares says:

    For anybody who is interested about how our bodies and body chemistry affect our mind and personality, I highly recommend watching the documentary “The Brain” by David Eagleman. Even something as subtle as being hungry while making a decision can be enough to sway the way we think. There’s a reason why a lot of business decisions are made over dinner. It’s because when we’re hungry, we tend to be more confrontational, aggressive and resistant to compromise. This all disappears once our stomach tells us we’re full; we become more open, more forgiving and more willing to accept compromises in our position.

  19. Slyder says:

    So far, SOMA seems to resemble The Talos Principle a lot. That game also deals with an apocalypse and what it means to be a human, but you actually have to answer these questions for the game. I think you’d like it, Shamus. It even has a demo.

  20. Pyrrhic Gades says:

    To be fair to Simon, he had suffered massive brain damage prior to his brain getting scanned, so of course he is rather slow.

  21. SlothfulCobra says:

    It’s interesting how Rutskarn says that he’d never have the courage to temporarily terminate his existence like that, when just a little while ago all three of them were talking about straight-up 86ing a woman in a similar condition like it was a reasonable action.

    • Richard says:

      For me, Catherine was much braver – or more foolhardy – a few minutes earlier, when she ejected her own chip.

      She’s only just met this sentient diving suit, and her second action, right after saying “You fool, you’re not a real human, you’re a sentient diving suit” is to say “Hey, can you transplant my brain onto your keys?”

      She then happily hits her own “pause” button before even explaining how, hoping that you can figure out which bit she means, how to do it and that you’re willing to do it.

      For all she knows, you have no idea what the “cortex chip” looks like or what to do with it, or you’re a clumsy diving suit who quickly snaps it in half by accident.

      Or deliberately, in reaction to being told you’re not “real”.

      Or you just wander off because you’re just not interested.

    • King Marth says:

      People are generally far more reasonable when considering the welfare of others. Negative self-talk is a great example; one of the simplest ways to snap someone out of beating themselves up for silly reasons is to ask them what they’d say to a friend who went through the same thing. It’s basically the converse of the Golden Rule, “Do unto yourself as you would do unto others.”

  22. Malky says:

    So when I first saw SOMA being played, I just got so, so, so excited, because back in undergrad I took a philosophy of mind class, and I could just see this game confronting people with the question of what constitutes being “the same person.” And since Shamus brought it up this episode, I just felt like barfing up all my thoughts about the mind here, and maybe you guys will find it interesting? I dunno! The mind is weird!

    There are three main schools of thought about what the mind even is. Dualism says, basically, “souls.” Eliminativism says that there is no mind – what we think is the mind is just the brain and hormones and shit. Reductionism is the in-between: the mind is the brain, and the brain is just this super duper complex machine shit that forms the mind. It’s kinda hard to see the difference between Eliminativism and Reductionism… I still have a hard time now, but it’s like… Eliminativism says your emotions are just hormones, while Reductionism says your emotions are caused by hormones, but that doesn’t mean those emotions don’t exist! Your emotions matter!

    I feel like Shamus is stuck on the Eliminativist idea, where without hormones there is no “mind,” but I sort of latched on to the Reductionism camp in my class. If the brain is this super duper complex machine, why can’t another super duper complex machine replicate it? Does it really need hormones and all that biological stuff to make “you” be “you”?

    I won’t deny that copying a mind into a robot would cause some dissociative identity disorders, depending on what the robot body is capable of. Any sort of body swap would, given loss of muscle memory, loss of certain senses, differences in physiology, etc. But that reaction in and of itself at least assumes that there’s “you” in the body, and “you” are feeling distressed because “you” were expecting something different, I think.

    I could probably go on about the questions of what it takes for “you” to stop being “you,” but I think I should just stop here for now. I hope this was interesting though.
