SOMA EP9: Go Away. Nobody Loves You.

By Shamus Posted Friday Apr 15, 2016

Filed under: Spoiler Warning 105 comments


Link (YouTube)

So humanity is doomed. A few dozen[1] people are left alive in this undersea base. The surface is uninhabitable. Sooner or later these folks will starve, and that will be the end of the species[2]. Catherine gets the idea to save people by putting brain scans into a simulator and launching it into space.

Like I said in the show: To me this doesn’t save humanity. It might arguably create some new thing that’s just as interesting and important, but saving a couple dozen brain scans isn’t the same as having an ever-evolving population of reproducing organisms. The ARK is stagnation. The only good thing you can say about stagnation is that it’s preferable to oblivion.

But your average grunt-level worker – let’s call him Bob – has a problem. Bob doesn’t want to sit here in the dark, slowly starving to death. Or freezing to death. Or slowly going mad from being forever trapped in a small base at the bottom of the ocean of a ruined world, never again to feel the sun on his face or smell wet grass after a rainstorm. He wants some other option. Catherine is offering to scan his brain so it can live on, but Bob knows that after the brain scan, he’ll still be here in Sucktown.

Bob very much wishes he could live in this simulated world. And so he gets the idea into his head that there’s only ever one version of him in the world. If he kills himself, then the “real” Bob – or perhaps the “current” Bob – will be the one in the robot. I’d love to know how Bob’s mental model works, here. Does he think that he’ll shoot himself in the head, and then suddenly find himself in a robot or whatever?

It sounds like a strange idea to me, but that’s how he sees it. And to be fair, this metaphysical shit can be really tricky sometimes. It’s hard enough to consider this rationally when presented with various ethical dilemmas at the best of times. So when you’re half-mad and facing a lingering, hopeless death, it’s probably easy to bend your thinking in ways that will give you hope for the future.

Having said all that, this would make for an interesting thought experiment for the various Bobs in this undersea base. If Bob and Carl both agree that killing your meat body should make the copy into the “real” you, then Carl could test this hypothesis for himself. Once Bob is dead, go over to robo-Bob and tell him what happened. Ask him if the demise of his physical body impacted him in any way. Ask him if he remembers killing himself. I imagine that Robo-Bob’s answers really ought to give Carl something to think about.

(Yes, I’m aware that their goal is actually to kill themselves before the copy is up and running. I’m just playing around with the idea.)
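For the programmers in the audience, here’s a minimal sketch of why Carl’s interview should give him pause. It treats the brain scan as a snapshot copy – which is the premise the game itself runs on – and the Person class and memory strings are invented purely for illustration:

```python
from copy import deepcopy

class Person:
    def __init__(self, name, memories):
        self.name = name
        self.memories = memories

# Meat-Bob, up to the moment he climbs into the scanner.
bob = Person("Bob", ["joined the crew", "volunteered for the scan"])

# The scan is a snapshot: an independent copy of Bob's state at time T.
robo_bob = deepcopy(bob)

# Whatever meat-Bob does after time T never reaches the copy.
bob.memories.append("shot myself in the head")

print("shot myself in the head" in robo_bob.memories)  # False
print(robo_bob.memories[-1])  # "volunteered for the scan"
```

Robo-Bob’s last memory is walking into the scanner. Ask him whether dying affected him and he has nothing to report, because nothing that happened to the original after the snapshot exists anywhere in the copy.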

 

Footnotes:

[1] You know, I’m not really sure on the numbers, but it’s probably somewhere under 100.

[2] Assuming there aren’t any other humans living in a bunker somewhere. That’s certainly possible, but there’s no way to contact them. So for the purposes of this story, let’s go ahead assume this base contains everyone still living.




105 thoughts on “SOMA EP9: Go Away. Nobody Loves You.”

  1. Tizzy says:

    I get the sense that it would be a lot easier to simulate the whole brain rather than to pick and choose which parts to simulate. The latter requires a lot deeper understanding, I’d think.

    1. el_b says:

      Catherine mentions Simon is a flatter scan, more basic. Other than brain damage, that’s probably a good explanation for his learning difficulties. He can reason and is self-aware, but doesn’t take a lot of stuff in… leading to his reactions at the end. First time round I can get that he’d be surprised and upset at his AI being copied instead of transferred, but by then he should have got it; he had to be told like 4 times and still didn’t learn.

      1. Arthflaidd says:

        I think she was talking about the original scan she had access to; I highly doubt that would have been enough to recreate Simon to the degree he is in the game. Even contemporary scans from the 2100s aren’t capable of simulating a person – see the difference between Catherine’s original simulations and the Imogene Reed one she gets from the WAU. Obviously it is “filling in the blanks” to some degree, so I wouldn’t blame Simon’s, uh, idiocy(?) on the scan quality.

        I really think he is aware of the truth but just ignoring the implications, because frankly his situation is utterly hopeless. It’s a good balance of awareness and ignorance I guess…

  2. Ninety-Three says:

    I recognize that I’m in a minority on this, but I actually do think that if I brain-scan myself into a robot, that robot is me, and if my meat body is still alive, there are now two mes. Obviously, this is handwaving all the issues of whether it’s a perfect copy and what about hormones and itchy noses and and and.

    Put aside the brain-scan and pretend that instead we’re dealing with a malfunctioning Star Trek transporter that has beamed you up, then beamed out two copies of you, identical down to the nanoangstrom. Unlike a robot which is notionally an identical mind, these transporter clones are literally identical, in every way, so we must say that either both of them are you, or neither of them are you. “Neither of them” doesn’t make any sense, so they must both be you.

    Now let’s say that instead of the transporter beaming you up and spitting out two of you, you walked into the transporter, it scanned you, and beamed out an identical copy, without ever touching the original. In this situation there is technically an original, in that the copy is made out of atoms which didn’t exist a minute ago, whereas the original is made out of atoms that just walked through the door of the teleporter room. But the copy and the original are still identical; unless someone was watching and saw which walked into the room and which was beamed out by the transporter, there would be no way to tell which is which. Are they both you? I say yes, in every sense but the most uselessly pedantic.

    Now we bring it back to SOMA: you walk into the room, the machine scans you, and then it outputs an identical mind into a robot body. If we say that the brain scan is so perfect that it simulates hormones and anxiety and itchy noses (and your robot body has a nose to itch, we’re handwaving the logistics here), why is that different from a magic Star Trek machine building a human clone of you? The mind is the same, the only difference is the body. If you say the above clone is you, but a SOMA robot isn’t you, then your definition of “you” must be at least partially dependent on the body.

    1. King Marth says:

      Yes, but which you is you? If you’re unconscious at the time of scan, which body do you expect to wake up in? You will wake up in both, but do you expect to only experience waking up in one or do you claim to literally experience events from both sets of sensors without any physical mechanism to bridge the gap between bodies? You should probably give a 50-50 chance for which one you’ll follow… but what if a dozen copies are made while you’re unconscious? What if you aren’t sure how many copies will be made, or when they will possibly be made in the future from a stored scan? What if twenty copies are made and, before any of them wake up, ten of the copies are destroyed? What if the atoms of the source body are used to construct one or more copies?
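      Making the bookkeeping explicit shows how strange this gets. A toy sketch, under the one loudly-flagged assumption these questions are attacking – that before waking you should spread your expectation evenly across every body that actually wakes up:

```python
def wake_up_credence(total_bodies: int, destroyed_before_waking: int) -> float:
    """Chance of 'waking up as' any one particular surviving body,
    assuming uniform credence over survivors (a contentious model)."""
    survivors = total_bodies - destroyed_before_waking
    if survivors <= 0:
        raise ValueError("nobody wakes up")
    return 1.0 / survivors

print(wake_up_credence(2, 0))    # 0.5    -- you plus one copy: the 50-50 case
print(wake_up_credence(13, 0))   # ~0.077 -- you plus a dozen copies
print(wake_up_credence(21, 10))  # ~0.091 -- you plus twenty, ten destroyed first
```

      The model happily outputs a number for every scenario, including ones where the answer changes based on destruction events nobody ever experiences – which is a decent hint that it’s the model, not the arithmetic, that is broken.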

      Would you resent another body which is you making use of your resources, or spending time with your friends and family? Even if your current body rendered you incapable of doing those things? If one of you went to university for four years and the other you went volunteering in foreign countries for the same time, would they both still be the same you? What if one goes into space and the other is alone and miserable trapped under the ocean?

      Altering the definition of ‘you’ doesn’t actually resolve these problems. There’s no physical reason to expect two separate physical minds to not be different entities, even if they’re initialized as the same person.

      1. Ninety-Three says:

        Of course I wouldn’t be experiencing two minds simultaneously, there would be two different “me”s waking up, but at the moment of waking up we’d be identical. The clone and I start to diverge as we accumulate new experiences. I’m not the same person as I was four years ago, I’m not the same as I will be four years from now, and the me that spends the next four years working a day job will be different from the me that spends the next four years backpacking around Europe.

        I’m a different person than I was four years ago, but the me of today and the me of four years ago have more in common than the me of today and the me of eight years ago, and the me of today vs the me of yesterday are basically the same. There’s some fuzzy, hard-to-define point at which the clone becomes a meaningfully different person, but I’d rather not get into that, for the same reason I’d rather not get into questions like “How many trees make a forest?”

    2. Mephane says:

      It is for reasons such as the Star Trek transporter or the conundrum in the game that I subscribe to the notion that consciousness requires (among other things) a form of continuity. Just making a snapshot of the information state of the brain and then creating a copy, be it virtual or physical, technological or biological, would create a different entity; it would be identical, yes, but still not me, as in the conscious mind that I am.

      It is important to distinguish this notion of “me”, because it is more than just the sum of all my physical (and therefore informational) properties. Just like if you had a file and copied it: yes, they are identical copies, but they are now two distinct files, and modifying one does not affect the other.
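      Mephane’s file analogy maps directly onto the distinction programmers draw between equality and identity. A minimal sketch (the dictionary contents are invented stand-ins):

```python
from copy import deepcopy

original = {"name": "Mephane", "memories": ["boarded Pathos-II"]}
snapshot = deepcopy(original)  # same information, distinct object

print(original == snapshot)  # True  -- identical copies
print(original is snapshot)  # False -- but two distinct "files"

original["memories"].append("watched the ARK launch")
print(original == snapshot)  # False -- modifying one does not affect the other
```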

      For the same reason I believe that a Star Trek style transporter (“disassemble into particles, then reassemble elsewhere”) is equivalent to being killed, even if the new copy of you continues to live afterwards.

      However, for the conundrum in the game, I would act similarly, but not for the same reasons: I’d agree to the copy+upload, and then kill myself. I would know that killing myself changes nothing for the copy, won’t merge me into the copy or whatever; it’d just be to avoid some long and gruesome death of starvation or similar.

      1. Echo Tango says:

        What about using structure gel to try and repair everything? Like, figure out a way to cure the WAU-cancer, or reprogram it, then just start globbing everything up with goop. Rebuild and terraform the planet, to save the humans! :)

      2. Daemian Lucifer says:

        But what if stuff happens to you like in this game? You get into a machine, then you start perceiving things from a hundred years in the future, from the point of view of the copy. Then you copy yourself again and experience stuff from the point of view of that copy. Then you copy yourself a third time, but experience stuff from the point of view of the second copy. You still have perfect continuity of consciousness (even more so, because you never go to sleep, you never get knocked out). So is only the biological original the real you? Is only the copy you follow the real you?

    3. Smiley_Face says:

      I personally agree, in a way. If the copy is identical to me in all the ways that are important to me, then it’s me, and whichever one is original doesn’t particularly matter, from an objective perspective. However, both versions of me will naturally be concerned about their own particular instance in a way that they’re not about anything or anyone else, so from that self-perspective, the other me is me in some senses, but not in others. With that said, the sense in which the other me is not me is not particularly significant, and it does hedge against death somewhat; if one me dies and the other lives, then the survivor would recognize that they had survived, but also not survived, which seems strange.

    4. Daemian Lucifer says:

      You (and everyone else for that matter) should definitely watch Farscape. Specifically, season 3, from episode 6 onward. There, the main character, Crichton, gets doubled, both copies identical. They even play rock, paper, scissors with each other and keep drawing for hours on end. But both get separated afterwards, and not only get to live different lives, but also get perceived differently by the rest of the crew. It’s a brilliant piece of science fiction.

      1. Echo Tango says:

        Woo, Farscape!

        P.S.
        I think you mean science “fantasy”. :P

        Seriously, though, that distinction is too divisive.
        I prefer a more nuanced approach to how I identify my fiction. :)

        1. Nimas says:

          I love me some Frelling Farscape (bonus points that most of the cast is Australian and thus have better accents :D)

    5. AR+ says:

      Something worth noting in the replicator situation is that atoms do not have individual identity, so there isn’t really such a thing as these atoms vs those atoms if they are observably identical. This can be shown empirically by the way that massive particles’ wave-functions overlap and interfere, which is different from how they would interact if they were distinct particles.

    6. Anonymous says:

      I’m so tempted to jump in with observational data from meeting my own freshly-split dissociative identities, but I’d hate to turn a lovely discussion into something weird. I’d probably fail to convey it properly, anyway. It’s hardly any less surreal or confusing in real life than in thought experiments.

    7. Steve C says:

      The Star Trek transporter? You mean the one that is really a suicide booth?
      A good video that is highly relevant to the themes in SOMA:
      https://www.youtube.com/watch?v=nQHBAdShgYI

    8. Taellosse says:

      There’s a widely held philosophical position that no version of post-transporter you is actually “you” – that there is one or more observationally-perfect replicas of “you” running around, believing themselves to be you, but that “you” were annihilated when the transporter dematerialized you.

      This is actually somewhat supported by our present understanding of quantum mechanics (which post-dates the conception of Star Trek transporters by a number of years, it’s worth noting) – the only way within the bounds of currently-understood physics to make a perfect replica of a given object (live or not) is to destroy the original version in the process of identifying the exact status of all the matter on a quantum scale. It is then theoretically possible to replicate the precise matrix of matter, exactly as it was scanned, and create a perfect copy. If a living being, theoretically such a copy would believe itself to be the original, but it would necessarily be made of entirely new matter that was just in an identical state, at the moment of its creation, to the matter that had been scanned at the moment of its destruction.

      There’s multiple ways to look at this sort of question. On the one hand, we’ve got the argument that “you” are not a stable structure – over the course of time, all the cells (and therefore all the matter) in your body are replaced by new ones. “You” from 20 years ago has no matter in common with “you” from today. So what’s the philosophical difference between doing that over time and doing it instantaneously? Post-transporter “you” thinks of itself as you, so it is, right?

      But on the other hand, there’s the matter of the difference between equivalent information and identity. If I have 2 copies of the same book, from the same print run, they look the same, they contain all the same information. If they’re brand new and fresh off the press, it’s reasonable to suppose there’s no distinguishable difference between them. But they’re NOT the same book, are they? They can’t be, because I can put them next to each other, and see that they’re two distinct objects. They may be functionally identical, but they’re still different. And over time, they’re going to grow increasingly distinct, as they develop divergent wear patterns – this one’s cover gets bent, that one acquires a torn page, this one has some coffee spilled on it, that one gets some notes written in the margins, etc.

  3. The Rocketeer says:

    Is it just me, or is there no [1] footnote? Just [2] and [3]?

    1. McNutcase says:

      That’s a thing that happens. On the front page, there are [1] and [2] footnotes, but on the article’s own page, they magically turn into [2] and [3]. It’s no doubt some exceedingly recondite bug that’s not worth trying to fix. I used to think it was related to “read more” links, but this entry doesn’t have one of those…

      1. MichaelGC says:

        Aye – it happens when the footnote is in the first few lines of text. You’d think it would happen each time the footnote was early enough to also display on the main page of the site; that’d make some sort of sense somehow. But no: the previous SW post also had a footnote (late) in the first paragraph, but that showed correctly as [1] in both the post itself, and on the main page.

  4. Ninety-Three says:

    > If he kills himself, then the “real” Bob – or perhaps the “current” Bob – will be the one in the robot. I’d love to know how Bob’s mental model works, here.

    So this is not the way Bob thinks – Bob clearly believes in the idea of a single “real” Bob – but for my own, different reasons, I would also scan myself and commit suicide.

    Basically, I believe both the scan and my meat body are me. Once the scan is taken, there are now two “me”s. One of them gets to live in virtual reality utopia, and one of them is still in Sucktown. This is an improvement on the pre-scan situation, where 100% of “me”s were in Sucktown.

    Sucktown is still lame, and being stuck at the bottom of the ocean with nothing to do doesn’t really appeal to me, so I would commit suicide rather than wait for starvation. There is now one me left, the scan in the ARK. If Sucktown weren’t quite so bad, if I had Playstation 12 down there with me or something, I wouldn’t commit suicide, but I’d still go through with the scan, because two “me”s, one in virtual utopia and one in Alrighttown sounds better than just one me in Alrighttown.

    1. King Marth says:

      This makes more sense to me than your previous comment I picked at. I have a vested interest in the future enjoyment of copies of me, in much the same way I’m interested in the imperfect copies of me that make up my family, but once they’ve been created I recognize they’re distinct people.

      I’m reminded now of the Twilight Imperium Brotherhood of Yin faction made up exclusively of clones of a single person, who made their own cult.

    2. Warclam says:

      This is exactly how I look at it. As an added bonus, here’s how the “kill yourself before robo-self wakes up” plan might make sense without some kind of soul transferral.

      Meat-me is fucked. Absolutely, irredeemably fucked. The most it has to look forward to is death, because the alternative is becoming a WAU-zombie. Meanwhile, metal-me has a chance here. If there’s no hope for meat-me anyway, let’s make things as easy as possible for metal-me. If I’m the sort of person to be disturbed at the possibility of there being two mes, then disposing of meat-me will make metal-me more comfortable.

      So it’s not that meat-me’s “soul” is literally being transported into metal-me, it’s a simple equation of 1+1-1=1.

      1. Echo Tango says:

        So…metal-you is comfortable with murder-suicide? ^^;

        1. Warclam says:

          It’s better than watching meat-me rot, yeah.

  5. Wonderduck says:

    Totally off-topic, but DIECAST autoplays on the front page. If this is a bug, consider yourself informed! If this is a feature… ugh.

      1. Wonderduck says:

        Great, so it’s a known, but ignored, issue. Color me less than impressed.

        1. Matt Downie says:

          Did you try the ‘stop using Firefox’ fix?

          1. Daemian Lucifer says:

            One bad thing about that fix: Some programs are made to specifically work only on Firefox*. I know this, because I had to install Firefox on my father’s laptop so he could watch cable there. I know, it’s shitty, but them’s the breaks.

            *Also IE, but no one uses that.

        2. Echo Tango says:

          Install the VLC plugin for Firefox. Even if it’s shut off, it flibbles the right dambles, and makes the audio not auto-play.

        3. Steve C says:

          It’s a known problem with -a- version of Firefox, not this website. Firefox fixed it in a new version. Update (or downgrade) your Firefox and the bug will be solved.

        4. Galad says:

          Hey, if you haven’t found it yet, you can go to the ‘about:config’ page, and turn off the “autoplay” option, which is on by default – just search for autoplay in that Firefox config page. Hope this helps!

  6. Decius says:

    The true answer to the Ship of Theseus problem is that it is the same as Heraclitus’ river: The ship is not what it used to be when it becomes weathered, even before you replace a portion of it.

    It is meaningless to say that a brain scan of you that gets put into a robot is the “same person” as the person who got up out of the scanner, and it is also meaningless to say that the person who got into the scanner is the same person who drank the potion that was needed to get scanned. They have causal interactions, but aren’t identical.

    The real question here is at what points data are people. Clearly, when that data is encoded in synapses in a brain, destroying that data is murder. It’s less clear whether to hold a funeral for a corroded data platter.

    1. Tizzy says:

      After all, there is a Ship of Theseus paradox for humans already: it doesn’t take that long for all the cells in your body to be replaced.

      1. Daemian Lucifer says:

        It takes quite a bit of time, considering that even now that we know neurons do get replaced sometimes, that process takes years.

        1. silver Harloe says:

          It’s a recursive Ship of Theseus thing, though. I don’t mean the ones that get replaced. I mean, the ones that haven’t been replaced yet. They are still living cells, in a constant state of replacing bits of themselves with nutrients from the blood stream. Heck, even the bit where DNA unzips and rezips doesn’t just happen during mitosis. So when a neuron has undergone enough routine maintenance to have replaced all its own bits with different atoms but plugged into the same spots… same neuron?

          1. Arthflaidd says:

            Same data? I think the main issue is seeing consciousness as a series of pieces where each piece is important rather than the “emergent entity” of all of those pieces working together as the continuity people here believe. You can replace any or all of the pieces, what matters is the gestalt entity continues to be largely the same (or changes only in so small a way it is unnoticeable, which is pretty much what happens to people on a day-by-day basis anyway).

            In the context of the ship it may not physically be the same ship, but the concept of the, let’s say “Salty Bob”, is what matters, and would still be the same for the crew. They don’t care that after 20 years they’ve rebuilt the entire thing, it’s the same conceptual entity. Just like sim-Catherine considers herself to be as legitimate a person as the original, her concept of self really hasn’t been changed by the physical reality of her new chip-brain. Simon is a bit more disturbed, but he seems like an annoyingly introspective guy in general, and he still gets by rather well, all things considered.

  7. The Rocketeer says:

    Hey, guys, stop me if you’ve heard this one before:

    “Self” as a meaningful construct.

    HAHAHAHAAAAA!! Gets me every time!

    1. Veylon says:

      I know, but without weird concepts to ponder, philosophers would be bored. We wouldn’t get to argue about the line between alive and non-alive or consider whether p-zombies are a thing or not.

      1. The Rocketeer says:

        The line between alive and not alive? Is it harmed or healed by restorative magic? SOLVED.

        Also, probability zombies are a thing and also not a thing. SOLVED. Next!

        1. Daemian Lucifer says:

          What about constructs (golems, robots, and such)? They are neither harmed nor healed by restorative magic.

          1. Nimas says:

            Obviously it’s not alive if it isn’t healed by the power of ‘insert generic fantasy god here’! Maybe it’s just really good at pretending?

  8. Hermocrates says:

    The Ship of Theseus isn’t an analogy, it’s a thought experiment. It isn’t saying “if you do it piecemeal, it’s the ‘same thing,’ but all at once is a ‘new thing’.” It’s saying “here’s a new way to think about the difference between doing it piecemeal and all at once. Now, where do you draw the line? Can you?” Philosophers have struggled to come up with a single convincing answer, but everyone is going to think about it differently.

    As to questions about “how accurately do we represent me, including my anxiety or what have you”, I think the only moral way to go about that would be, assuming you want to upload into a computer/robit, to start you off with as accurate a simulation as possible, and then let YOU control which parts to shear off or tweak. Let you figure out what level of change you’re comfortable with, and then let you define that as “you.”

  9. Yummychickenblue says:

    Alright, so who’s excited for Reginald Cuftbert’s return? I for one can’t wait for him to be told he’s not a mercenary by the Brotherhood of Steel again.

  10. Clevername says:

    I’m not gonna get too deep into this cuz lots of people already have but I have to complain about your use of the term metaphysical. What “Bob” is proposing is that there is no metaphysical realm or spirit. Therefore a perfect copy of you is the same as the original. Just like a copy of a computer file is the same as the original. There is no way to tell the difference, because there is no difference.

    If humans have a “spirit” or some metaphysical nonsense like that then there is no way that the brain scan of Bob is the same as human Bob. But if we live in reality then if I kill you and then make a perfect copy of you at that exact moment then that is still just as much you as the original. Your consciousness doesn’t need to transfer over because suggesting something like that is to suggest the existence of something metaphysical about consciousness which there is absolutely no evidence for.

    Another way to think about this is that you right now could already be a perfect copy of yourself and you would have absolutely no way of knowing it. Maybe your entire life up to this point was actually lived by someone else and you are just a perfect copy of them. Hopefully I’ve explained what I mean. I’m not the smartest so I could be misunderstanding something here (hopefully that doesn’t sound sarcastic cuz I don’t mean it to be), but thats my interpretation of what the game is saying.

  11. Wraith says:

    > Like I said in the show: To me this doesn't save humanity. It might arguably create some new thing that's just as interesting and important, but saving a couple dozen brain scans isn't the same as having an ever-evolving population of reproducing organisms. The ARK is stagnation. The only good thing you can say about stagnation is that it's preferable to oblivion.

    The thing is, the Pathos-II crew know they’re doomed, and the ARK is their best proposal to stave off oblivion. And yes, it’s critically flawed in terms of stagnation, but I’m not sure in the same way you feel – when one spends the rest of eternity in an eternally limited and unchanging environment, inevitably everyone is going to get that nagging idea/memory into their heads that their world isn’t real. Which raises further questions about the ARK that the game never really addresses – is it possible to kill oneself in the ARK, or is everybody trapped there forever like some warped version of Purgatory?

    And these flaws demonstrate why the ARK is not the answer to humanity’s salvation despite it being the focus of the game’s plot – the WAU is. Which is why I didn’t kill the WAU in my playthrough, and think it’s the most foolish and short-sighted choice you can make if you do.

    Yeah, the WAU went a “little” off the rails of its programming and destroyed the Pathos-II crew, killing most of them. But the WAU is also actively attempting to recreate human life in some new fashion. And the monsters you face throughout the game are the products of these attempts with gradually increasing success…culminating in Simon 2.0 (ignore that the gameplay mechanics cause ludo-narrative dissonance by having the other WAU constructs be hostile to you, which makes absolutely no sense). The WAU has, for all intents and purposes, succeeded in recreating human life with Simon 2.0 as its model. And it’s a model that even has free will.

    The ambiguity of the WAU is one of the best parts of the game. It’s a pretty great villain/anti-hero. And one of the best parts about it is that the game never allows you to communicate with it, which builds upon the heavy implications that the WAU went off the rails in the first place because it has evolved a morality system that humans wouldn’t be able to comprehend properly.

    1. Echo Tango says:

      I also thought the WAU was the best chance of survival, but only in the long term. Like, eventually, it’d either upgrade itself to the point where it can make really good humanoids and also start terraforming the planet, or else it’d mutate into something that could do that. Which is effectively the same thing (unless it mutated into a degenerate form). However, I decided to kill it, because at the time of playing the game, I had not yet thought about the situation enough to come to that conclusion. :S

  12. nerdpride says:

    As much as I like this setting, it’s starting to get a little boring now. Same kinds of weird holes about how the brain works. I really hope the Simon dude doesn’t keep saying that he’s not human anymore. Could use an injection of something interesting. Although it sounds like we’re just starting the generic, repetitive enemy part of it now, ugh. Similar to the feeling where I left off in Amnesia.

  13. “Why would you make a machine brain to feel anxiety, fear, belligerence, etc.?”

    Because you want to simulate being a human, in this case, to maintain the illusion of being flesh and blood. I’d look at the problem from an engineering angle: Why would you make a robot that’s human-shaped? The answer: So the robot can interface with the world that’s been built for humans. It’s not as efficient, it’s bloody difficult, but it’s not seen as unreasonable. For all our flaws, the simulation of those flaws is what would make the experience seem “real.” I don’t itch my nose constantly, and I might not even miss it if I didn’t have to. We rarely think about our biological functions (unless that’s an obsession of some kind) until they become urgent. We also don’t know if there are other subroutines that basically distract the “mind” of the robot away from thinking about things that would be too distracting, like itching, sneezing, etc. We’ve already seen that something in these machines allows them to not perceive themselves as robots, so who knows how far down that path the machine’s structure goes?

    We also don’t know if there are any safeguards in place. The Asimovian model of a positronic brain was, in essence, a mechanical approximation of a human brain – vastly improved in the areas of speed, recall, efficiency, etc. – but it was restrained by the Three Laws. Obviously nothing that restrictive is in place here, but some background processes could be at work.

    Also, I thought it was mentioned that one hope was that these scans would be merged with their organic counterparts (if they still lived) at some point. How better to get that to mesh with a human than to have human perceptions to write into their memories? Otherwise, you might as well watch a video of their actions for all the meaning it would have.

    Assuming you have this amazing simulational ability in your robots, the purpose of all those “flaws” is that you want the robot to be about as human as possible. They’re not being made as remote-control machines, so there’s the same risk of them doing something unpredictable, being anxious, going nuts, etc. as you’d have with any human. You might as well ask why this place relied on unreliable humans in the first place. They’re all just a bundle of nerves, prone to screwing everything up.

  14. Kibbles says:

    I think the way y’all debated the Ship of Theseus is close to, but not quite the same as, what the brain scans are experiencing in this game. It’s less like the ship was changed or replaced; it’s more like: the Theseus goes down in a wreck, but the captain of the ship survives because he was in port at the time. When he finds out the Theseus has gone down, he commissions another ship that can do all the same verbs as the Theseus, but is also different in every design feature. Is that ship the Theseus? The captain in charge is exactly the same, but it’s not the same ship anymore.

    Not gonna think about it too much on my end just for fear of existential anxiety, but just figured I’d put in my two cents regarding how to frame it.

    1. Warclam says:

      Seems right to me.

    2. Joe Leigh says:

      Ok, this needs to be said by someone. The “Ship of Theseus” means Theseus’ ship. Theseus is an ancient Greek dude (same guy that went through the Minotaur’s labyrinth); his ship doesn’t have a name as far as I know.
      So, if you replace all the parts of the ship, is the ship still Theseus? No, because Theseus is not a ship. Case closed, everyone.

  15. Oh, and all this talk of clones and duplication isn’t new. CGP Grey recently posted a video on the time-worn concept of the trouble with transporters in science fiction.

    I mean, given that you don’t have any evidence that you are the you that went to bed last night, a lot of the discussion about consciousness in Soma seems kind of moot.

  16. Mintskittle says:

    For the audio log right at the beginning of the episode, I’m wondering why they decided to use this version of it and not the one from the Theta trailer.

    http://www.youtube.com/watch?v=AH4y2tZei2o

    In the trailer, Catherine sounds thoroughly distraught that the test subjects keep killing themselves, while the in-game version is only somewhat miffed. It just doesn’t sound right to me.

    1. Ninety-Three says:

      It does make her seem terribly callous, but I thought that was the point. Not that she’s supposed to be callous just to be a callous jerk, but she has some very transhumanist views that lead to her not really caring if a meat body dies once it’s been scanned.

      1. Alex says:

        The impression I got wasn’t that she is callous, but that she just doesn’t get people. And really, I can imagine someone like that assuming a stint on a research station in the middle of the Atlantic Ocean is just what she needs.

  17. Thomas Adamson says:

    It’s more the “teleporter” problem than the “Ship of Theseus” problem.

    When you “teleport”, is the teleported version of you “you”, or just a copy of you while the original “you” is destroyed? What about if the teleporter malfunctions and a copy of you is created while the original “you” remains? Which is the “real you”?

    The only difference here is that your mind – and for the purpose of this fiction let’s go with the neurological scan and simulation version of recording a brain – is placed in control of a piece of mechatronics. Or is in a ‘Matrix’ style simulated world.

    The question I would ask Shamus is: If I scanned your brain, then destroyed your real brain and replaced it with a bit of wet-hardware that was running a perfect simulation of your brain, are you still you? What if I did it gradually “Ship of Theseus” style replacing single lobes each time?

    1. That was the work-around for one of Heinlein’s Lazarus Long stories. When they were “rejuvenating” the titular character, making him young again, they swapped out his brain bit by bit so that there was never not a living brain running Lazarus’ body.

  18. Micamo says:

    To me, SOMA depicts a dystopian setting (even before the comet falls) where we have mind-uploading but no understanding of consciousness or continuity of self.

  19. drlemaster says:

    There is a short philosophy lesson disguised as a cute little 8-page sci-fi story called Where Am I? by Daniel C. Dennett. If you google it, some college has the PDF online, but I am not sure if I should post the link directly. It mainly deals with putting your brain in a vat but still having it in control of your body, and gets into brain scans and divergent versions of yourself on the last couple of pages. In this story, both the brain version and the computer version of the narrator think of themselves as the real version, and of the other version as a brother or twin.

    Applying the concepts of this story to SOMA: once you were scanned, the bio folks would go on as before, not perceiving anything differently, thinking of themselves as the real version. The computer version, when woken up in the ARK or a robot, would also think of itself as the real version. Now, I could certainly see the folks on the station being suicidally depressed about their general situation, but there would not seem to be any advantage to immediately killing yourself. In fact, if I were in such a situation, I think I would be pretty motivated to make damn sure the ARK was fully functional and safely in orbit before considering killing myself. Meat-me might be in a very bad place, but meat-me would want to make sure his computer-twin got off safely.

  20. Nick Weaver says:

    This is quickly diving into Ghost in the Machine/Shell territory.

  21. Jabrwock says:

    I suppose the whole “transfer me via suicide” depends on your beliefs.

    One, that your consciousness can be duplicated. Like down to the point where the Ship of Theseus built from duplicate parts is still mostly the same ship. Enough that it’s compatible, but missing that “essential” element.

    Two, that there is a second part to what makes you “you” that is un-copyable, and that this is somehow transferable to the duplicate ship via the mechanism of suicide.

    I can’t think of a comparable religious equivalent. Unless you count ancient Egyptian, but even then they didn’t really contemplate the idea that you could duplicate the body down to the nitty gritty, which is why you had to preserve the body so the soul had some place to return to.

    On the Star Trek angle, this makes the whole “you get destroyed and then rebuilt” a bit less problematic, because your “soul” transfers from the destroyed you to the newly created you. Otherwise, yeah, I can see why this is so dark. Because you’re copying you, and then murdering the “old” you…

    1. Tizzy says:

      I have a simpler motivation for killing yourself after the brain scan, but I don’t know if it works for the story.

      My argument would be that if you want to preserve the “real” you, you would have to kill yourself before your post-scan experiences turned your scan into an obsolete copy of you. Not the real you anymore.

  22. DGM says:

    When Shamus and Josh talked about making AI copies of their minds and having them start a blog, did anyone else suddenly think of Max Headroom?

  23. Daemian Lucifer says:

    I don’t think Catherine was odd. So someone killed themselves after THE WHOLE OF HUMANITY WAS WIPED OUT. Why should that one act of selfishness faze her when her whole race is dead now?

    1. Sunshine says:

      That’s easy: having someone you know commit suicide is going to hit you harder than a million people you don’t know. And while the extinction of humanity is bigger than that, it’s still going to affect Catherine more to have someone in front of her take their own life.

      And she sees this project as the way to preserve humanity (or at least, it’s a way to build a memorial or fling a light into the future, as TV tropes puts it) and that could be endangered by this.

      Also, as noted in the video, she’s a bit odd with people. It’s possible that she would just see the news from the surface as theoretically terrible, but not as resonant as what’s happening around her.

    2. Jabrwock says:

      One death is a tragedy. A million is a statistic.

      It’s hard to wrap your mind around the rest of humanity being wiped out. With your co-worker, you have a personal connection.

      1. Daemian Lucifer says:

        Sure, but that means that she had no one on the surface who she cared for. No mother, father, children, no family or friends who she talked to before the apocalypse.

        1. Jabrwock says:

          Which is possible. Maybe she’s an only child, her parents are already dead, and she was so focused on her work she wasn’t emotionally involved with anyone outside the project.

  24. Daemian Lucifer says:

    If you have general anxiety and filter everything through it, then build a robot that operates exactly like that, is that robot still you? What if you then take anti-anxiety meds that completely change your brain chemistry, are you then still you? Is the robot now more you than you? And what if you then build a robot that operates exactly like you are operating now, with changed brain chemistry, is that robot you?

    This can get even weirder. What if you rebuild your whole body, brain and all, from biological components? Which of the four is the real you now? The original, the original on meds, the first copy, or the second copy?

  25. Daemian Lucifer says:

    “We will shoot you in the face while doing the scan”

    Actually, that would work. That’s basically how transporters in fiction operate. But in order for that to happen, you have to have the person doing the scan be willing to kill you, which is dubious. It also isn’t the situation here, where the person doing the scans doesn’t want anyone to die.

    1. el_b says:

      There’s an episode of The Outer Limits where humanity meets aliens who have teleporters. The problem is they just copy you, and the original has to submit to death to keep things balanced.

      1. Daemian Lucifer says:

        Something similar was used in The Prestige.

        1. silver Harloe says:

          Though in that movie, he chose “one will die” because he didn’t want to deal with the complications of copying himself. In fact, there’s a time span during the trick where there are two of him alive and running around. If the one backstage chickens out of getting in the tank, then he’s going to have issues with himself – but there’s no mystical “balance” he has to maintain.

  26. Daemian Lucifer says:

    Surprised you didn’t title this one “Some Canadian died!”

  27. Fizban says:

    I don’t really understand how people can debate this, because to me it’s mind-bogglingly simple: you are you. A copy is not you; it’s a second you. If the copy is aware it was copied, it also knows it’s the second you; otherwise, it knows something strange happened, at the least. I’m rather annoyed by the “cult of continuity” in the game, because it’s taking the correct word and using it for something dumb.

    Continuity is the correct word: you are the continuation of you. The only way to become something else while remaining you is to do so via gradual replacement. Your mind is a program that never stops running, built on a meat computer of processors and storage media. Make a “scan” and put it onto new storage media, and it’s something new. Replace the pieces one by one while the program is running and you remain yourself, including the changes that happen as a result of becoming a robot, like un-simulated biological processes.
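    Fizban’s distinction has a tidy software analogue: replacing pieces in place preserves identity, while a snapshot creates a brand-new object, even when the contents end up the same. A minimal sketch – with the caveat that object identity is standing in here for “continuity,” which is of course the very thing under dispute:

```python
brain = ["neuron"] * 5           # stand-in for the running "program"
original_id = id(brain)

# Ship-of-Theseus style: swap the pieces one at a time, in place.
for i in range(len(brain)):
    brain[i] = "chip"

print(brain)                     # ['chip', 'chip', 'chip', 'chip', 'chip']
print(id(brain) == original_id)  # True  -- the same entity, transformed

# Scan-and-copy style: identical contents, brand-new entity.
scan = list(brain)
print(scan == brain)             # True  -- same information
print(id(scan) == original_id)   # False -- a second thing, not a continuation
```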

    A significant catch remains: a computer could probably free up some system resources by stopping unnecessary programs and shifting stuff around so nothing is “lost” when pieces are replaced, but even if we knew how the brain worked, that doesn’t mean we could force it to do that. So you would remain you, but you’d also know that the transformation could have resulted in damaged memories or altered thinking: yanking out pieces of the computer while it’s running and replacing them with new pieces, even if they’ve been filled with the same data, will probably cause some sort of problems. The only mitigating factor I can see is performing the process as slowly and in as small steps as possible; there should be some limit under which the system will avoid errors. But you’re still unambiguously yourself, transformed but yourself.

    A copy of you from before said transformation is obviously a copy, of your old self (of whatever quality it is). Might be good to have a reference to see how much the transformation changed, but you should also be able to remember your old self anyway (unless data/program corruption has occurred, which this might help to check).

    1. Christopher says:

      This is where I come down on it too. Say you make a painting in Photoshop, and after first saving it as Me1.psd you save a backup file named Me2.psd. It’s technically identical, but that doesn’t mean it’s not a different file.
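      Christopher’s point can be checked literally: “byte-for-byte identical” and “the same file” are different questions, and the operating system answers them differently. A minimal sketch (the .psd contents here are stand-in bytes, not a real Photoshop file):

```python
import filecmp
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
me1 = os.path.join(workdir, "Me1.psd")
me2 = os.path.join(workdir, "Me2.psd")

with open(me1, "wb") as f:
    f.write(b"painting data")
shutil.copyfile(me1, me2)  # the "save a backup" step

print(filecmp.cmp(me1, me2, shallow=False))  # True  -- identical contents
print(os.path.samefile(me1, me2))            # False -- still two distinct files
```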

      1. silver Harloe says:

        Now you put your backup on a disk, rename it to Me1.psd and give it to someone else.
        Jokes on them, right? They thought they were getting the original, but they got a copy. Ha ha.

        Except, no, as far as they are concerned, they did get the original. Wait a week and tell them “ha ha, all this time you’ve been working on a copy!” and they’ll just look at you funny.

        1. Christopher says:

          I guess that’s where the metaphor breaks down. Nobody cares about which digital file is the original, but I sure would care if my copy started using my name and hanging with my friends while I was stuck doing my homework. _They_ might care more or less, considering he’s my exact clone, though.

    2. Daemian Lucifer says:

      > Your mind is a program that never stops running

      Except when you are knocked out. Or in a coma. Or every time you go to sleep. There are plenty of times when someone’s continuity gets interrupted, or distorted, or erased.

      Also, you fail to consider the situation presented by this game: Simon gets copied, then the copy gets copied, then the copy gets copied a second time. But you, the player, experience stuff from the real Simon, then the first copy, then the second copy, then the second copy again. Which one is the real Simon? The one that got copied first, or the one whose point of view you have been following?

      Or better yet, what happens when someone does a duplication like in Farscape, where neither of the two is a copy, both are the original, only duplicated?

      So yeah, it seems simple on the surface, but if it were, it wouldn’t be pondered by countless genius philosophers for millennia.

      1. Jabrwock says:

        The upper level processes maybe, but there’s some system control daemons that are always running, until you’re dead. Somebody has to sort the day’s memories and file them away. Part of what makes you “you” might be in how those systems choose which memories to store.

    3. silver Harloe says:

      The only way you know you have been continuously you is because you remember it that way. It’s an illusion. And each of your copies will remember being continuously themselves and fall for the same illusion.

      1. Fizban says:

        It doesn’t matter that your copies think they are you; they’re still separate beings, and as long as you yourself can grasp that fact, they can too. If you start up a copy while the original is running, they don’t become some sort of hive mind (unless they actually are linked, since that’s the other way to move to a new body: by adding new hardware and migrating – but copy implies independence), though I don’t think anyone’s seriously suggesting that. Anyway, the whole idea that there’s a “real” one is barking up the wrong tree. It doesn’t matter which one is “real”; what matters is that you are you, and you’re stuck in your body.

        (Spoilers:) This is addressed at the end of the game, when Simon just doesn’t get the fact that he was never going to “get on” the ARK. They describe it as a random chance, but it’s not that either: no coin flips, the one in the robot stays in the robot and that’s it. Two Simons exist and one’s on the ARK, but the one in the robot never stopped being in the robot.

        (End Spoilers). As for sleep, Jabrwock made part of the point. As for further extremes, like someone going brain dead and then making a miracle recovery, that’s still booting from the same storage media. While brain dead you ceased to exist, all mental functions suspended, just like when Catherine is unplugged. I’d like to hope that enough info is stored outside of brain waves that waking up from brain dead is the same as waking up from sleep, but without more knowledge of the systems there’s no way to be sure. For practical purposes it’s good enough for everyone who’s still around.

  28. AR+ says:

    This situation doesn’t necessarily have a straightforward answer either way to the question, “is this the same person?”

    The answer might be, “it’s as much the same person as it’d be if you just went to sleep and woke up again.”

    “So, it’s completely the same person?”

    “That’s not what I said.”

  29. Daemian Lucifer says:

    > Does he think that he’ll shoot himself in the head, and then suddenly find himself in a robot or whatever?

    No. The idea comes from projecting the idea of quantum wave function collapse onto the macroscopic world (not a good idea). Basically, once the copy of Bob is made, there are two universes: one where Bob sees things from his original body, and one where Bob sees things from his new body. These two exist simultaneously, and which one Bob will perceive, from his point of view, is a 50/50 bet.

    But if Bob decides to kill himself after the scan, that universe stops existing*, so the chances of Bob experiencing stuff from the other universe are now 100%. Bob won’t remember that he killed himself and suddenly woke up in the simulation, because from the moment of being copied, Bob perceived stuff from the “copy universe”.

    The problem with this is: Even if we accept this as true, and the act of deciding to off yourself is what increases your chances of being in a copied body, what happens if, after the scan, you change your mind? Or if you get prevented from offing yourself by the unforeseeable actions of others? Or if you botch your suicide attempt? Or if the copy gets corrupted and destroyed before the original gets to die?

    *Rather, Bob stops perceiving stuff from that universe, while the universe still exists, only without Bob to experience it.

  30. Pinkhair says:

    These people in the setting were under some pretty outrageous stress, in an environment that made it all too easy for small pockets of the already small population to be stuck together a bunch. And that’s before humanity went extinct on the surface.

    And it isn’t like suicide cults with much shakier ideas about hitching rides on passing comets haven’t been a thing in living memory in the first world. Catherine didn’t share their beliefs, and likely neither did many of the folks who were involved, but in an environment like that it would only take one charismatic person to get the people liable to go for it on board with an idea like that.

  31. Parkhorse says:

    It sounds like Bob is the sort of person who would be worried about Roko’s Basilisk.

  32. Mersadeon says:

    Just as an anecdote, I know someone with a belief that completely baffles me. So, I know lots of differently-believing people. And to some degree, what we believe and what we hope is real isn’t always the same. I’m an Atheist, but I wish there was a life after death, because I’m scared of it. Pretty simple.

    I’ve met someone that is the other way around, and I don’t get it.

    She believes with absolute certainty that SOMETHING happens after death, reincarnation or an afterlife. But she hates that. She wishes there wasn’t anything. Oblivion, non-existence, the most scary thing I can think of, is what she hopes for.

    1. Daemian Lucifer says:

      I don’t fear death, nor oblivion, at all. But dying itself? That I dread. Especially if it’s a slow and agonizing death, like from a terminal illness. Which leads into my worst fear: Eternal life without eternal youth or invulnerability. Lucky for me, that’s probably not a thing that can happen. Maybe. I hope.

      1. el_b says:

        You should check out Torchwood: Miracle Day… it’s basically about the whole world being given that. It drags on in parts, but it’s got some interesting ideas. At the least, check out SFDebris’ runthrough of the miniseries :)

        1. Daemian Lucifer says:

          I saw both. Because I’m a fan of both John Barrowman and Chuck Sonnenburg. Torchwood was full of interesting ideas.

    2. silver Harloe says:

      Eternity is a really, REALLY long time. Especially to not be able to interact with the world. Oblivion is preferable to a googol years of boredom. And after a googol years you’ve still not even started to touch eternity, because it goes on forfreakingever.

  33. 4th Dimension says:

    For me the issue is simple. Both the robotic and the flesh you are you, running the same initial program on different hardware.

    There is another reason why, for me, the ARK fails to preserve humanity. Oh, from the point of view of the last remnants of humanity it’s their best shot, but it’s also terribly limited. The ARK doesn’t give the humans any agency in the real world. Without agency, they are not living any more than those kept alive by the WAU are living.

  34. silver Harloe says:

    Shamus keeps saying these biological feelings are an important part of “self” and simulating a brain, but I’m unconvinced. Prosthetic arms don’t itch, yet we don’t tell those people they died and someone else is living their life. If someone’s genitals are damaged and stop making certain hormones, we don’t say the person died. If someone got a prosthetic body full of artificial organs, but their brain remained intact, we’d probably grant that person “lived through it.” (At least we seemed willing to for RoboCop, though Omnicorp’s lawyers disagreed).

    Back to a person who still has all their squishy parts: If we replaced a single neuron with a chip that had the same inputs and outputs, have we killed that person? A hundred neurons? A million?

    1. silver Harloe says:

      So why is Simon any different than RoboCop? Both will not itch or be hungry or lustful.
      Because RoboCop has a squishy brain instead of a simulation of a squishy brain?

    2. Shamus says:

      No no no. I’m not really saying that feeling your body is an important part of being you. I’m using it as an example of the millions of little details that would go into simulating a person and the kinds of questions it raises. I realize it’s not clear, because it all comes out as a rambling back-and-forth conversation, but the “does your nose itch?” question was me looking for the most innocuous little details that an engineer somewhere would have needed to think about when building this machine.

      So many of the thought experiments in this game begin with the premise of the machine that makes an “exact copy” of your brain. I’m pointing to all these little biological details as a way of demonstrating that the copy is clearly NOT an exact copy. It’s different somehow in obviously superficial ways. And if all of these superficial, easy-to-observe details are different, then what ELSE is different? How deep do the changes go, and how can you possibly measure them?

      Game: If I make an exact copy of your brain, is that copy you?

      Me: You’re obviously not making an exact copy. I have no idea how deep the imperfections are.

      1. Thomas Adamson says:

        What if I scan your brain, surgically remove your real brain, and then put an exact simulation of your brain back in control of your body?

      2. silver Harloe says:

        But other than anxiety, you're basing all of your "that's obviously not an exact copy" argument on external inputs being missing.

        Now let me philosophize briefly on anxiety and emotion: you say these are imperfections in the bio brain and thus we shouldn’t re-implement them in the robo-brain. But then the robo-brain can’t be an exact copy. So let’s consider two cases:

        1) you want to actually create an exact copy of a bio-brain, but in software. Obviously, then, you'll implement a hippocampus, amygdala, hypothalamus, et cetera. [These play strong roles in emotion even without hormone inputs from the rest of the body. Your software brain will experience emotions because of these elements, and they'll mostly be recognizable human emotions.] So you no longer really have an argument that there are superficially, obviously uncopied bits of brain that lead into wondering what else is copied poorly. Unless you disagree with the two sentences I enclosed in []s, of course. Which is a fine position to take, given that humans don't really understand the human brain yet.

        2) you want to implement a robo-brain from scratch that cares only about the needs of the robo-body and isn't "designed" by evolution to have feelings and inputs related to maintaining a squishy body. I would argue first that your robo-body may not be AS squishy as a human body, but it still needs maintenance. So you'll probably want sensors to detect damage and report that damage to the robo-brain. You'll still want a sensor to detect that your battery is low and needs a recharge.

        You may think, "well, sure, but why make them like unto pain and hunger?" I can't confirm this, but I think that if people registered pain as "just" a notion and hunger as "just a number telling you it is low," a lot of people would starve to death, or push their bodies too far and break them and die. I can't cite real sources here, but my gut tells me we experience hunger as a drive not just because we're imperfect biological squishy beings, but because previous generations that felt too much or too little hunger died off from doing too little or too much before prioritizing eating again, respectively. I feel that we evolved to the "correct" level of hunger-as-an-emotional-factor. Similarly for pain: if we don't prioritize pain highly enough, we leave our hands in the fire too long, but if we prioritize it too highly, we can't wash in hot water. I could be way wrong here, and there are certainly arguments to be made that I have no fisking clue what I'm on about. But I'm going to press on as if I do know what I'm talking about.

        What about other emotional states like anxiety or stress or boredom or love or hate? Surely robo-brains need none of that business. We can just design them not to have a "dumb" emotion like boredom. Except they are still physical beings in a physical world that is changing around them. They still have to make decisions in "real time" about what to do next – they can't spend a decade deciding whether to run or stand their ground when faced with a threat. They *need* shortcuts to get out of analysis paralysis. And that, in my view, is what emotions *are*: handy heuristics that keep us from spending all our time thinking when we need to act.

        Perhaps if the robo-brain can think a million zillion times faster than us, emotions may seem superfluous, since it can analyze so much more data and so many possible outcomes and come to the "right" decision without any shortcuts or heuristics. Or perhaps that is a *total waste of energy*, and it needs to think slower and shortcut more, so it isn't going through its battery every minute thinking a million times faster than real-world operations require. Again I defer to evolution and suppose there were probably ancestors of ours who thought a bit faster, but they needed more food and couldn't get it and died off. I could be way wrong here, but I suspect we feel anxiety as a side effect of having important cost-saving heuristics for dealing with scary situations. And I suspect that if we tried to build robots without those heuristics, their energy demands would outstrip their energy-gathering abilities.
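
        To make the "shortcuts" idea concrete, here's a toy Python sketch – every name in it is made up, and it's a cartoon of the argument, not a design. A couple of cheap hard-wired urgency signals pre-empt the slow, expensive deliberation path, the way pain and hunger pre-empt idle pondering in us:

          import random

          def expected_value(option):
              # Stand-in for a costly simulation of possible outcomes.
              return random.random()

          def deliberate(options):
              # Slow path: exhaustively score every option (expensive).
              return max(options, key=expected_value)

          def act(sensors, options):
              # Fast path: hard-wired heuristics pre-empt deliberation.
              if sensors["damage"] > 0.8:    # "pain": protect the body first
                  return "withdraw"
              if sensors["battery"] < 0.1:   # "hunger": refuel before anything else
                  return "recharge"
              return deliberate(options)     # only think hard when nothing is urgent

          act({"damage": 0.9, "battery": 0.5}, ["explore", "wait"])   # -> "withdraw"
          act({"damage": 0.1, "battery": 0.05}, ["explore", "wait"])  # -> "recharge"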

        In short, I think AIs *have* to have emotions. They may not be recognizably human emotions, but certainly AIs based on human brain-scans would have human emotions. So we’re back to your lacking evidence that the copies are imperfect – that you’re making a distinction between RoboCop and Simon that you don’t really *know* you can make.

        Of course, this is all speculation. I know significantly less than the human race knows about brains, and the human race knows significantly less than it needs to in order to provide real answers to these questions.

        1. silver Harloe says:

          Side note: part of the reason we have so many "imperfections" like itches and hormones and whatnot is that we're a zillion times more complex than any machinery we've come up with. We can take inputs of basically water and "just about any other formerly living material you can think of" and not just maintain and grow our bodies; about half of us can use those inputs to make a whole new instance of ourselves.

          All the robo-bodies we’ve thought of would need industry and factories to make spare parts to maintain themselves. Damage to your arm? Get a replacement. Not like your arm will just freaking heal if you eat “pretty much anything you chew”. We think of them as miracles of science, but generally we conceive of robots and androids that are *far too simple*.

          A robo-body complex enough to self-heal based on random things ingested will be sooo complex that it will probably need subparts that communicate with each other in summary form – otherwise there would be too much data for the brain to process. After all, the brain doesn't really need to know "cell #2334313461 needs 1% more iron"; it just wants to know "subsystem #23 (digestion) operating at normal capacity, but has some bits it couldn't integrate into the body which need to be disposed of in the next few hours to make room for more input." Side effects of a self-repairing, self-lubricating, self-protecting skin mechanism may include occasional sensory confusion like… itching. There are probably ways to make a complex skin mechanism that never feels anything "incorrect" like an itch, but maybe they require processing power that can be better spent elsewhere.
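
          A toy Python sketch of that summarizing (made-up names again): the "brain" never sees cell-level detail, only the one-line report each subsystem boils itself down to.

            from dataclasses import dataclass
            from typing import Optional

            @dataclass
            class Report:
                subsystem: str
                status: str                        # e.g. "nominal", "degraded"
                action_needed: Optional[str] = None

            class Digestion:
                def __init__(self):
                    # Millions of cell states in a real body; a handful here.
                    self.cells = [{"iron": 0.99} for _ in range(1000)]

                def report(self):
                    # Cell-level detail collapses into a single summary line.
                    low_iron = sum(1 for cell in self.cells if cell["iron"] < 0.5)
                    status = "nominal" if low_iron == 0 else "degraded"
                    return Report("digestion", status, "dispose of waste soon")

            # The brain reads one report per subsystem, not a billion cell states.
            print(Digestion().report())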

          I think humans *are* robots – super complex ones that make our best engineering look like crayon drawings made with our off hand. I feel that a lot of the reason you think Simon won't be a proper copy is that we're going to move him from an amazing machine into baby's first tinker-toy gear wheel. And maybe that's a really good point, but maybe it's just the same question I've had before: why is Simon different from RoboCop? Both are inheriting robo-bodies, with all their incredible simplicity.

          The best guess I have is “if we can only make a simple robot body, what the heck makes us think we can make software sophisticated enough for all the complexity of the human brain?” And that’s a good point, but firstly I’d contend our computers are vastly more complex than our machines, and second I’d contend that “it’s a given of the game” that they can make computers that sophisticated. They’re obviously sticking them in terrible bodies, but que sera sera. RoboCop got a crap body, too (from my point of view).

          Finally, in case it wasn’t clear or my writing has the wrong tone, I’m not trying to argue with you but discuss with you. One thing I know for sure is that I don’t have all the right answers, and I’m guessing you don’t, either, and that even between us we’ll be pretty far off the mark and people a couple centuries from now will wonder how grown adults could get the things they learned in middle school so laughably wrong. Nevertheless, I feel like we can fruitfully talk about this :)

          Well, I feel like we can talk about this provided we both agree with this basic hypothesis: non-living matter can be “arranged” in a way we consider living, and unintelligent matter can be “arranged” in a way we consider intelligent. This “arrangement” is complex, but nevertheless possible – possible enough that about half our population can make this arrangement, sometimes without even meaning to do it.

  35. Daemian Lucifer says:

    The best answer to the Ship of Theseus problem:

    https://www.youtube.com/watch?v=BUl6PooveJE

    1. Sunshine says:

      Or at least, the most concise telling of it.

  36. Sunshine says:

    There was no room in the philosophical discussion to mention Catherine asking "What the hell are you doing?" when Josh fists the WAU again. Simon's response is worth highlighting because it fits both the story and Josh's reasons: "I know it looks bad, but it makes me feel good."

    Is it right to assume that the immobile guy in the basement and the monster have been pumped full of structure gel by the WAU in a well-meaning attempt to preserve them, and that using the sphincters means Simon risks going the same way? (Or is that spoiling the story for later?)

  37. JJR says:

    I think Shamus falls into the trap of assuming the mind-body problem is solved and physicalism is the only reasonable solution. There are some big flaws with it (notably the Chinese Room, China Brain, and Inverted Spectrum problems) and it's by no means a finished topic in philosophy. Bob could just as rationally come to a conclusion other than physicalism, wherein his consciousness is separate from his body. It seems kind of dismissive to frame this conclusion as a product of being desperate for hope and believing irrational ideas, when it can be constructed completely rationally.

    That being said, I'm sure Shamus probably isn't actually dismissive of non-physicalists; it just sometimes rubs me the wrong way (and I'm a physicalist too).

  38. Arki says:

    The funny thing is that all this blathering about ships and realness has a very simple answer. Things are what they do. If something does what something else does, they’re both the same thing. Everything that performs the same function is the same. All ships ship, and some are more alike than others according to design and shit. The only thing that really changes in building a new ship is weird societal constructs concerning ownership or whatever. Alternatively, it’s just a troll argument about semantics.
    And a brain isn’t flawed. To have a flaw, you must have a proposed purpose. This means you’re musing on your own uncertainty about what an ideal mind would look like, not on whether particular features would be included, which is a perfectly fair thing to do. The value seems to be on preserving Human, and Human involves all those things. Just as preserving Baseball would mean having everything that Baseball does. Baseball would be better at Cat if we changed it a bit, but we’re not trying for Baseball-Doing-Cat.
    There are no significant differences between a “machine” brain and the sort we have, as… for one, those others don’t exist yet, so we can’t make judgment calls on their fidelity. For two, it seems clear that the purpose is to have brainactivitystuffpatterns highly similar to what happens in us. Same pattern, different materials. And we wouldn’t want to sound materialistic, eh? Playing Crash Bandicoot on an emulated Playstation on your computer is still playing Crash Bandicoot. Even with hiccups and a different controller. And savestatescumming.
    A simulated world, if you value the pattern of Human stuff over the materials, is just a muuuch more efficient version of weird things touching butts.
    Eh, all rambly. I do like that there's thought going on for this stuff, but it does feel like Shamus is talking more about the difficulty of achieving a copy than against the idea of a copy, y'know? At least, that's what the points he brings up mean to me. And in the game, the copy is assumed to already have been a success. Probably. Within an unknown threshold of error. So it all sounds a bit inane in that context, though these are still valid considerations, as it's rather important to define terms and thresholds.
    Oh! On the people killing themselves! One angle is so they don't die, ya dig? Using reasoning similar to what I think you lot use in regards to whether a copy is "you". See, if they continue after the scan is complete, then from that splitting point they become a "different person", and so properly die while a "different person" is in the simulation. If they terminate themselves, then there's no divergence. Say years go by after they took the scan – the divergence only grows. I mean, maybe there's some weird spiritual thinking going on, but aside from that it seems like a solid reason. With which I do not agree.
    Of course, my own view is that we're changing all the godsdamned time and mostly have no idea what we are or what we're doing, and so should be fine with a copy not looking exactly like us. We never look like ourselves to perfection, and we always want to change things about ourselves. Learning about stars, picking at scabs less, overcoming a crippling fear of deep water that prevents you from even playing games like SOMA… I'm me because I'm still Arbitrary Degree similar to the vague haze of behaviors and thoughts I usually carry out, not because I'm exactly who I was yesterday – that'd be fucking weird. If my arm goes numb, if I lose my sense of smell, if I acquire AR glasses… Though I suppose that's another issue? Thinking of ourselves as static pictures frozen in time, rather than the reality that we're kinda doofy and constantly changing. Every time you remember something, it's altered a bit. Every moment that passes. Walking into a room with pink walls, rather than a room with green walls. Happening across candy laced with a non-damaging though still perception-altering substance. Talking to another person. Reading an entirely too long, ranty, and likely redundant comment.

    Summary? Things are what they do. What I do is change. Therefore, vaguely resembling me is a sufficient copy. Me, but I like chocolate. Me, but I spent three weeks in a harem. Me, but three years from now.
