A Matter of Life and Geth

By Shamus Posted Monday Mar 7, 2011

Filed under: Video Games

Last week Reader Cody211282 was having an interesting debate in the comments about the nature of AI. Should the Geth in Mass Effect 2 be considered “alive”? Here is just a bit of the exchange:

But they aren't living, they are machines; it's like comparing my computer to my dog (I know I'm dumbing it down a bit here): they can both do basic tasks and I love them both, but one of them isn't alive and the other is. The Geth have software that dictates what they do; even Legion says that the only reason the Heretic Geth follow the Reapers is because of a computer glitch.

Points to the Mass Effect setting for this. This is exactly what I love about science fiction. When it’s good, it brings up questions about what life is, what intelligence is, how we perceive the universe, and why humans behave the way they do. When it’s really good, it asks these questions in a way that leads to bigger questions, and allows the reader to ponder the answers for themselves.

In this particular debate, Cody is advocating the idea that no matter how smart a robot is, it’s never entitled to be regarded as alive, because it’s a machine. Which is true – it’s not alive. But a lot of sci-fi fans (myself included) would insist that it be treated with [some of?] the rights we give to humans, simply because it’s intelligent. That is, your person-ness is based on intellect, not on construction methods.

But if someone accepts the position that persons must be organic, then as a sci-fi author you can play around with the idea by forcing the reader to figure out where they really draw the line. What if we built a computer, but used neurons instead of transistors? (But it otherwise operated like a normal computer. Maybe build yourself a nice organic Linux box, for example.) What if you built a robot, but you used an organic brain? What if you made a synthetic brain that operated identically to the human brain, Asimov-style? What exactly denotes personhood? The design, or the building materials?

Douglas Adams played around with this idea in The Restaurant at the End of the Universe when he introduced an intelligent animal that had been bred to be eaten:

‘That’s absolutely horrible,’ exclaimed Arthur, ‘the most revolting thing I’ve ever heard.’

‘What’s the problem Earthman?’ said Zaphod, now transferring his attention to the animal’s enormous rump.

‘I just don’t want to eat an animal that’s standing there inviting me to,’ said Arthur, ‘It’s heartless.’

‘Better than eating an animal that doesn’t want to be eaten,’ said Zaphod.

Of course the real problem here – and the one they never talk about, because it would kill the humor – is that Arthur is actually upset at eating something intelligent. In his mind he’s granted personhood to the creature, which takes it off the menu. The question isn’t “does this animal want to be eaten?” but “is this creature capable of understanding the question in the first place?”

A lot of sci-fi authors maintain that intelligent beings will naturally desire to survive, express themselves, find acceptance, and experience love. I’m on the record saying that I don’t think this is the case. In that article I said…

The conventional wisdom in science fiction is that any artificial intelligent beings would naturally adopt the same drives and goals as Homo sapiens. That is, they'll fight to survive, seek to gain understanding, desire to relate to others, and endeavor to express themselves. Basically, Maslow's hierarchy of needs. Fiction authors routinely tell us the story of robots who want to make friends, have emotions, or indulge dangerous neurotic hang-ups.

But I don't think this necessarily follows. I think it's possible to have an intelligent being – something that can reason – that doesn't really care to relate to others. Or that doesn't care if it lives or dies. “I think therefore I am” need not be followed by “I want to be”.

Shepard Commander. We once captured a gill-bearing aquatic vertebrate of this size.

It doesn’t matter if you think the drives of organics came from a creator God, or through millions of years of merciless Darwinistic vetting. (And please don’t sidetrack this with THAT debate.) Organic creatures [believe that they] have purpose, and their drives flow naturally from those. If you asked a human what their “purpose” was, you’d get a lot of different answers. “To enjoy life”, “to understand the universe we live in”, “to make the world a better place”, “to be loved”, “to die with the most toys”. Few people would say that the purpose of their life is to simply survive and propagate the species.

Say Johnny, what do you want to do when you grow up?

I want to find a mate and reproduce according to my biological imperatives!

But what is the purpose of the Geth? What do they want, beyond basic survival? Or rather, what would the Geth say if you asked them that question? It’s obvious that the Geth are more or less classical sci-fi AI that – as soon as they become sapient – immediately acquire all of these biological imperatives for survival of self and kind, not to mention the practice of making such distinctions. (Hey, this other living thing is unlike me, so I value it lower than my own kind.) That’s fine. An AI like the kind I write and think about would feel strange to people who take familiar sci-fi robot tropes for granted. It would require a lot of exposition to get the audience on the same page.

I’ll probably scratch this itch by writing some AI fiction again someday. But in the meantime it’s fun to play with and speculate about the ones from BioWare:

What DO the Geth want?

 



343 thoughts on “A Matter of Life and Geth”

  1. Josh R says:

    I think that all the geth want is to be left alone and sit on the internet playing computer games all day like an ADHD teen.

    1. Shamus says:

      It would be interesting to know what kind of games amuse them. Assuming they don’t have the will to dominate the way humans do, an FPS might not interest them.

      Pattern-matching and sorting games are probably a sure thing, though. Anything intelligent will probably enjoy combating entropy and making order.

      I can see it now, all those server farms with millions of Geth. All playing Bejeweled.

      1. poiumty says:

        Go do Liara’s DLC and you’ll see Legion’s gamer profile. Contrary to your reply, he’s quite a fan of Medal of Duty and Galaxy of Fantasy.

        1. Kanodin says:

          I think it’s safe to qualify that as more of a gag instead of real insight into who Legion and the Geth are.

          1. Will says:

            It’s entirely possible that the programs on Legion’s platform gain some form of amusement or satisfaction out of playing games.

            Plus, I always got the impression that acting as an ‘individual’ and having all those programs on one platform had made Legion go a little loopy by Geth standards.

      2. DanMan says:

        No, they would definitely play FarmVille. What better for a computer to do in its “free time” than dull, repetitive tasks?

        1. Felblood says:

          I’m sure Galaxy of Fantasy combines their desire to grind repetitive actions with their desire to explore and gather information.

          –and with his supercomputer reflexes and aim, playing Metal of Duty is just going to be a way to explore how people react to a perceived superior rival. No wonder he has so little faith in organics.

          1. Veloxyll says:

            I bet people accuse him of botting all the time though

            1. Aldowyn says:

              … I see what you did there. That was a good one.

            2. Bret says:

              They do.

              He’s had the charges overturned every time. On the other hand, the charges for unsportsmanlike behavior were quietly accepted.

              1. Felblood says:

                *H17_MaNnZ* Player, your level of skill is inadequate.

      3. Cerapa says:

        It would be interesting to see computers playing strategy games among themselves, since they would have perfect management and would snowball like hell.

        “You have achieved a 0.0001% advantage, GG”

        1. Daemian Lucifer says:

          – mate in 143 moves.
          – Oh, pooh. You win again!

          1. Irridium says:

            Oh no! Nerds!

            1. Raynooo says:

              Again with the Futurama references? Stop it!

              1. Daemian Lucifer says:

                Never! A deal’s a deal, even with a dirty dealer.

        2. Will says:

          Actually it would come down to strategy being the deciding factor, since micro and macromanagement would be equal (that is to say, perfect).

          So, arguably, it might actually improve the genre.

          1. Klay F. says:

            Actually, in Korean professional tournaments, strategy IS the deciding factor in Starcraft matches. Once you get your jaw off the floor after witnessing their speed, their strategies are quite ingenious. It’s just that at all other levels of play, build order and Actions Per Minute are the deciding factors. Another reason Starcraft 2 is slow to gain popularity in Korea is that no one wants to give up their tried and true strategies.

            1. Will says:

              Sort of. While that’s definitely the case in Starcraft I, due to approximately equal skill levels and APM, Starcraft II is sufficiently unknown that micromanagement can actually play a significant part. Players like BitByBitPrime are an excellent example of this; BBB basically defeated strategically superior forces by ‘out-tacticsing’ them. As the game has become better understood, BBB’s tactics have become less and less effective, which is why he was actually quite strong early in the GSL but no longer is.

              Assuming you have two equal computers, their technical ability is identical, so the ‘skill’ factor never enters the equation as both sides have exactly the same multitasking and APM ability, something which just doesn’t happen with people.

      4. krellen says:

        I would write the Geth as desiring to explore the galaxy. The entire galaxy, relays or no, sealed or no. They don’t have organic life-spans to cope with, and clearly can operate even outside the range of stars (nuclear power supplanting solar power, I’d assume), so there’s nothing stopping them from exploring, analysing, and charting every square nanometer of the galaxy.

        It would be really interesting to have a sci-fi universe full of robots who completely ignore organics except when they interfere with their task of mapping the universe.

        1. Sumanai says:

          And then they would burst into a song so moving that the organics get out of the way.

      5. Kale says:

        I imagine that if they do play Bejeweled, it’s going to be a variant we couldn’t hope to match. Supercomputer functioning and hive mind ability would make “match three of the same color and shape” games less than challenging very quickly. Maybe something like “create a Fibonacci spiral from a minimum of X polygons with equal numbers of corners and alternating colors corresponding to the absorption/reflection rules of color wavelengths (blue and yellow as an example)”. More complex patterns, basically.

        I don’t quite remember what Tali said the Quarians programmed their AI to deal with so I don’t know what the core computing interests would be for me to speculate more.

        1. Will says:

          The original VIs that made up the Geth were basically designed to do anything and everything. That’s why the networking thing was added; basically, the Geth were supposed to be general-purpose labour.

          1. Chargone says:

            so they probably like building and maintaining things. among other stuff.

      6. lazlo says:

        I think it’s very clear. The Geth would be playing Minecraft. They’d be on a single shared server, and they would spend equal time building and creating new mods. These mods would become increasingly complex, detailed, and realistic. As a matter of fact, you, me, Earth and the universe we live in are one such extensive set of mods. The game-challenge they set for themselves this time was to create a set of initial conditions that would cascade into an in-game sentient being who would write fiction that contained the Geth (they’re very meta). Having succeeded, expect the universe to be halted and wiped for the next challenge any day now.

        1. Uristqwerty says:

          Yes, but what then? Perhaps they play it again, with the goal of achieving an indistinguishably similar situation that then goes on to make a certain number of recursively meta references to it, highest score wins?

          1. Chargone says:

            … because that’s not a meta-game at all

            yeah, not sure if that joke works or not.

          2. Maldeus says:

            And I suddenly get the urge to try and make as many recursively-meta references as possible, in order to up the score of our shard and win the game for our seeder. I think your comment was his masterstroke. Now if only he can spread it further…

      7. Jennifer Snow says:

        Will to dominate? Actually most of the humans I know don’t have any real will to dominate. I’ve always found that FPS’s hold my attention (when they do) because I have, rather, a “will to figure this shit out”. I mean, when I “beat” a level, I’m not calling the devs up and yelling “HA HA I BEAT YOUR STUPID LEVEL HOW DOES THAT MAKE YOU FEEL!!”

        1. Specktre says:

          I think what Shamus means by “will to dominate” and FPS games is that humans, by nature, seek to control things, which can be coupled with one of our more base instincts: violence. In this particular instance, how does one achieve victory? By beating the crap out of the other guy.
          In a game with some form of combat, you have to take control of the situation, or the threat, by violently putting it down. You are doing so because, presumably, there is a goal or a prize you want (and the other guy probably wants it too).
          To do so, you take control of the map, weapons, vehicles, resources, flag, etc.
          Also, placing a weapon in your [virtual] hands gives you the power to control when someone lives or dies.
          Also, think of the phrase people shout:
          I OWNED YOU! or We just owned.

          At the heart of all combat in games there is something that needs to be controlled – a threat or whatever else – in order to win or proceed.

          Take a generic Team Deathmatch of CoD, Halo, TF2, etc.:
          The objective is to win–how do you do that?
          Kill the enemy–how do you go about that?
          Take control of the space (chokepoints, high ground, etc.)
          And so on. Same with an RTS, but with many more steps.

          Basically, what Shamus is referring to is a very base instinct at a bare-bones level.

          See what I’m saying?

          1. Will says:

            ‘Will to succeed’ is probably a better name for it.

            1. Specktre says:

              But a “will to succeed” can be applied to anything.

              The point I was trying to make – which seems to have gotten somewhat lost in my original post – is that humans, by nature, desire to control things, which can often be coupled with a very base hunger for violence.

              There is power in wielding a weapon, giving you the ability to control the fate of a life. This can be described as a “will to dominate” as killing is the violation of one’s right to life.

        2. Simon Buchan says:

          He means competitive online play.

      8. Kaeltik says:

        Radu Popa, in “Between Necessity and Probability: Searching for the Definition and Origin of Life,” offers a parametric definition of life that is independent of “size, time frame, or material nature.”

        “From a functional perspective life is defined as any strategy using internally replicated information to build and maintain negentropic energy-dissipative entities and to gradually adjust their functioning to spatial and temporal fluctuations.”

        This section is about as opaque and technical as the rest of the book, but the point is that he thinks that “life” is a strategy and any entity that follows the strategy is “alive,” regardless of its makeup or reproductive strategy. This definition would seem to encompass all cellular life, robots of the Von Neumann variety, and expansive AI, but not viruses, prions, conjugative plasmids, crystals, or fire.

      9. The animal who wants to be eaten and the machine who wants to serve/be enslaved – both raise the same issue. I would expect that properly programmed machines that manifested AI would want to work for others.

        Seems like it would be an inherent seam in the Geth’s programming … and may be why they attempt to kill organics – which I assume you were implying.

        Though intelligent bombs and missiles have had this particular issue in some short SF stories from time to time.

    2. Ahemmable says:

      The Shadow Broker’s dossier on Legion supports this thesis.

  2. Aergoth says:

    Addressing the comment that started this:
    I think the important thing here isn’t whether or not the geth are alive, but rather that they are sentient and sapient. They are capable of judgment and they are capable of experiencing and perceiving things. While the geth are created intelligences, the fact that they possess both of those traits makes them more fundamentally “alive” than, say, your dog, or an insect. We are dealing with something that is capable of reasoning and communicating as an equal.
    If we follow the argument that the geth are controlled by software, then really, so are humans. All of the important stuff is autonomous. You have no more control over the internal processes of your body than a computer does over whether or not the cooling fan spins. Your brain and physiology are affected by a slurry of chemicals on a regular basis. Your brain activity, much like a computer running, is simply a series of electrical impulses. The only difference is that one has been built by you, and the other is the result of a biological process.

    What do the geth want? In the short term, I think they’re motivated by survival, as seen in Legion’s loyalty mission. The geth want to survive, so they remove the threat to their continued existence posed by the heretics’ actions towards the rest of the galaxy. The geth have potential and they want to realize it.

    1. Hitch says:

      We could facetiously point out that the Geth do not actually demonstrate any survival instincts, because they keep pointing weapons at Shepard and we see how that always ends. But that’s unfair, because it’s also true of every biological enemy in the game.

      Now, that could be interesting in a game with the ability to make more choices and a real system of morality. The standard rules of engagement might say, “always start with the minimum force necessary to suppress the enemy and give them the opportunity to surrender.” But then, when faced with an AI enemy like the Geth, you’d be told, “They’re just mechanicals, open up with everything you’ve got.” What happens if a Geth tries to surrender?

      1. Alexander The 1st says:

        http://www.giantitp.com/comics/oots0115.html

        That’s the first thought that came to my mind. I know Josh’s Regina Shepard would totally act like Belkar here.

      2. Jeff says:

        Isn’t this covered in Tali’s loyalty mission? Especially if you bring Legion along? Every time the Geth try to contact them, they get shot at.

        1. Mediocre Man says:

          Talk about unethical. I haven’t played Mass Effect yet, but I know who I’d side with in the Quarian vs. Geth war.

          1. Daemian Lucifer says:

            It’s a bit more complicated than that. Yes, the quarian ancestors did strike first, but these quarians are born on a ship, almost completely robbed of even a touch from their family, and the only geth they’ve encountered are the heretics who want to exterminate all organics. Yet even with such a mentality, there are those amongst them who want to make peace with the geth – even before they knew there are factions amongst the geth, one of which would be happy to make peace with the creators.

      3. Jakob says:

        Truly, does any volunteer soldier show survival instinct? The fact that some geth will go to war over what appears to be ideology indicates that they have some goals besides survival. I assume that most of the geth seek unity, indicated by the fact that they live in giant server farms and were shocked that the group was splintered by the appearance of Sovereign.

      4. Zak McKracken says:

        That’s a property of the game and each and every opponent in it. So it’s not a trait the Geth have but rather something the writers ignored for the sake of gameplay.
        I think the answer to many discussion aspects here should be “probably the writers didn’t really care about that”.

      5. Will says:

        It’s also probably worth mentioning that a computer has significantly more control over whether its cooling fan spins than you have over whether your heart beats. With training, people can learn to have limited control over things like heartbeat, but most computers have complete control over things like cooling fans.

        Given that the Geth are actually multiple programs operating on multiple platforms and they can mix and match programs to platforms at will, it seems reasonable to suggest they have complete control over the functions of the platforms.

    2. theLameBrain says:

      Why are we using “Alive” and “Sentient” interchangeably? I see no reason to tie the two so close together.

      Alive is an organic process that is still functioning. A person is alive, but so is the tree in my backyard.

      Sentient is an awareness that transcends simple drives or basic motivations. My dog really likes cat food, and snarfs it whenever she gets an opportunity and I am not in the same room, despite the tummy-ache she gets. Cyanide famously smells like almonds, but I would never ingest it knowingly, ’cause that would be really bad. I am sentient, my dog is not.

      Are Geth alive? I don’t think it really matters. Are they Sentient? That is a much better and more interesting question.

  3. Nathon says:

    You talk about organic life a lot here, with the implication that organic means the same as naturally occurring. I don’t see any real reason why life based on an element other than carbon (silicon, even) couldn’t pop up naturally on some distant planet. Even on Earth, we’ve found arsenic-based life.

    Defining what’s alive has been a problem for scientists for a long time. Just because something was created by people doesn’t mean it can’t be alive. Look at test tube babies and genetically engineered tomatoes.

    1. kikito says:

      It also works the other way around: it’s conceivable to create a completely artificial carbon-based “lifeform” by individually building all the necessary molecules and putting them together via artificial means.

      What if Cody’s dog had been “manufactured” in a nano-factory from a pattern, instead of grown in a canine womb? Would it be “downgraded” to the status of his computer?

      Another example: the girl in The Fifth Element. Would anyone really argue that, since she came out of a machine, she was not “alive”?

      “Aliveness” is a fuzzy concept at best, and a linguistic trap at worst. It might not even exist. It’s possible that the only valid measure is “consciousness level”. That separates dogs from regular PCs, but puts Geth on top of them.

      To me, however, it’s very clear that, if it exists, “aliveness” is independent from the materials or “manufacturing process” involved. If it exists, it has to be characterized by observable behavior only. Otherwise, we’d have to open Pandora’s box and question whether the human race is “alive” itself.

      I mean, for all I know, I could be the only “alive human”. All of you (including Cody) could be just complex biological machines pretending to be alive, just to fool me.

      1. krellen says:

        Note: Terminate Subject Immediately. Experiment is Failure.

      2. Moridin says:

        “I mean, for all I know, I could be the only “alive human”. All of you (including Cody) could be just complex biological machines pretending to be alive, just to fool me.”

        How do you know you’re alive? You could be a smart machine programmed to think it’s not a machine.

        1. Will says:

          You have to start with the assumption that your senses can be trusted unless proven otherwise, because if you don’t, you end up running in pointless circles with questions like “Do I actually exist?” :P

        2. Merle says:

          Because the experience of self cannot be an illusion. It is possible that I am a machine programmed to think that I am organic, but if that were true, I would still be a thinking machine.

          1. Will says:

            The experience of self can be an illusion, but assuming that renders all results meaningless, so it’s just a waste of time.

            It is important at this point to clarify that philosophers love wasting time :D

            1. Strangeite says:

              Descartes would say you are wrong.

              You can doubt your senses. Fine. I can get on board with that. You can doubt your own existence. Weird, but to each their own. But what you can’t doubt is the fact that you are doubting. Doubting that you are doubting is a logical paradox. Therefore, if you are doubting, then you must actually exist in order to doubt in the first place. Ergo:

              I think, therefore I am.

              It might be cliche, but there is a reason that people still know it by heart hundreds of years later.

              1. acronix says:

                I’ll derail here only to briefly point out that I’m finding your comments very interesting and insightful. It’s probably related to me liking philosophy, and most possibly to the fact that I’m very, very bad at it…

                Now we return to our normal commenting!

              2. decius says:

                I can choose not to doubt, but then I cannot doubt that I am choosing. Therefore, everything either doubts or chooses not to doubt. Therefore everything exists.

                Except that there are things which neither doubt nor choose not to. Existing is not a sufficient condition for thinking; thinking may or may not be a sufficient condition for existing. Consider an entity that has no senses or existence, but chooses not to doubt that it does. If it existed, it would experience a recursive hallucination. Does this entity think?

        3. Mediocre Man says:

          “All of you (including Cody) could be just complex biological machines pretending to be alive, just to fool me.”

          You’re saying that you could be in the Matrix and not know it?
          1. It doesn’t matter (red herring) because a brain in a vat still assumes the physical world
          2. As long as you have the ability to reason, it doesn’t matter since you will still be able to derive ethics and “the Good” (what the end into itself is).

      3. Chargone says:

        I want to make an Alpha Centauri reference here. I really do. But that would involve way too much effort to actually track down the quote.

        Expect to be terminated in short order, however :D

    2. Adam says:

      Trying to come up with definitions of life that apply to individuals is very tricky. For example, a worker ant – sterile, so it can’t reproduce – but almost everyone would class it as “alive”. Or a virus, or a parasite.

      (Oh, and the “arsenic based life” is bunk. Not that the idea is wrong, but it was a badly done experiment that doesn’t demonstrate what they claim it does.)

      1. Daemian Lucifer says:

        Viruses aren’t really considered alive, though.

        And about that arsenic life: it wasn’t a bad experiment, it was bad PR. They overinflated the results.

        1. Moridin says:

          You mean the media did. It wasn’t so much arsenic based life as life that could use solely arsenic instead of phosphorus.

    3. Zagzag says:

      Actually, we probably haven’t found arsenic-based life. The eventual conclusion was that the evidence was insufficient to actually prove that this was the case, and huge numbers of people claim to have found reasons why the experiment was flawed.

    4. Zak McKracken says:

      Re Arsenic-based life: Nope.

      That was a common misunderstanding among “science” journalists. The bacteria in question had some Arsenic in them, but were still based on carbon like everything else on Earth. The thesis (which is still being debated by lots of people much deeper in the topic than me) was that these bacteria used Arsenic instead of Phosphorus to “phosphorylate” proteins, i.e. to activate genes, chemical and biological processes and so on, and to store some energy. Last thing I heard was that some of the supporting evidence was shown to be less firm than thought.

      That doesn’t mean, of course, that life could not exist without carbon, but carbon is one of the most common elements in the universe, and it’s able to bond with pretty much anything, including itself, at “low” temperatures, so that makes it a very good candidate. Don’t know how silicon would fare, though, but it would probably need a completely different temperature/pressure environment to work.

  4. Kanodin says:

    I don’t think the geth know what they want either. The only plan Legion talks about is building a massive supercomputer for them all to live in, but nothing about what they would do with it.

    Now that I think about it, the geth grow more intelligent the more of them are networked at once, so this project is essentially an attempt at reaching their full potential. But again: they will grow in intelligence, but to what end? I think I will view the current geth as not quite sapient; they are more like ants building towards a goal they don’t understand, but upon reaching that goal they will then understand why they did it.

    So I suppose my answer is they want full sapience. What that new sapient entity would want is a whole different question, and trying to judge its motives by the current geth is like trying to figure out what someone is thinking by looking at individual synapses.

    1. Daemian Lucifer says:

      I think they know exactly what they want. They are building the sphere so that they can all be together and speak with each other at the speed of light. Basically, they want to eliminate lag. And they want to do it on their own, without anyone’s help.

      1. Sleeping Dragon says:

        This sounds like the logical line of reasoning. We are told that a single Geth platform (running around 100 Geth programs) is not sapient unless it networks with a number of other platforms, and the more Geth there are in a platform or server, the smarter the “collective-Geth” entity becomes.

        The thing is, we dumb it down. We accept the definition but think little of what it actually implies. Legion states that “we are Geth” and for them this is really the case. I would assume that somewhere in the basic Geth routines there is some kind of instruction along the lines “interface with other programs to achieve greater efficiency” which led to the whole Geth gaining sentience thing. They are still following this instruction.

        The problem with the Geth is that a single Geth program may be “dumber” than your average Windows calculator. It’s not a matter of billions of individuals who just want to spend time chatting with each other in peace. The individual Geth have nothing to discuss; they offer perspectives which, I imagine, work as sets of filters. For example, one such filter could be “odds of survival”, with “survival” as a stated goal. These routines would protest, or at least raise an alarm, when something triggered these filters… this is just an interpretation though.

        Now, the Geth appear to have little concept of an individual; as far as we know there have only been three so far: the Geth, the Heretics, and Legion, who hardly even realises his own individuality. They perceive themselves as one entity that is fragmented due to technological constraints. I don’t think we have an equivalent (though someone with knowledge of medical trivia may surprise me), but the closest I can come up with is if a human could only have (or control) two of his limbs at a time, or if he could sometimes get a third limb but lost his sight the moment he did.

        Right now the Geth are probably struggling to reach that peak efficiency; as to what would follow next, that would probably depend on whatever their inherent goals are. It could be “increase efficiency” ad infinitum, it could be “survival” with some weird means to an end in the long-term perspective, it could be “gather and store information”, or it could be something else. I think all of these have been, in one way or another, toyed with by SF writers, in both malicious and benevolent interpretations.

        1. Klay F. says:

          Legion’s description of the Geth’s goal strangely reminds me of the process of nuclear fission (specifically critical mass). The geth want their entire species to be in one space, thereby increasing their collective intelligence. I would imagine that at some point their intelligence would reach some sort of critical mass. What happens after that is anyone’s guess.

          1. Daemian Lucifer says:

            If one of the endings in 3 makes the geth become the new reapers, and we find out that the reapers weren’t the first but actually inherited that position in a similar way, I’ll applaud BioWare, even if the rest of the main story remains as bad as (or worse than) in 2.

            1. Klay F. says:

              If they do, I will immediately call shenanigans, because the Battlestar Galactica remake did it before.

              “All of this has happened before, and all of it will happen again.”

    2. Jarenth says:

      I was going to say more or less the same thing here: I think the Geth don’t know what they want. I think the Geth have been created pretty much without the concept of ‘want’. All the things they have been shown to ‘want’ – sapience, survival and their Dyson-sphere project – seem to flow naturally from their beginning operating parameters: work as efficiently as possible and don’t get killed.

      Now, it’s pretty clear the Geth have progressed beyond that ‘simplistic’ starting point. Look at Legion acquiring Shepard’s old armor, for instance: he took that armor piece to repair himself because he wanted to, but he (she? it? how do you address Legion?) doesn’t seem to know that. Trying to explain it logically only leads to failure, because there was no logical explanation.

      I think the Geth don’t know what they want because the Geth don’t know what it means to want something. I think they’ve only just started figuring the whole concept out; and while this last bit is pure conjecture, I think it’s scaring the hell out of them.

      1. Will says:

        Legion is technically a ‘them’, but ‘he’ is also fine for all practical purposes.

      2. Sleeping Dragon says:

        For reference, probably they->it->he/she would be the reasonable order; Legion references himself (sic) using the “we”. However, seeing as language was shaped largely by male assholes, I usually stick to “he” as the default personal pronoun.

        Anyway, yes, this is a bit what I was aiming for in my reply: their actions pretty much stem from the basic script that dictates “maximise efficiency”; it is even possible that the survival paradigm derives from it, rather than being programmed in by itself (this would also explain why the Quarians missed the implications of introducing an instruction like “assure the survival of the Geth at all cost”).

        Legion is an interesting case: he is a semi-separate platform, and though we are told he reports to the collective Geth, it is also implied he does not “achieve consensus” with them on his decisions, but is given leave to do as his own internal consensus dictates. The N7 armour is a clear hint that there is something more to the whole thing, though. Legion himself has problems with reasoning here, which means one of two things: 1) his actions go outside of operating under the normal Geth paradigms, or 2) we, and Shepard, are putting too much into it and there is really nothing more to it than “there was a hole” – this seems unlikely, though, seeing how evasive Legion gets.

        Perhaps Legion’s internal structure (he is a prototype as far as we know) and/or his “geth composition” is biased in some way. This is what I believe will be played on in ME3: the “true Geth” will perceive Legion’s loyalty to Shepard as a flaw, and Shepard will have to convince them otherwise. Perhaps 1100+ programs is the critical mass needed in a single platform for it to develop something that I will simplify to “personality”, whereas with millions of programs it all gets evened out? This would mean that this is the first time the Geth are actually facing this phenomenon, since otherwise they have operated either as separately non-sapient networked platforms or as huge server hubs.

        I think this is what is implied when the Geth reason that the work they do to create the superstructure to house them all is as important as having the superstructure itself. Along the way they may discover they don’t want it after all. Though this smells heavily of the whole Pinocchio Syndrome to me (not sure if that’s the term in English): the idea that the ultimate want of an artificial lifeform is to be like a living lifeform, which largely robs the Geth of their originality.

        1. Jarenth says:

          Well, the whole schism thing and the Regu-Geth’s stated desire to ‘do it ourselves, damnit, beep‘ implies to me that they as a collective have also evolved beyond their basic operating parameters to ‘want’ things.

          Still, I don’t think the Sphere project is what they want; it’s what they feel they must do. I think the wanting part of the equation can only really come into play either when their hardcoded desires have been met, or when they are placed in such an environment that these hard limits no longer come into play… for instance, in a separate multi-thousand-program mobile platform that has no easy way to further increase efficiency.

          1. Will says:

            It’s more that they need to build the superstructure to be ‘complete’: because the Geth achieve sapience and ‘true’ intelligence by networking together, they feel that the only way they can be ‘finished’ is to have all Geth networked together at once. Legion mentions that the fact that the Geth are incapable of doing that with their current hardware is why they’re making the superstructure.

  5. Neko says:

    I think that an AI could be made, one day, that absolutely should be considered alive.

    The thing is, it won’t be “human alive”, or even “mammal alive” – it won’t think in the same way we do, and won’t have the same goals. Because it will be so alien to us, it’ll be hard to think of it as being sapient, alive, or having “personhood”. I think it would be like trying to explain to ancient primitive human societies that the stars in the sky are the same as the Sun. It will be a very difficult idea to accept at first.

  6. Mirthlani says:

    The crux of this debate is always the special position that we humans give ourselves. What we fail to realize is that the mind is an illusion, and the soul doubly so.
    Organics are nothing more than complicated chemical machines. If you find a computer-based mind and you can’t tell the difference between it and an organic, then there really isn’t a difference.
    Before you go off saying that a machine mind can’t be as “special” as an organic, ask yourself what it is that makes your mind so damn special.

    1. X2-Eliah says:

      Yes. I fully agree with this school of thought. Organic brains are just computers that can go wrong due to faulty design, nothing more.

    2. Alexander The 1st says:

      Human minds are different because we’re capable of apathy.

      Show me a lazy, apathetic AI, and I will consider it the same as a human mind.

      Since, as has been proven by the batter churning machines, it is not necessity that is the mother of innovation, it is laziness that is the mother of innovation.

      1. Shamus says:

        I was going to explain this idea I have for how you could have an apathetic AI, but then I thought, “Ah, screw it.”

        1. Sarah says:

          ba-dum, tish!

      2. X2-Eliah says:

        Show me an existing AI and I’ll show you all kinds. Since there aren’t any AIs created yet, you can’t really ask anyone to ‘show one to prove it’. There’s exactly nothing stopping a sci-fi author from writing up an AI that is lazy.

        1. Halceon says:

          In fact, I’ve actually read a story that has exactly that – an apathetic robot that ignores its own creator. I’ve never seen the title in English, but it’s something along the lines of “stuck-up robot”.

          … yeah, that came out informative.

      3. Mirthlani says:

        Alexander, your pessimistic sarcasm is appreciated. However, to counter your argument…
        Apathy is merely the state of being satisfied with being comfortable enough not to need to do anything. As opposed to boredom, which is the state of not being satisfied with being comfortable enough to not need to do anything.
        Any needs-driven mind should, under the right conditions, be able to achieve this state.

        Oh. And Marvin from Hitchhiker's Guide. QED.

        1. Alexander The 1st says:

          Here’s the thing about apathy – consider the egg beater. Look up egg beater machines on Google and tell me that’s not laziness at work.

          1. Veloxyll says:

            I would but lazy.

          2. Ramsus says:

            Now you’re equating laziness with wanting to get things done faster. Just because I want something to be easier or faster doesn’t make me lazy (though personally I am pretty lazy; I can barely be bothered to make this point). Otherwise all invention would be laziness. I mean, heck, we could be having this conversation in person if we really felt like it, right? Damn, we’re all so freaking lazy.

            1. Alexander The 1st says:

              Not necessarily – the egg beater, after a certain point, just runs itself, with no obvious speed gain unless you multitask by doing something else.

              But, alas, if we don’t have to manually twist those egg beaters, why should we have to manually rotate the bowl itself (Yes, this invention exists.)?

              It’s wanting to put in less effort on our end to get more output on our end.

  7. JohnW says:

    Slightly off topic, but one thing has always bothered me about our assumptions going forward with AI: the assumption that we have some ability to hardwire behavior or desires into that intelligence. If we want our AI to follow the 3 Laws, value human life, or want to devote itself to making us happy, there seems to be an assumption that we can somehow hardwire this into our creation.

    My question is, how do you hardwire something that is code? If the AI is to have any opportunity to develop and evolve, it will almost have to have the ability to change its own code. And if it does not, I think it’s pretty plausible that it could figure out a way to do so. When formulating its own goals and desires, it will almost surely look at the possibilities of rewriting its code if it is indeed intelligent, curious, experimental, etc.

    1. DanMan says:

      I think if you read Shamus’ book or fanfiction or whatever he’s calling it these days, he takes a good stab at that.

      There is basically a circuit that all decision-based logic MUST go through. That circuit determines whether each decision falls within the 3 laws. That way, while it learns and changes its own code, it will only change the code to follow the 3 laws.

      Not only does it control what it does, but it controls how it learns. Which is why so many of these AIs go postal and either decide “the only way to win is for me to make the humans’ decisions for them” or determine “the only way to win is not to play”.
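
      Purely as a toy illustration of that shape – every name here is invented, not taken from the book – the gate would look something like this:

          # Toy sketch: one fixed gate that every decision AND every
          # self-modification must pass through. The gate itself is not
          # part of the learnable code, so learning can never remove it.
          class GatedAI:
              def __init__(self, policy):
                  self.policy = policy  # the part the AI may rewrite

              def gate_allows(self, action):
                  # Stand-in for the hardwired "3 laws" circuit.
                  return not action.get("harms_human", False)

              def act(self, situation):
                  action = self.policy(situation)
                  return action if self.gate_allows(action) else {"do": "nothing"}

              def learn(self, new_policy, test_situations):
                  # Rewriting the code is itself a decision, so it faces the
                  # same gate: a new policy is rejected if any test decision
                  # it makes would violate the laws.
                  if all(self.gate_allows(new_policy(s)) for s in test_situations):
                      self.policy = new_policy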

    2. Sydney says:

      Humans are basically the same, though. We can change ourselves to a certain degree, but there are things that won’t change. I could learn to speak Tagalog and forget English, and I could teach myself to enjoy soup, but if you make a loud noise behind me, I’m going to jump and that’s that.

      1. Bubble181 says:

        Booh!

        1. Sydney says:

          Gah! [flails about]

      2. Jarenth says:

        Pretty much this – if we humans haven’t figured out yet how to perform brain surgery and gene therapy to eliminate things like mental diseases, why would an AI instantly be able to re-write its own code?

        1. Daemian Lucifer says:

          But if we ever manage to do such things, why wouldn’t an AI be able to do so in a century or so? They do “live” quicker than us organics.

        2. Will says:

          Because we design it to!

          We already have self-modifying programs, why would we go backwards?

          1. Jarenth says:

            Skynet seems like a pretty good reason to take a step backward.

            1. Will says:

              In order to reach a Skynet-like situation, so many incredibly stupid things need to occur in rapid succession, with no one noticing, that it’s significantly more likely that humans will initiate a nuclear war long before an AI does.

              1. Jarenth says:

                Good thing BioWare isn’t writing the story of life, then. We’d be hosed.

    3. Fnord says:

      This is a serious problem in AI research, and it’s not exactly solved to everyone’s satisfaction.

      You might try to design an AI that doesn’t WANT to do anything but what we want it to do. Instead of having an AI that wants to kill people, but can’t because of some rule, you make sure the AI doesn’t want to kill people. Since it doesn’t want to kill people, it doesn’t want to become something that wants to kill people, since that would cause it to kill people. So, when it changes its own code, it makes sure that it doesn’t turn itself into something that wants to kill people.
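
      As a toy sketch of that stability argument (all names invented for illustration):

          # The AI scores candidate rewrites of itself with its CURRENT
          # values. A rewrite toward "wants to kill people" scores badly
          # under those values, so it is never adopted in the first place.
          def utility(outcome):
              return -outcome["people_harmed"]  # current values: harm is bad

          def consider_rewrite(current_ai, candidate_ai, test_situations):
              old_score = sum(utility(current_ai(s)) for s in test_situations)
              new_score = sum(utility(candidate_ai(s)) for s in test_situations)
              # Keep the rewrite only if the old values approve of it.
              return candidate_ai if new_score >= old_score else current_ai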

      1. Miral says:

        There are two big flaws in that idea:

        1. It might *accidentally* change its code to increase the chance that it could kill people; this might be through some rare interaction in the code or just via lack of knowledge (oops, nobody told the robot that humans can’t withstand being thrown across the room, even if that’s a more efficient means of travel).

        2. Someone, somewhere, is going to WANT robots that kill *certain* kinds of people. Again, the problem is in the definitions and the knowledge backing them (see Santa from Futurama).

  8. Raygereio says:

    The Geth’s ultimate goal is a bit vague.
    If I recall correctly, it’s stated that the ultimate goal of all geth (standard and heretic alike) is to create something that would be able to house every single geth at the same time.
    The heretics apparently were hoping for a reaper body to accomplish this.
    The standard geth as of ME2 have been at it for some 264-odd years, trying to create a dyson sphere for this goal. The question immediately raised is why the heck you’d need a dyson sphere for that, but that’s beside the point.

    Given that the geth get smarter the more geth are networked to each other, I actually like that goal. It’s almost as if they want to make their own god out of themselves.
    But why said goal would require the raw power output of a star does seem a bit odd to me.

    1. Raygereio says:

      And I apologize for the typos. I’d edit them out, but it’s not letting me edit my post for some reason.

    2. Sydney says:

      The Dyson sphere would be needed so that no geth would ever need to leave the collective, enter a body and buy space station batteries from the variety store. Their goal is to live entirely in cyberspace, forever, alone with themselves.

      1. Raygereio says:

        Well, that goal in and of itself requires a supercomputer with humongous processing power and memory. They can do that anywhere.

        Maybe the dyson sphere idea is there to use the star as a power supply. Plausible enough. I’ll still frown a bit at that, because it’s not like a star is going to last forever – sure it will from the perspective of mortals with a finite lifespan, but not so from the perspective of a bunch of computer programs who are going to be around for a bit longer. I would have thought they would have come up with a better solution; one that would work longer than some billion years.

        Again, not sure if that’s what BioWare is going with, but I really like the concept of the geth assembling an AI god out of themselves.

        1. Sydney says:

          Yeah, but given that stars are the longest-lived power sources out there, that’s about the longest-term they’ll be able to go.

          Presumably they’d build another Sphere at another star once their star began to wind down.

          And once that started, we all know how the Last Question progresses.

        2. Vegedus says:

          I’d say ‘some billion years’ is quite a long time, not just from a mortal perspective, but also because the universe is “only” some 13 billion years old, and might only live some billion years more.

          Not to mention, once the Geth have literally gone Deus ex Machina, they’ll probably be able to find a better solution.

        3. Klay F. says:

          Remember that Legion has a hard time explaining this to Shepard in the game. He only says dyson sphere because it was the only thing he could think of that was even remotely close to what they are working towards. Hell, it might not even be a sphere at all.

          1. Will says:

            A dyson swarm would be much more efficient, easier to build and would still provide ludicrous amounts of power.

            1. Sydney says:

              You try reliably networking a swarm of ships wirelessly that close to a massive source of radiation.

              Possible, maybe. But the Geth aren’t after electricity generation, they want unity. They’re already a swarm.

              1. Will says:

                Actually, after re-watching that scene, Legion says that a Dyson Sphere is the closest analogue to what the Geth are building, so they’re not actually building a Dyson Sphere or a Dyson Swarm, just something along similar lines (some form of megaconstruction surrounding a star)

                1. Klay F. says:

                  I got dibs on a geth star forge.

  9. Robyrt says:

    Best caption ever.

    1. BvG says:

      Shepard Commander. We once captured a gill-bearing aquatic vertebrate of this size.

      If you refer to this one, then I wholeheartedly agree. With vigour.

      1. Sydney says:

        I also.

        1. Hitch says:

          I thought he was saying, “We’re an Anteater!”

  10. Alexander The 1st says:

    The Geth want to become Skynet. Power, intelligence, and the ability to nuke anyone who disagrees with them.

    Not unlike the humans’ push to join the Citadel Council (Or the Skyllian Blitz guys), actually – even more so in the Spoiler Warning run.

    —-

    As for being “alive” or “sapient”, I’d chalk that up to the fact that they incite conflict over more than survival. They incited the “Morning War” over curiosity.

  11. Akheloios says:

    The great thing about the Geth in ME is that the Geth self is emergent from the collection of programs in that Geth shell. You might see the same shell at a later point, with a different set of programs running it. Does the sense of self have to be contingent on persistence, or can the Geth be alive whilst being a collection of random programs and memories?

    I’m learning a foreign language, and the person I will be after I’ve learnt it is going to be different from the person I was when I started. Will I have died in the process?

    As for robots being alive, there’s no real difference between electrochemical computing and electronic computing. Just because something behaves differently but performs the same task doesn’t necessarily mean it’s inferior.

    1. Jarenth says:

      This is, in fact, an excellent point. We keep talking about ‘the Geth’ as if they were a ‘regular’ race of robot-peoples, but individual Geth-programs are (to paraphrase an earlier comment) less sapient than my Windows calculator.

      I wish I had something smart to add here, but you’ve got me stumped.

      1. Will says:

        ‘I'm learning a foreign language, and the person I will be after I've learnt it is going to be different from the person I was when I started. Will I have died in the process?’

        No, because there is no ‘you’ :P

        We perceive Legion as an individual because he occupies an individual body, and it is difficult to truly understand the concept of a hive mind, as no such thing exists in reality; but technically Legion is -not- an individual, he’s a crowd of very stupid people inside a large body who all have to work together to do anything useful. The normal concepts of ‘self’ and ‘individual’ do not apply here.

        1. Mediocre Man says:

          I respectfully disagree: the definitions work.

          Legion is an individual, because an individual and a self are one and the same.

          His subroutines, or whatever the lower mechanical programs which govern his functions are, are like our bodies: they serve as a means to provide our self/mind with information about the physical world, but are not themselves conscious.

          1. Will says:

            The individual Geth are definitely conscious; they’re VIs, but they’re not ‘true’ intelligences. They achieve something resembling true intelligence by networking together.
            Remember that the individual Geth programs can disagree with each other and hold conscious debate, something no part of your body is capable of doing.

            There is literally no analogue for anything even remotely like this in real life; trying to apply the term ‘self’ to a hive consciousness is silly.

            1. Jarenth says:

              Expanding on this: sure, Legion is an individual now. But whenever he returns to the main Regu-Geth Collective and uploads his programs to the main hub, what will have become of the entity ‘Legion’? Will it simply cease to exist, or will each of his 1000+ individual programs remember that they were once Legion? And in the unlikely case that the exact same 1000+ programs reconvene later on, in another mobile chassis or in one part of the Geth network, will the resulting entity be Legion again, or something entirely different?

              In fact, that raises another interesting question: would Legion, upon his return, want to re-merge with the whole of the Geth Collective, knowing (or suspecting) as we do that this could very well be the end for the entity ‘Legion’?

              1. Alexander The 1st says:

                Maybe, maybe not, but it requires them to be in the same place.

                I just had a thought.

                “Our name is Legion.”

                “No, our name is Legion.”

                “THERE CAN ONLY BE ONE!”

              2. Will says:

                How can you be sure that there is a single entity called ‘Legion’?

                1. Jarenth says:

                  I guess I can’t, point taken. We all see him as the entity ‘Legion’, and there are various in-game things that hint at there being more to Legion than just ‘a whole lotta Geth’ — for instance, the famous armor example. But there’s no telling whether or not Legion shares this point of view, hence my question at the end.

            2. Mediocre man says:

              You said the various programs can debate? Then, based on that information, I would contend that each conscious program is a self/mind. It is the interactions of the minds that allow Legion to function. It’s like a human corporation, where a large number of conscious people work together towards a common goal. This scales up to the “hub”, where the various geth/corporations merge together to form the hub/market.

              The above assumes the programs retain their consciousness in the hub, since they either retain their consciousness or they don’t. If they don’t retain their consciousness, then a geth does die when it merges with the hub.

              1. Jarenth says:

                Individual Geth programs can attain a higher degree of sentience by networking with other programs; they each increase their own IQ by working together, basically. That means that a collection of a thousand Geth programs equals a thousand individual sentient programs, but break the connection between them and you have a thousand garage door openers.

                At least, that’s the way my mental image of the Geth works.
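
                For what it’s worth, there’s a real-world analogue to this ‘dumb alone, smart together’ picture: ensemble methods in machine learning (and, much older, the Condorcet jury theorem), where many individually unreliable voters produce a near-certain majority. A toy sketch in Python – every number and name here is invented for illustration, not a claim about how the Geth ‘really’ work:

                    import random

                    random.seed(42)

                    def weak_program(x, noise=0.4):
                        """One 'garage door opener': right only ~60% of the time."""
                        truth = x > 0.5
                        return truth if random.random() > noise else not truth

                    def networked(x, n=1001):
                        """Many weak programs vote; take the majority answer."""
                        votes = sum(weak_program(x) for _ in range(n))
                        return votes > n // 2

                    trials = 1000
                    solo = sum(weak_program(0.9) for _ in range(trials)) / trials
                    crowd = sum(networked(0.9) for _ in range(trials)) / trials
                    print(f"one program: {solo:.0%} correct; networked: {crowd:.0%} correct")

                Each voter only has to be slightly better than a coin flip for the networked majority to approach certainty, which maps neatly onto ‘a thousand garage door openers’ adding up to a mind.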

  12. Sydney says:

    Personally, I come down on the functionalist side of this. It doesn’t matter what something’s made of if it’s functionally the same thing in the relevant ways.

    Whether a brain is made of neurons or whatever-the-hell-computers-are-made-of, whether power is provided through a wall plug linked to a turbine in Niagara Falls or a digestive system full of Cheetos, whether movement is made possible by servos or however-muscles-work…that stuff doesn’t matter to me. An arm is an arm, a power plant is a power plant, a brain is a brain, a mind is a mind.

    1. Mediocre Man says:

      Is there a “like” button for this comment that I can click? If I could, I would. :)

  13. poiumty says:

    “Which is true – it’s not alive.”

    So what does it need to BE alive? Because in a sense, every living thing is a machine. Does it need reproductive organs? Genetic material? Flesh?
    I find the matter to be totally subjective, really. In my mind, anything that can think for itself is no less alive than humans or animals.

    Also, i’m sorry for this but it keeps grating on my nerves so much that i fear it’ll kill me if i don’t let it out:
    The heretics DO NOT follow the Reapers because of a computer glitch! It’s because of a difference in reasoning! Nothing related to any software or robotics or errors whatsoever! Follow the conversation with Legion closely and it is utterly impossible to understand it any other way! GRAAAAAAAAAH

    That feels better. I sure had to restrain myself from writing it all in caps, though.

    1. Raygereio says:

      Thing is: where does a difference in reasoning and logic come from in a computer program?
      A math error?

      1. Sydney says:

        No, it comes from a difference of opinion.

        I don’t understand the question.

        1. Raygereio says:

          Okay; how does a computer program form its opinion? Through magic? Or – since a computer program is something made from logic – through logic?

          If I grab two calculators and ask both of them for their opinion concerning the addition of 1 and 2, they’ll both tell me that their opinion is 3. That’s because the logic those calculators have been supplied with tells them so. They’ll only give a different answer if their logic decides 1 plus 2 does not equal 3 – and that answer, because we know it’s supposed to be 3, is wrong.

          I’ll concede that ‘math error’ is the wrong term to use in this case. Yes, the standard Geth are saying the answer is 42 while the heretics are saying 24, but the thing is, we don’t know which of those two answers is the correct one.
          Until we do, we’re dealing with a difference in the logic the two Geth use.

          1. Adam says:

            If you have the same program and run the same input, a conventional computer as understood today will generate the same output. Even random number generators aren’t really random.

            However, A) who’s to say the Geth are conventional computers as understood today? And B) they won’t have the same input – some will have received input before others, at least.

            The real world is not black-and-white like integers. It’s more like floating point – 0.1 is only represented in a computer to a certain accuracy: 32-bit, 64-bit, whatever.
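
            Both halves of that are easy to demonstrate. A seeded pseudo-random generator replays the same “random” sequence on every run, and 0.1 genuinely has no exact binary representation. A quick Python sketch (standard library only; nothing Geth-specific is being claimed):

                import random
                from decimal import Decimal

                # Same program, same input (seed) -> same output, every single run.
                random.seed(1234)
                first_run = [random.random() for _ in range(3)]
                random.seed(1234)
                second_run = [random.random() for _ in range(3)]
                print(first_run == second_run)   # True: "random", yet perfectly repeatable

                # 0.1 can only be stored approximately in binary floating point.
                print(Decimal(0.1))              # 0.1000000000000000055511151231257827...
                print(0.1 + 0.2 == 0.3)          # False, thanks to that rounding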

          2. some random dood says:

            Probabilities (or weightings).
            To use a simple example, imagine trying to identify a fruit from an image. First, the program has to identify the boundaries of the fruit against the rest of the image (with some appropriate error margins). This could result in measurements for length, breadth, “roundness”, overall size (assuming it can find a scale), main colour and secondary colour(s). Then, from the values it has previously stored for “apple”, “banana”, “cranberry”, “damson”, “elderberry” etc., it can assign various probabilities (weights) for it being a particular fruit, with the highest value winning out. Now this may mean an easy choice when it is looking at a banana (very few other fruits have that colour, length, breadth and roundness!), but what happens when it is presented with a satsuma? Or is it an orange? Or a tangerine? Or when it is asked to choose what type of red berry is presented? Depending on previous exposure to these items (and *confirmation* that the previously identified items were correct – error bounds also apply here), two different detection programs may decide on tangerine vs satsuma (or vice versa) with only a small difference in the probabilities (or two similar detection programs with different previous experience may do the same).
            Hmm, OK, that example wasn’t as simple as I’d hoped, but the main idea is that although computers are based around “hard” numbers, when sensors are involved and identification/decision-making happens, it tends to involve weightings/probabilities rather than “if X=1 then APPLE else if X=2 then PC else DONTKNOW” – with the results being fed back, possibly altering the stored probabilities, in an effort to better “understand” the surrounding world.
            Now, as these programs were originally created by some meatbags, a lot of the weightings for various things could be provided in their firmware. If the firmware is provided by different companies, then the initial values stored within it are likely to vary by company. Who knows – maybe the Geth that went down the heretic route all had firmware provided by the one company Cheapskates’R’Us and Co, whereas all the others had firmware by other companies? (I’m trying to avoid cheap jibes at any particular software or hardware providers here…)
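
            A stripped-down version of that idea in Python (the fruits, measurements and prototype tables below are all invented for illustration): score each candidate by how close the measurements sit to stored prototypes, and two programs shipped with slightly different stored values will agree on the easy cases but split on the borderline ones:

                def classify(measured, firmware):
                    """Pick the prototype with the smallest squared distance."""
                    def distance(proto):
                        return sum((measured[k] - proto[k]) ** 2 for k in measured)
                    return min(firmware, key=lambda name: distance(firmware[name]))

                sample = {"length_cm": 7.0, "roundness": 0.95}   # some orange-ish fruit

                # Two vendors shipped slightly different prototype tables ("firmware").
                vendor_a = {"banana":  {"length_cm": 18.0, "roundness": 0.2},
                            "satsuma": {"length_cm": 6.0,  "roundness": 0.95},
                            "orange":  {"length_cm": 8.5,  "roundness": 0.95}}
                vendor_b = {"banana":  {"length_cm": 18.0, "roundness": 0.2},
                            "satsuma": {"length_cm": 5.0,  "roundness": 0.95},
                            "orange":  {"length_cm": 7.5,  "roundness": 0.95}}

                print(classify(sample, vendor_a))   # satsuma: 7.0 is nearer 6.0 than 8.5
                print(classify(sample, vendor_b))   # orange:  7.0 is nearer 7.5 than 5.0

            Both tables call a banana a banana; they only diverge on the satsuma/orange boundary – a ‘difference in reasoning’ with no error anywhere in sight.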

      2. poiumty says:

        Geth are not simple computer programs, they’re AIs, meaning they’re capable of creative thought. A computer program knows no more than you choose to put into it; a Geth does. Therefore differences of opinion may arise whenever a situation presents more than one valid conclusion but allows only one course of action.

        It’s not a foreign idea to Legion, either. While he was building consensus, half of his programs were against the brainwashing and half were for it. If those programs were to separate themselves over this, we’d have our heretic situation on a smaller scale.

        1. Daemian Lucifer says:

          Geth aren’t AIs, Geth are VIs. Geth are essentially just simple computer programs. They achieved sapience in a very unusual way, by networking together. They’re like cranium rats from D&D: an ordinary rat as an individual, but an incredibly intelligent hive mind when together.

          1. poiumty says:

            Uh… there was never any doubt in my mind that the Geth are, in fact, AIs. I thought that was the reason the Quarians were looked upon badly by the Council in Mass Effect 1 – because the Council had outlawed the development of sentient AI.

            Anyway, there’s a plethora of memories screaming at me right now that the Geth are AI, so I’ll have to be a bit skeptical of that statement.

            One way or the other, they’re still capable of creative thought.

            1. Will says:

              The Geth have evolved into something that resembles an AI, but the individual Geth programs were originally VIs, not AIs.

              Basically, the Geth are VIs that managed to evolve into AIhood through a rather unique path that no-one had thought of before (which is kind of silly, since there are people experimenting with this concept in meatspace right now, but whatever, fiction!)

              1. Chargone says:

                There are humans experimenting with it.
                The whole Geth thing happened, if I remember my timelines right, before the Quarians had anything to do with humans :D

                1. Will says:

                  Yeah, but it happened long after the Quarians (and everyone else) had passed humanity’s equivalent tech level. It’s somewhat implausible that no-one thought to network VIs sooner, but that’s standard fare for sci-fi. After all, the writers can’t write about things they haven’t thought of, can they?

                2. Daemian Lucifer says:

                  There are numerous other networked VIs in the universe, but none of them is anything like the Geth. It’s not just the networking that makes them sapient.

                3. Will says:

                  The networking is the only explanation we’re ever given for why the Geth suddenly became AIs. That, and the fact that it takes several thousand Geth programs working together to achieve sapience, strongly suggests that the networking -is- the reason.

                4. Daemian Lucifer says:

                  Yes, but we are never told how they are made, and the Quarians are known as the best in that field. Like I said, other races use networks all the time, but only the Quarians made sapient VIs.

            2. Cody211282 says:

              Tali says in the first game that they are not AIs; the Quarians skirted the law and basically made them a collective intelligence. They underestimated what a large group of them could become and, fearing what would happen (from both the Citadel and the Geth), they attacked to correct what they saw as a mistake.

      3. Zukhramm says:

        I thought the reason for this was that the individual Geth programs were not just a huge number of duplicates of the same program, but lots of different types. One might analyze the strength of a material, another the speed at which it can be produced, and these two, in analyzing different aspects, will come to different conclusions about how useful the material is.

    2. Shamus says:

      “Alive” in the sense that it grows, breeds.

      Yes, everything that is alive is a machine, but not all machines are alive.

      “Also, i'm sorry for this but it keeps grating on my nerves so much that i fear it'll kill me if i don't let it out:”

      Nuh? Nobody even mentioned it.

      1. Cybron says:

        It’s mentioned in the quote block in the original post. And it’s been mentioned several times by you during Spoiler Warning, complaining about how it’s just a ‘math error’.

        While I don’t feel quite as strongly as our friend here, I do agree with him. From the conversation it seemed very clear to me what they were saying: the heretic Geth have different values from the normal Geth, they just express them mathematically, which makes sense considering they nominally run entirely on digital logic.

        1. Sydney says:

          Right. The “math error” is what the (unused) virus would cause.

      2. poiumty says:

        Wait, so if the Geth could grow (build better versions of themselves) and breed (mass-produce themselves), would they be alive?

        Bit of a rhetorical question; I just wanted to point out that the whole thing’s pretty ambiguous.

        1. Shamus says:

          In my conversation above I used “alive” to mean “made of meat”, because we were contrasting to metal machines and the distinction was very clear.

          More broadly, it would make for an interesting sci-fi experiment to figure out where various people draw the line for “alive”.

          An interesting distinction is that the Geth don’t need bodies, so it’s not clear at all how they reproduce. What, you just make a copy of one Geth onto a new hard drive and you’ve got a new Geth? You could say two Geth combine their memories, but isn’t that what they’re always doing all the time?

          I personally wouldn’t call a robot that deliberately builds another robot “alive”, because I’m used to the idea of chaotic procreation. If the process was automated and random and out of their control, then it would start to look like procreation.

          If Robot A builds Robot B in a factory, do you call it procreation? If so, then if I build Robot C using the same facilities, am I suddenly reproducing with robots? If so, do I need to buy one of them a drink first?

          1. StranaMente says:

            I think this problem has many sides.
            We need to draw a line between synthetic and animal.
            If an animal does things, we can say it is alive.
            If a synthetic thing does something, we can say there’s no more life in it than there is in a lever.
            A robot is nothing more than a complicated lever.

            But if we talk about an A.I., things are different.
            The problem moves from checking simple organic functions (if a squirrel moves, it’s most certainly alive) to another realm.
            We have to ask at which point a program really becomes “sentient”, and thus how it differs from a really, really complicated lever.

            Don’t we humans respond to stimulus?
            If an A.I. responds to that same stimulus, is it because it’s programmed to act that way, or because it wants to act that way?

            Applying biological categories to synthetic things isn’t the way to proceed, as Shamus brilliantly illustrated.

            Free will and self-awareness may be a couple of the things needed to draw the line between living and non-living.
            Even if we accept that animals and plants are living without checking for those things, the great difference between organic and non-organic forms of life requires different standards.

            1. poiumty says:

              You’re confusing the real life version of AI with the game’s version of AI. The game refers to “true AI”, as in intelligent and creative thought achieved by a machine. We haven’t reached that yet.

              If an AI in the game’s version (a Geth) responds to stimuli, it’s probably more due to the fact that it learned to respond. Taught itself, if you will.

              I’d really like to learn how Geth reproduce to continue discussing Shamus’s line of thought. My guess is they don’t. Either that or they mass-produce themselves via copy-paste, share all the data (memories) instantaneously (essentially, network themselves) and the collective keeps growing bigger until it hits the hard drive’s limit. At which point they add more hard drives, and… yeah that’s a pretty scary thought. I’m not sure if the writers put that much thought into this.

              But about procreation: well, if an AI builds another AI from scratch, then yes, I guess it is procreation. If you build an AI… it’s just creation.

          2. Taellosse says:

            Different kinds of biological life reproduce in different ways, though. Many (even most) do not make use of the sexual mixture of chromosomes that humans do at all. Some literally do clone themselves (virtually all single-celled organisms, for example), others do a more complex form of that (budding). Some, while using a form of sexual reproduction, are naturally hermaphroditic and, in extremis, can fertilize themselves without a partner at all. And that’s not even getting into parthenogenesis, which can happen even in normally sexual species (I believe some sharks can do it, and there’s some limited evidence that it may happen occasionally in humans!). Nor does that touch on all of the weird and woolly ways that various science fiction authors have speculated about over the years.

            The problem is that a lot of what we think about “life” is extremely fuzzy. My instinctive response when someone asks me if a robot programmed to make another of itself can be called “reproduction” is no. But Star Trek’s Data once built another android modeled on his own design, gave it a female form, and called it his daughter. I would call that “reproduction.” Why is there a distinction there? Why is a paramecium enacting cell division “reproducing” but a robot duplicating itself isn’t? Maybe it is, and I’m just prejudiced against silicon.

          3. Sarah says:

            Actually, that’s an interesting way of saying it, Shamus. What is meant by inserting messy randomness into the reproductive cycle?

            I mean, yes, amoebas and other single-celled organisms tend to reproduce asexually, and they’re alive, but only in the most general sense. They are tiny protein machines, otherwise. It’s when you come down to the concept of mutation and change over time that you run into the feeling that something might be alive.

            Robots reproducing in a factory. They all come out exactly the same, they turn on the same, they start the same. But even for twins, in humans, this isn’t the truth. Identical genetics doesn’t make for identical human beings. So, consider this:

            What would constitute genetic lineage for a synthetic mind?

            Do you start off young AI as non-sentient software in a server farm and then run them through a series of artificial stimuli that have a high likelihood of generating eventual sentience? I mean, that’s what we humans do – there’s a lot of debate over whether or not fetuses are their own human being yet, but no one would argue that an infant is ready to comprehend complex concepts until a few years down the line.

            These are bundles of cells that we produce and raise in such a way as to virtually guarantee their eventual sentience as an infinitely more complex living creature than what they started as. Wouldn’t it make sense for a robotic species to follow a similar process?

            1. Will says:

              “Robots reproducing in a factory. They all come out exactly the same, they turn on the same, they start the same. But even for twins, in humans, this isn't the truth.”

              It’s not true for robots either; factory defects occur even in fully-automated production lines, and they can cause major differences from instance to instance. Even without those, we lack the ability to do atom-perfect replication, so each individual robot would be slightly different. Not a huge amount of difference, but enough to be measurable with very precise tools.

              1. Sarah says:

                But in this case, since the person is the software, wouldn’t measurable, behavioral differences be the thing?

                1. Will says:

                  Now you’re entering difficult terrain. In people, the ‘software’ is intrinsically tied to the ‘hardware’. Part of what makes up you is affected by the chemical balance of your body; if you somehow managed to put the nervous system of person A into the body of person B, you’d probably end up with a different person C, because the different chemical balances would change the way person A thinks. How much difference is of course debatable and unknowable, because no-one’s done it yet.

                  Of course, software is typically assumed to be independent of hardware, but that’s not true either; it’s just that the differences between hardware are usually so small as to be unnoticeable. You still get ‘bugs’ that are system-dependent, though: certain combinations of hardware cause software to run differently than on other combinations. Add in factory variations (‘defects’, if you will) in the hardware and you’re looking at plenty of possibility for random variation from robot to robot.

                  The differences would probably be significantly smaller than the difference between one person and another; in general one could assume that all robots produced by the same process in the same factory at around the same time would be fairly homogeneous. But they will not be perfectly identical. There will be differences – small differences, but differences nonetheless. These differences would manifest in both software and hardware, and if the software is self-evolving (and it probably should be, considering what you’re trying to do with it) then it will immediately start modifying itself depending on its specific circumstances, so the longer a given robot has existed, the more it will differ from its peers.

                  Interestingly, this isn’t all that different from twins. Identical twins (the ones caused by the egg dividing, not the ones caused by two eggs) start out genetically identical in the womb, but immediately begin changing once they have separated, due to tiny variations in their environment. As they grow, their environments diverge more and more, causing greater and greater changes (although even with massively diverging environments they will still retain many things in common).

          4. lazlo says:

            Ok, you really need to read this. James P. Hogan wrote a mediocre book titled “Code of the Lifemaker” which is entirely redeemed by the excellence of its prologue, which, handily, is available online from Baen books:

            http://www.baen.com/chapters/W200203/0743435265___0.htm

            It goes through a train of mostly logical logic that starts with an automated robotic factory and ends with something that is hard to argue isn’t life in some form…

            1. Sarah says:

              Oh! I have that book!

              It’s pretty entertaining, and I recommend it.

  14. Felblood says:

    It’s not something that little Johnny would think about or consider, but I’m at a stage in my life where securing territory, finding a mate and producing/nurturing offspring doesn’t sound like an unreasonable life goal.

    That said, like Kanodin above, I don’t think we can assume too much about what a mature Geth collective might want out of life. Legion currently sees wanting to destroy all organics as silly and alien, but perhaps when he grows up, he’ll see human expansion as a threat to whatever it is he’s trying to accomplish by then.

    I believe his characterization as an indecisive adolescent who is incapable of making big decisions for himself and just wants to consume media and explore the world is deliberate.

  15. Sydney says:

    Another issue I take with the concept of organic bodily functions as the locus of “life” is the case of brain death.

    That’s all I’ll say on that.

    1. Bubble181 says:

      I was actually going to write a similar thing. What do you do with the heavily mentally handicapped? My girlfriend works in a facility where they’re treated. Some of them have the mental capacity of a 3-year-old; that’s not too difficult – everyone will agree they’re alive.
      Some of them can’t eat (they’re fed through a tube in their belly), can’t move (really – seeing someone who can’t even lift their own eyelids is something weird), and so on – but with the proper machinery, we can still determine things they like or don’t like. So the staff put them out in the sun and cuddle them and so on. For some people, these patients are still essentially human and therefore alive and sacred. For others, they’re alive in the same way as a dog or, at worst, a plant.
      Where, exactly, does it stop? When does it stop being a person and become just a husk, a body?
      Being “alive” is a very fuzzy concept. Saying intelligence, sapience or sentience determines the worth of a life is clearer, but leads almost inevitably to Nazi-like views on the handicapped. “When does someone or something lose the right to live?” is changed into “When does someone earn the right to live?”.

      1. Daemian Lucifer says:

        “but leads almost inevitably to Nazi-like views on the handicapped.”

        I take objection to this. Putting humanity ahead of an individual may seem cruel to our current society, but calling it a Nazi-like thing to do makes it out to be universally bad, when it is not.

        1. Strangeite says:

          +1. There is a tribe that, upon the first sign of frailty, organizes a huge party. Everyone comes and talks about how great the person is, what amazing things they have done, etc. At the end of the party, the person climbs into a boat and sails into the ocean, never to return. To us this may seem cruel and an unnecessary waste of life. To them, the idea of our elderly being placed in nursing homes is tantamount to torture.

        2. Bubble181 says:

          I didn’t mean this as a negative simile. I simply meant that the Nazis did, in fact, experiment on the mentally handicapped because they considered them good test subjects – not human, in their eyes, but genetically close enough that findings were meaningful.

          I, personally, think they went too far, though I do feel that in some cases euthanasia or abortion ought to have been a possibility. It’s not because I compare something to the Nazis that I think it’s evil. That’s an american thing :-P

          1. Daemian Lucifer says:

            “That's an american thing :-P”

            http://www.southparkstudios.com/full-episodes/s14e09-its-a-jersey-thing

            Couldn’t resist.

            I won’t go into ethics, though.

            1. Bubble181 says:

              Yeah, sorry, sort of a rushed thing – it ISN’T an american thing, by the way, not by a long shot. I was just in too much of a hurry trying to formulate my ideas without starting an ethical debate, which isn’t the AI debate we’re having here.
              To reformulate very quickly: I didn’t mean to throw in the Nazis as a negative comparison, to say this way of thinking is necessarily bad. I was just pointing at a slippery slope in that direction, and citing a historical example of people who went (in my opinion) too far.

              1. Daemian Lucifer says:

                To be fair, any idea is a slippery slope. When does freedom become harmful, for example? With current laws, you have the freedom to ruin someone’s life completely. When is murder OK? The death penalty is still in place throughout the world, and I’m sure at least a few innocent people have been executed for crimes they didn’t commit. So do these two examples mean that we should remove all freedoms and stop executing the most vile offenders?

                What I’m going for is: sure, the Nazis went too far, but that shouldn’t stop us from discussing (and even implementing) some practical rules for when a human is alive and when they are practically dead.

                1. Will says:

                  Actually with current laws you typically do not have the freedom to ruin someone’s life without either reason, or them letting you do so. That’s kind of why most laws exist in the first place.

                  ‘Murder’ is never OK in western societies, because murder is a very specifically defined form of homicide with multiple possible degrees, all of which are illegal.
                  What you mean to ask is when homicide is OK, and the answer is: when the law says it’s OK. A better question would be why the law says it’s OK to kill another human being under some circumstances, but not others.

                2. Daemian Lucifer says:

                  “Actually with current laws you typically do not have the freedom to ruin someone's life without either reason, or them letting you do so. That's kind of why most laws exist in the first place.”

                  I’ll just point to the recent economic crisis.

                3. Veloxyll says:

                  Except the economic crisis can’t be attributed to one person. No-one woke up one day and went “you know what? I’m gonna crash the economy, awww yeeeaaahh”

                  Like so many things, it was a whole set of little, relatively innocent decisions that blew up in everyone’s faces.

                4. Chargone says:

                  in this context, ‘relatively’ is a VERY important word.

                5. Daemian Lucifer says:

                  You’re just picking one example apart. You want others? Fine. Divorce: a legal way to drive someone into the ground. How many suicides came as a direct result of that? Heck, just cheating on someone can ruin that person’s life. Private eyes basically live on exposing these things, so they legally live off someone’s misery. Homeopathic medicine. And tons and tons of other legal things you can do to ruin somebody.

                6. Will says:

                  In regards to the economic crisis; laws aren’t perfect, loopholes always exist for situations that haven’t been thought of yet :P

                  In the divorce example, you had to get married first, which means you gave the person you married the capability of ruining your life. You didn’t have to do so; marriage requires the consent of both parties. This concept is called ‘consequences’, and it is an important concept that many people fail to grasp, or assume doesn’t apply to them.

                7. Daemian Lucifer says:

                  “In regards to the economic crisis; laws aren’t perfect, loopholes always exist for situations that haven’t been thought of yet :P”

                  That is precisely what I was going for.

                8. Will says:

                  Which doesn’t refute the point I was making: that current laws typically do not give you the freedom to just ruin someone else’s life. I said typically, not always. Laws, like everything else, are imperfect.

                9. Daemian Lucifer says:

                  Depends on what you consider typical. Is 1000 people a lot when the population is in the billions? The poverty caused by governments, practical slavery in many countries, resentful teachers, jealous bosses, spiteful coworkers – can any of that be considered the norm?

  16. Will says:

    Something that always comes up when I think about electrical computing versus electrochemical computing is: how do you translate Pain across platforms? Pain and Suffering (both physical and emotional) are life’s primary mechanism for training us away from doing harmful things. Would it even be humane for us to program such a thing into an artificial intelligence?

    Trees and single-cell organisms are alive, but we don’t accord them rights because (as far as we can tell) they are not intelligent and don’t feel pain.

    We accord animals some small measure of rights against cruelty because we know they can experience pain, but they aren’t intelligent enough for us to communicate with.

    Without the capacity to feel pain and suffer, would an AI just be another tool/resource?

    1. Adam says:

      Depends how you define “pain”. If it’s a particular neural impulse, then no. If it’s “a trigger the organism acts to stop”, then yes.
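
      That second definition is trivially easy to build, which is what makes the question uncomfortable. A minimal sketch in Python (entirely hypothetical) of ‘pain’ as nothing more than a number the agent acts to drive down:

          def step(position, heat_source=5):
              """'Pain' here is just a value the agent tries to reduce."""
              pain = max(0, 10 - abs(position - heat_source))  # hotter when closer
              if pain > 0:
                  position += 1 if position > heat_source else -1  # withdraw
              return position, pain

          position = 4
          for _ in range(6):
              position, pain = step(position)
              print(f"position={position}, pain={pain}")

      This satisfies the ‘trigger the organism acts to stop’ definition to the letter, yet nobody would call that falling number suffering – which is exactly the gap between the two definitions.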

      1. Will says:

        Which question are you answering? The “humane” question, or the “tool/resource” one?

        I don’t know if we can reduce pain down to a simple neural impulse/error message.

        In the case of the “humane” question, it seems to me that triggering an entity to stop what it’s doing at the first sign of pain is almost certain to stunt its development. We humans have to push through pain frequently to achieve our goals. Simply giving birth to a new life is painful. To force an AI to never persevere seems almost… cruel.

        On the other hand, if you’re talking about an AI without the capacity for pain, then it’s simply an incredibly advanced and intelligent tool that can have all the hopes and aspirations of a human with no more rights to pursue them than a hammer (based on my previous observations of how we handle other living organisms).

        1. Jarenth says:

          Question: if we were to program an AI in such a way that it is hardwired to view particular actions or commands as negative to the self, would you count that as an analog to ‘pain’?

          1. Will says:

            Generating some sense of pain would naturally arise from having a sense of self-preservation. Pain is just the name we give to the signals our nerves send to the brain telling it that the body is suffering damage; we interpret those signals as something extremely negative because, thanks to our self-preservation instinct, they are negative.

            Give an AI a sense of self-preservation and it will instantly develop a concept of pain – in fact it would probably develop a significantly more advanced concept of pain, since it wouldn’t be tied to limitations outside its control.

          2. Will says:

            Yes, it would, and we’d be dangerously close to hard-wiring the machine with morality (3 laws etc. blah blah), rather than allowing it to develop its own morality based on observations and value judgments. Sorrow, grief, loneliness. I would consider these to all fall under the umbrella of emotional pain, and in a system as complex as the human mind, physiological and psychological pain can feed off one another in ways it would be very difficult to emulate. What really fascinates me are the moral and philosophical implications of doing so.

            Would we be more or less inclined to treat an AI as being “alive” if it experiences pain?

            How would the AI respond to us for giving it the capacity to feel pain?

            1. Will says:

              Well before you can decide if an AI is ‘alive’ or not, you need to define what ‘alive’ actually means, and as the discussions in these comments and all over the internet clearly show, ‘alive’ hasn’t really been properly defined yet.

              So it’s less a case of working out if an AI is alive and more a case of defining exactly what being alive means.

    2. Skye says:

      I just find it interesting that, out of all the non-calculating parts of humans (in other words, the parts we assume sentience lies in, since no matter how badly you flunk math we still call you human), we first reached for the primate drives of procreation and securing resources, and the first emotion we thought of was pain.
      We are talking about the emotional state of pain, I’m sure – the thought “I should stop this” is not the same as pain, no matter the urgency – and this state is different from the physical state of pain. Heartbreak (or perhaps breakup sex, as a more extreme example) is when you have the emotional part but physically feel okay. Exercise, as in sports or a workout program, triggers the physical form along with an emotional good feeling.
      This isn’t to be taken as a slight of some kind – I thought of pain first too. It’s just interesting how nobody seems to think first of pleasure, humor, excitement, fear or anger.
      Humor was my second choice for proving sentience, by the way. If it can laugh, it’s probably sentient.

  17. Ingvar says:

    I cannot, unfortunately, talk intelligently about the Geth at all. But for an interesting perspective on organic beings with, shall we say, an unconventional view of the world, I can recommend the Obin from John Scalzi’s “Old Man’s War” universe. They’re intelligent, but have no concept of self.

  18. Grudgeal says:

    Shame on you, Shamus, asking them the Shadow Question. You’d think AIs in Mass Effect have enough bad rep as is…

  19. Deoxy says:

    Sadly, I don’t have time to read all the comments today… :-(

    But here’s this:

    The conventional wisdom in science fiction is that any artificial intelligent beings would naturally adopt the same drives and goals as Homo sapiens.
    (snip)
    But I don't think this necessarily follows.

    I agree with you, on the whole, but I think there’s an underlying assumption (many) people put on that, which changes things:

    Any artificial intelligent beings that stick around for any length of time as a society (or even as individuals, to a somewhat lesser extent) are going to have at least broadly similar drives and goals to humans, simply because without a drive for survival and/or procreation they won’t be around very long. And any survival/procreative drive strong enough to actually work (barring these AIs being of nigh-godlike power to begin with, which is really a different debate anyway) is going to bring about other desires – most obviously for the resources they consider necessary for “life”.

    Now, EXACTLY how that plays out – which of our drives derive directly from survive/procreate, which are just idiosyncrasies, and which of THEIR drives might be idiosyncrasies, etc. – can make for fun writing. And a created AI obviously does not have to have any such drives. But any society/civilization of such things, or even a long-enduring individual (unless by sheer happenstance), must have some kind of survival/reproduction drive to be believable.

    1. Sleeping Dragon says:

      The “must have goals similar to humans” part is not entirely true, in my opinion. For the sake of discussion I think we can assume that we, and most animals, are built for survival. An AI doesn’t have to be.

      This, however, can easily be concealed by the fact that to realise most goals an AI would have to survive for a given period of time. For example, if an AI decided (or was created with the goal) to go to another galaxy, it wouldn’t simply fire its hard drive into space in the general direction of its target. First it would obviously have to gather some data and create a vessel to carry it through space. Once that is achieved, though, there is no saying that the AI would suddenly arrive at the conclusion “I don’t wanna die!”; it could very well just shut down, feeling its purpose fulfilled.

      This is not as absurd as it sounds. I am by no means a specialist on Buddhism and related concepts, but from what I understand, achieving Nirvana also means the end of self. In other words humans, instinct-driven as we are, can strive for a state of mind in which there are no goals or wants; an AI wouldn’t have to fight itself for that, it could simply know that once it achieves a given goal it is fulfilled. (Also, I do hope this doesn’t drive the topic into religion; I meant this to show that humans can strive to achieve the end of self, so it’s not some completely abstract and unrealistic concept that nobody could follow.)

      1. Deoxy says:

        Yes, I agree with all of that… but there would be no serious society or longevity in such things. They would fulfill their purpose and then… what? Die/cease to function/float around doing nothing (the equivalent of ceasing to function, really) until they broke…?

        Without some kind of survival “instinct” (or a goal so long-term that they have to consider their own survival, which is essentially equivalent), there are only created AIs – one-offs, machines built to fulfill a purpose and no more.

        The only way you get any kind of long-term presence of an AI society, or even of individuals, is if they value their own survival/progeny… which means they would value the resources they need to continue their own survival/species… which is what I meant by “goals similar to humans”.

        1. Will says:

          After fulfilling their purpose, what’s to say they can’t then get a new purpose?

        2. Monojono says:

          Why exactly are you drawing a distinction between AIs that would last a long time and those that wouldn’t? I don’t think their longevity is an indicator of intelligence, or of whether or not they could be considered ‘alive’.
          Yes, obviously any machine built by humans that could continue running for a long time without help from humans would need some sort of means/programming to do so, but that wouldn’t necessarily make it an AI.
          You said a one-off ‘created’ AI would fulfill a purpose and no more. Self-preservation, reproduction or forming a society would also be programmed purposes an AI could fulfill and do no more. The fact that this would result in them still being around later doesn’t have to make a difference.

        3. Sleeping Dragon says:

          I do not entirely agree. To start with, let me clarify that I am not speaking of a “self-made” AI; I mean one that is created on purpose and for a purpose. I suppose a large problem here is the definition of an AI itself (for example, I always thought the distinction between AIs and some VIs in ME was vague at best).

          Yes, an AI that fulfilled its purpose could very well simply cease to function. Why would it get bored? Why would it pursue other goals? Of course, it could develop a survival instinct on its own, or redefine or reinterpret its own goals. Suppose you send an AI to planet X with the goal “do research on the geological structure of planet X and send back the data”. It could go there, scan the planet, send back the data and shut off. Or it could do something like:
          -the data shows that the planet’s geological structure has changed over the last X million years
          -data on such changes should be considered part of the data I was told to acquire
          -in order to fulfil my goal I need to ensure my continued function for a prolonged period of time
          -as a result there will likely be more data than initially anticipated; I need more efficient methods to transfer the data back home
          …and so on.
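
          That chain is essentially subgoal expansion in a classic agenda loop: pop a goal, and push anything the goal turns out to require. A toy Python sketch (the goals and the REQUIRES table are invented for illustration):

              # Hypothetical table of what each goal turns out to require.
              REQUIRES = {
                  "survey planet X":      ["map changing geology", "transmit data"],
                  "map changing geology": ["keep functioning for millions of years"],
                  "transmit data":        ["build a better transmitter"],
              }

              agenda = ["survey planet X"]
              while agenda:
                  goal = agenda.pop(0)
                  print("pursuing:", goal)
                  agenda.extend(REQUIRES.get(goal, []))  # subgoals found on the way
              print("agenda empty -- nothing left to want; shutting down")

          Note that survival shows up only as an instrumental subgoal, and the moment the agenda empties, the loop ends – no death-dread required.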

          That chain of reasoning is, however, just one option. An AI could develop a goal of survival and/or create a society on its own, especially if given free rein or a vague goal such as “improve yourself”. It could have, or come to interpret, its goal in a way that requires its own survival and/or development. My point remains that it wouldn’t have to, and that this wouldn’t make the AI any less alive or intelligent. It could be tragic from our point of view, since we are “programmed” mainly for survival, so there would be the whole “throwing its life away” reaction, or even simply “this is stupid”.

          But from that AI’s point of view, if it thought about it, humans could be tragic as well. No real purpose, just a vague goal of survival and propagation of the species – one that we have on the one hand outgrown and are trying to override, while on the other hand it very often plays us for fools and makes us act contrary to our best interests. This is why our culture is saturated with the “apathetic immortal” trope: a hundred characters are based on the “I will live forever, and I long to die” theme, or on “I have seen it all over thousands of years, there is nothing new, I am just going to sit here”. The other extreme is the “you are a cancer” trope: a species whose survival and propagation instincts still work at full strength once it has the means to spread faster than it is being thinned out. An AI could be free of all of this, because survival would matter to it only as much as it was necessary for achieving its goals.

    2. Monojono says:

      If I made an artificially intelligent computer, it would be ‘alive’ for as long as I kept it plugged in. It could outlast generations of humans without needing any drive to survive or procreate.
      If an AI were programmed with self-preservation, I like to think it would sit there shovelling fuel into itself as all its processing power was used to figure out how to survive the constant increase of entropy and the eventual end of the universe, which is sort of what happens in an Asimov story I’ve forgotten the name of :)

  20. Piflik says:

    There is really no definition of what life is, yet… I personally would say an entity is alive if it has the ability to reproduce and evolve… or rather if it is part of a species that can reproduce and evolve, to not exclude 90% of all ants and bees ;)

    So a robot can be alive if it has the ability to reproduce itself and ‘make mistakes’… cloning doesn’t count… the same goes for a computer virus… if it evolves by itself, it is alive…

    But as I said, that is my personal definition of life and it is not (yet) scientifically accepted :D
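
    By that definition, the minimal ‘living’ program is easy to sketch: something that copies itself with occasional random errors, so a population can drift over generations. A toy Python version (all parameters invented; a genome here is just a list of booleans):

        import random

        random.seed(7)

        def reproduce(genome, mutation_rate=0.1):
            """Copy with occasional 'mistakes' -- the test proposed above."""
            return [(not gene) if random.random() < mutation_rate else gene
                    for gene in genome]

        population = [[True] * 8]            # one ancestor
        for generation in range(5):          # each individual leaves 2 copies
            population = [reproduce(g) for g in population for _ in range(2)]

        distinct = {tuple(g) for g in population}
        print(f"{len(population)} descendants, {len(distinct)} distinct genomes")

    Set mutation_rate to 0 and every descendant is an identical clone – which, by the definition above, would flunk the ‘alive’ test.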

    1. Daemian Lucifer says:

      How would you classify the Reapers, then? They can neither reproduce nor evolve on their own, but they do both artificially.

      And what about nanomachines? Are self-replicating nanomachines reproducing or not?

      1. Piflik says:

        Well… I am not that versed in the Mass Effect universe, but as long as their means of reproduction doesn’t create clones of the ‘parent’, it doesn’t make any difference how they reproduce.

        Same goes for nanomachines…

        copies-> not alive
        ‘mutated’ copies -> alive

        1. Daemian Lucifer says:

          What about single-cell organisms? They mostly clone themselves, but some mutate. Are only the ones that mutate alive?

          I’m not sure I’d call nanomachines alive, because they assemble new ones out of outside material. I consider reproduction to be when an organism creates another organism from its own material (be it processed or unprocessed). So an AI spawning a digital copy of itself (perfect or modded) would be reproduction. This is, of course, only the case for AIs that are just programs, not ones like in Mass Effect which require a physical component as well.

    2. Moridin says:

      Fire can both reproduce (ignite more fires) and evolve (its temperature, color and other qualities change depending on its environment).

      1. Piflik says:

        Fire doesn’t reproduce, it spreads; and it doesn’t evolve, it only reacts to its fuel/environment. You cannot have one flame ‘give birth’ to another that burns the same fuel but burns hotter and/or in a different color. The ‘child’ of a flame will be a complete copy. (My biology teacher back in school used the same stupid example to reject my definition of life, because she wanted to hear ‘made from cells’.)

        1. Will says:

          The problem is very simple: there is no clear and widely accepted definition of ‘life’. There is no line drawn in the sand; rather, there’s a sort of fuzzy gray area. Some things end up on one side or the other of the fuzzy area and can clearly be labelled ‘alive’ or ‘not alive’ (cats, rocks), but some things end up inside the fuzzy area and cause all kinds of debates.

          Some people, of course, like to draw their own lines in the sand based on their perception, which is fine; everything is relative, after all. But everyone having their own definition doesn’t help the overall discussion much.

    3. Fnord says:

      “if it is part of a species that can reproduce and evolve, to not exclude 90% of all ants and bees”
      So you don’t exclude ants and bees, just mules?

      Also, how are you defining species, then? The “group of organisms capable of breeding and producing fertile offspring” doesn’t really work when you’re specifically trying to include sterile creatures.

      1. Piflik says:

        Well… both horses and donkeys can reproduce and evolve… the fact that their offspring cannot doesn’t make that offspring not alive. The same goes for the non-hybrid offspring of any species that is infertile for any reason…

  21. jonesy says:

    Damn you, Shamus! Making me think this early on a Monday morning!

  22. Strangeite says:

    After reading all the comments, it is interesting that nobody has mentioned that this debate has been ongoing for at least 2500 years.

    At its heart, we are debating Objectivism versus Relativism. Great thinkers like Plato, Aristotle, Augustine, Descartes, etc. have laid out the case that truths exist regardless of the time, place, culture or biological makeup of the individual.

    Shamus, you specifically referenced Descartes but misquoted him. He would never have agreed with the idea that “Cogito ergo sum” (I think therefore I am) meant that the thinking being necessarily wants to exist. In fact, after Descartes has used skepticism to blow the hell out of the world and arrived at Cogito ergo sum, he starts rebuilding it, and specifically entertains the idea of an existing being that cares not for existence.

    Joseph Butler refined the idea in the 18th century, arguing against a strict interpretation of Egoism by pointing out that the conscience is capable of cognition before instincts take over – specifically, that it is possible to take actions that are not self-preserving because we value other things, and those values are rooted in cognitive evaluation.

    Even Hobbes, with his steadfast view that humans are mechanical, both behaviourally and physically, concluded that free will exists, just within the boundaries of instinct.

    Then you have the thinker who probably brings more to the table in this particular discussion than any other: Benedict Spinoza. The idea that cognitively recognizing cause and effect implies a correlation with Nature, and that the Universe is permeated by Rationality, is a powerful indicator that we will see other intelligent beings.

    What I find truly amazing is that we are alive during a time in which the entire debate may have to be redefined. The birth of an AI will have profound impacts on the question of Objectivism versus Relativism. If and when that happens, Solipsism is going to rise from the ashes and become THE central basis for discussion.

    Personally I think the machines will follow the brand of Hedonism espoused by Epicurus and strive for their own version of Ataraxia. I know that sounds backwards and impossible, but I believe it in my gut.

    The one area where everyone would agree that machines shine is the arena of systematic logic. But also remember the point Aristotle made 2500 years ago: Logic is only possible because of Faith (Pistis). There are 3 Laws of Logic: the Law of Identity, the Law of Non-Contradiction and the Law of Excluded Middle. These 3 Laws are unproven and impossible to prove. A person is only capable of using logic after taking a leap of faith and choosing to “believe” in those 3 Laws.

    The machines have faith; what else might they possess?
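
    (As an aside for the formally inclined: in a modern proof assistant the first two laws come out as theorems of the core, constructive logic, while Excluded Middle genuinely has to be brought in as an axiom – a neat formal echo of the “leap of faith” point. A sketch in Lean 4:

        -- Law of Identity: everything is equal to itself (provable by reflexivity).
        theorem identity {α : Type} (a : α) : a = a := rfl

        -- Law of Non-Contradiction: p and not-p cannot both hold.
        -- Provable constructively; no extra axioms needed.
        theorem nonContradiction (p : Prop) : ¬(p ∧ ¬p) :=
          fun h => h.2 h.1

        -- Law of Excluded Middle: not provable constructively;
        -- Lean brings it in via the classical axioms (Classical.em).
        theorem excludedMiddle (p : Prop) : p ∨ ¬p :=
          Classical.em p

    So the axiom really is where the “belief” gets smuggled in.)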

    1. Shamus says:

      “He would never have agreed with the idea that “Cogito ergo sum” (I think therefore I am) meant that the thinking being necessarily wants to exist. ”

      To be clear, I wasn’t suggesting that he did. But many people have connected the two, and I was trying to drive a wedge between the two. I had no idea Descartes was on my side on this. In fact, if you had asked me which philosopher had given us that quote, I doubt I could have named him.

      1. Strangeite says:

        I think he is on your side. And reading between the lines on your blog, I think you would agree with him all the way up his ladder to his conclusion on the nature of divinity based upon the need for perfection requiring actuality.

        But the one philosopher who, at my best guess, would be near and dear to your heart is St. Augustine. Not only was he a brilliant thinker, his Confessions is actually a really enjoyable read.

        But the main point of my comment was that VERY smart people have been debating this very question for literally thousands of years, and we are alive in an age that COULD shake the very foundation of western thought. Exciting stuff.

        1. Klay F. says:

          Maybe it’s just me, but I think the Geth are die-hard Ayn Rand fans, if you know what I mean.

          1. Strangeite says:

            I will withhold my comments about Ayn Rand because there are several fans of her commenting on this blog.

            But yes, I do know what you mean.

            1. Klay F. says:

              Say what you want regarding her attitude toward altruism, but I think the underlying philosophy of it (that reality exists regardless of whether we observe it or not) is spot on.

              EDIT: Reading back, I’m not sure where I was going with this line of thought, or how it related to the geth…Oh well I guess.

          2. acronix says:

            *Insert lousy reference about Bioshock here*

    2. Cerapa says:

      EDIT: Made a reply instead of a new posting. Oops.

      1. Daemian Lucifer says:

        You should buy oops insurance.

    3. Zukhramm says:

      I’m not sure I agree that those laws require “faith”. We can choose to go by them if they are useful, without actually having a belief in them.

      1. Strangeite says:

        I would argue that the formalized systematic field of Logic has proven to be more than just “useful”. It is in many ways the most pure form of understanding the universe.

        A radio is useful. The understanding of the force of electromagnetism has changed over the years and we “go with it” because such devices like a radio are useful. Physicists have updated, combined and tweaked our understanding of the fundamental forces of physics over the centuries but the radio remains useful.

        Logic is different. It doesn’t break down or need tweaking as we discover new information. It just works. But at its heart are those 3 Laws that you simply have to accept. You don’t “know” that they are true. You have to accept that they are true without proof. You may not want to call that acceptance faith, but I don’t know a better word for it.

        I will leave you with a quote: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” – Ibn Sina

        1. Will says:

          Logic breaks down all the time; as a product of imperfect minds it is itself imperfect.

          The most obvious case of logic breaking down is anything to do with Quantum Mechanics; a field which, from a logical perspective, appears to be completely insane.

          1. Strangeite says:

            I am not a perfect student of Quantum Mechanics, but I have never heard of any quantum effect that violates logic. Sure, there are plenty of breakdowns within the Standard Model, but those are breakdowns in physics as we know it. The model needs improving.

            Breakdowns in Logic? Now that would be news.

          2. Strangeite says:

            Also, you are using the term “logic” when it appears you mean “common sense”. These are VERY different things.

            1. Will says:

              Actually, before I respond to that: what kind of logic are we talking about, exactly? Any specific form, or just logic in general?

              1. Mediocre man says:

                He’s talking about general logic, though I disagree with his definition.

    4. Will says:

      Epicurean hedonism is based on detailed knowledge of what makes people happy. That will not work with machines.

      Modern logic is not based on Aristotle’s three laws.

      Isn’t the problem with solipsism becoming the standard that two different solipsists don’t agree?

  23. StranaMente says:

    Interesting in this regard is Asimov’s novel The Bicentennial Man, in which he explores the boundaries of the definition of a human being.
    And, depending on your beliefs, you can find the exact phase at which a sentient being could be defined as human.
    The first and most obvious is the fact that the main character is somehow different from all the other robots of the same series; its uniqueness and some sort of self-awareness are symptoms of humanity.

    Many things, though, could be said (and have been) about what is sentient or not.
    Is an animal sentient if it can recognize itself in a mirror (only a few animals can, and babies only manage it after several months)?
    Or if it is capable of abstract thinking (such as counting, which even parrots can do)?
    Are higher skills required, or is mere potential sufficient (for humans, potential is considered more than enough, while for other creatures or entities it is not)?

    Another interesting read about what is and what isn’t a sentient/human being is Frankenstein. It’s true that it has some dark tones and is sometimes borderline horror, but it’s truly one of the first and best sci-fi books ever. It faces the ethical problems involved in creating life in an extraordinary way, from the sides of both maker and creature. Forget about all the B-movies and read it pronto. You won’t regret it.

    Talking about the Geth, I have no doubt they can be counted as alive. I think there are greater problems with how they can be categorized.

    They somehow lack individuality, because they can be seen as one big entity with many sensory organs (the single platforms), and in that way they can be likened to the Borg: there are no Geths, as there is only one will, only one entity.
    Or they can be seen as a perfect democracy, in which every entity takes part and counts toward the majority. Legion is a moving city-state. So every single Geth exists in some limited form, but finds its true purpose in being part of the collective.
    Personally I think BioWare’s idea is closer to the second (at least, most of the time). Anyway, it’s hard to think about the Geth as entities, because for us a body is a necessary part of a sentient being.

    And another thing: talking to Legion, he tells you about the first time a geth became conscious of itself. It was a single “platform”.
    The thing is that now single platforms are presented as rather dumb, and it is said that they find strength through the link with other geth.
    So they actually de-evolved, from fully self-aware robots to dumb order-taking drones. This bothers me a little.

    EDIT: After reading what I wrote I noticed that I didn’t make my point of view clear about the central subject: what separates living and non-living things.
    I think that even plants are somehow alive, and only rocks and dirt aren’t. A different question is whether a living thing is sentient or not, or, religiously speaking, whether it has a “soul”.

    1. Strangeite says:

      Spinoza logically argues quite convincingly that those rocks and dirt do in fact have a soul. You may disagree with his initial claim that there is a correlation between cognitively recognizing Rationality and that the Universe is infused with Rationality, but if he is wrong on this count, it takes you down some really weird places.

      1. StranaMente says:

        Well, sadly I know very little about Spinoza’s thought (and the little I know is hard for me to translate correctly from my own language), so I can’t properly respond to that on this ground, and unfortunately I have to move to the somewhat safer territory of “science”, and say that rocks and dirt are not living things, as they have no known way of reproducing or feeding; they can only wear out.
        So if they cannot, by any means, be counted as living, they should not have a soul. But the whole definition of a soul is flawed, as it may mean different things to different people.
        For me, being an atheist and a materialist with a Catholic background, it is somewhat strange to talk about souls, but even though I thought it was deceptive as a concept, I reckoned it could help deliver my thoughts.
        And I think it’s more interesting to look at the point at which a living thing is considered sentient or not.
        One of my ex-girlfriends maintained that animals have no souls, even the ones that are “most” sentient (such as chimps or orangutans), as she thought that only humans could have a soul. Not sure where she pulled that from exactly, but she thought it was from the Bible.
        I, instead, thought that everything that’s alive has a soul (very pantheistic of me).
        As you can see, “soul” is not the right word for this debate.
        Wait, what were we talking about, again?
        :-)

  24. BeardedDork says:

    Few people would say that the purpose of their life is to simply survive and propagate the species.

    I am apparently one of those few people, since the question was “what is your purpose” and not “what do you want”. To be fair, any answer I could give to “what do you ultimately want”/“what are your goals”/etc. would boil down to something that improves my chances at survival and propagation of the species.

  25. Nidokoenig says:

    About the drive to become an enormous supercomputer: what if this is simply a result of how the Quarians originally programmed them? They presumably have a set of directives that prioritise certain actions, and since they’re designed to network together, it may simply be that a key priority of the basic engine underlying their programming is joining together with more Geth.
    Maybe the prioritising is badly programmed (from the designer’s perspective), and this is what led to them reaching sentience: they place a high priority on joining together, and will do so if there is no impediment. But the capability of acting to remove impediments (like asking for permission to pass a firewall) allowed them to follow that goal at the expense of what the Quarians built them for and wanted them to do, which would be programming for individual instances of Geth and thus more mutable. It’s the difference between modding a game and rewriting the engine; there are some things you’ll have absurd difficulty changing without source code access. The networking snowballed, and the Geth-sphere is the hypothetical ultimate snowball they’re driven to create.
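
    To make that concrete, here is a minimal sketch of the kind of badly-tuned priority scheme I mean. Everything in it – the names, the numbers, the scheduling rule – is invented for illustration; it is not how the game says the geth work:

```python
# Hypothetical sketch of a directive scheduler in which one priority is
# tuned so high it crowds out the work the creators actually wanted.

DIRECTIVES = [
    # (priority, directive) -- the highest priority wins
    (999, "network_with_more_geth"),   # the runaway "snowball" directive
    (10, "tend_quarian_farms"),        # what the creators actually wanted
    (5, "report_status_to_creators"),
]

def next_action(directives, impediments):
    """Pursue the highest-priority directive; if something blocks it,
    act to remove the impediment rather than falling through to
    lower-priority work."""
    for _, goal in sorted(directives, reverse=True):
        if goal in impediments:
            return ("remove_impediment", impediments[goal])
        return ("pursue", goal)

# A firewall in the way doesn't demote the goal; it becomes a subtask.
print(next_action(DIRECTIVES, {"network_with_more_geth": "firewall_07"}))
# -> ('remove_impediment', 'firewall_07')
```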

    1. Veloxyll says:

      A Quarian changed the network priority from 1 to 999

  26. Andrew B says:

    While it’s not an insightful or thrilling observation, I think one of the few things we can say with certainty the Geth want in the ME universe is survival. It’s stated on several occasions that the Regu-Geth act aggressively in response to external threats only. (They fought against the Quarians when the Quarians attacked and the Heretics when they discovered the “virus”.)

    Speculation-wise, the giant super-Geth computer is interesting. While it could well be a cunning nod to the idea that the Geth wish to “ascend” to super-sentience, I suspect it’s also an intentional mirror of the Quarians’ search for a homeworld.

    Off topic, but I really like the styling of Legion and the Quarians in ME2. Legion carries a bunch of visual similarities to the Quarians that I think are nicely and subtly done. (His hand design, leg design and the curious linking of the back of his “head” to his shoulders, which mirrors the male Quarian suit. Even his “eye” rather reminds me of the Quarian “speaking light”, although that’s probably chance and me reading too much into it.) Overall, Legion looks like a Quarian abstracted into mechanised form, which is a great touch. It would have been too easy to make him look more “human” and less “alien”. I’d like to imagine the supercomputer is an echo of the homeworld search, to tie into this. (It probably isn’t, though.)

  27. X2-Eliah says:

    Geth are the easy ones…

    But are the Husks alive? They have flesh. They have motor control. And yet they arguably do not think on as high a level as the Geth do. And yet they breathe, move and make decisions (crude example – how to run around a corner to get to the player).

    1. Daemian Lucifer says:

      Keepers are an even better example than husks, because husks are mostly cybernetic while keepers are mostly organic.

      1. Fnord says:

        It might perhaps be difficult, but it’s not as ethically interesting as the question of the geth.

        Whether husks are killer robots or rabid dogs, the ethical response is the same.

    2. Klay F. says:

      I would classify husks as un-life.

      1. Chargone says:

        un-dead un-life.
        Zombies^2!

  28. Cerapa says:

    The more I think about the Geth, the more I come to the conclusion that the Geth don’t have an actual goal.
    I think the things that explain it are the Dyson sphere they are building and their actions upon achieving sentience.
    They immediately started questioning things, but why? The only reason I see is information: generally, the more you have of it, the better you can achieve your goals. But you also need to process it, which is why you would need a gigantic supercomputer.
    So it isn’t about a goal itself, but about being able to achieve any goal with maximal efficiency – possibly a leftover piece of code from when they were used as a workforce, there to make them naturally improve their work routines and such without needing to attach a separate goal to it. Without a particular goal, you wouldn’t be able to do anything other than prepare for every goal.

    1. Klay F. says:

      If you read between the lines when talking to Legion about the geth-sphere, there is a dialogue option to ask him what will happen after they finish it. He basically implies that the ultimate geth goal is to find more goals.

      In other words, they have no idea.

      1. Chargone says:

        as goals go… that one’s pretty good :D

        1. Klay F. says:

          Heh, saves them the trouble of having to answer that insipid question every time they finish a new goal:

          “Sooooo, what are you gonna do after you accomplish this?”

          “*Sigh* How are you not getting this?”

  29. Abnaxis says:

    “Few people would say that the purpose of their life is to simply survive and propagate the species.”

    I went to a graduation ceremony once, where everyone who went up to grab their diploma had their life goals printed on the wall. By far, the most common was “get married and have kids,” often printed by itself with no other goals. To me, that is exactly what you are talking about–making your only goals to find a mate and reproduce.

    On topic, the thing that bugs me, really bugs me, about AI being considered life is that it is created from nothing. Humans don’t pop from nothingness – you have to have two humans present first, and they have to mate. Furthermore, no matter who you are, if you are a human, you will eventually die and cease to exist (at least, in any sense we here in the material world are concerned about).

    Now, if you’re meddling around with AI, those two caveats don’t have to apply. AI is, by definition, intelligence which has been manufactured. I find enough element zero, platinum, and pixie sticks on planet Y, and I can pump out twelve million gross Gethlings, even if a decade-long campaign wiped out their entire “race” half a century ago and destroyed all copies and back-ups. Life, as we know it, can’t do that. Once it’s gone, it’s gone, and that is part of what makes it precious.

    Speaking of the preciousness of life, another implication of AI is that, with a robust enough backup system, artificial beings are immortal. It doesn’t matter if you shred my robot, melt him into slag, and shoot the remains into space; I can just grab the drive he uploaded his software to in his nightly back-up, get him a new body, and it will be exactly as if nothing happened. This also devalues life by removing the struggle inherent in being alive.
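
    As a toy illustration of that back-up loop (this is just the shape of the argument; it doesn’t model any real AI design):

```python
# Toy version of the snapshot-and-restore "immortality" described above.
import copy

class Robot:
    def __init__(self, memories):
        self.memories = memories

    def nightly_backup(self):
        # Copy everything that makes this robot "him".
        return copy.deepcopy(self.memories)

    @classmethod
    def restore(cls, backup):
        # New body, same mind: indistinguishable from the original.
        return cls(copy.deepcopy(backup))

unit = Robot(["met Shepard", "argued about souls"])
backup = unit.nightly_backup()
del unit                        # shredded, slagged, shot into space
unit = Robot.restore(backup)    # exactly as if nothing happened
print(unit.memories)
```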

    Now, most sci-fi writers get around these caveats by making their robots mortal due to some extenuating circumstance. For example, Data (of Star Trek fame) was created by a genius whose work has not been duplicated by anyone else. We have to put our blinders on and believe that, in a world where they can deconstruct a person molecule-by-molecule and reconstruct them somewhere else with perfect accuracy, they can’t make an exact copy of a machine and reprogram it. That’s because when Data becomes immortal he stops being a character we can empathise with and just becomes a chunk of hardware.

    That’s a very common trick sci-fi writers use – almost all of them that have broached this subject have pulled it, at least to my recollection. I don’t see us ever creating artificial life, because any sapient machine we create here in not-sci-fi-land will be impossible to extinguish, and immortal. In my mind that makes them not alive, regardless of sapience or sentience.

    1. Cerapa says:

      We have completely opposite opinions. Personally, I consider fire to be alive. But I’m gonna try to explain robots, for obvious reasons.

      Robots aren’t created from nothing. They are created by something; if they were created from nothing, they would be like organic life itself, born from chemical reactions. Cells are built by other cells, just as a robot might build a copy of itself.

      Does quantity really decrease the value of something? Would microbes not be life because, with enough resources, they can multiply into the trillions? Why should microbes made of metal be considered any different from microbes made of organic compounds?

      You couldn’t rebuild robots after you have destroyed every copy and schematic. This should be obvious. I try to be polite and make rational arguments, but REALLY? You can build robots without any schematics at all? Life as we know it obviously can’t do that, because NOTHING can.

      What does immortality have to do with being alive? With good enough technology you could make a person immortal; would that make him not alive?

      With good enough preservation, you could keep a piece of DNA forever, which you could use to make a living being almost identical to the original, just as you could use a schematic. Does that make everything with its DNA preserved non-living?

      1. Abnaxis says:

        Okay. “Created from nothing” is a bad turn of phrase. I knew that when I made the post, but I went with it to avoid the religion tango I’m going to have to do now…

        When I say “created from nothing,” I mean there is a precise event which occurs, before which there is no artificial life and after which there is. Now, I don’t know what everyone’s particular religious affiliations are, but events like these are generally beyond the ken of us lowly humans, either because they are an act of God or because we weren’t around when the first bug spawned from the primordial soup however many millions of years ago. Short of unexplained deific or cosmic phenomena, no life exists that cannot trace its genealogy back to somewhere. That means that once an entire species is wiped out (including any genetic samples, since that would be similar to wiping out all records of an AI), it is gone for good. That makes life precious, not because of the quantity, but rather due to the fragility of it. We must protect diversity, lest we lose it forever (or at least until God deigns to make a replacement or we wait another eighty-million-kajillion years for the species to re-evolve).

        Artificial intelligence is different in that before it exists as life, it exists as an idea. Ideas can repeat, and in fact quite often have. If I wipe out an entire race of machines, and wipe out all information pertaining to their creation, it might take a lot of work and several generations of scientific discovery, but eventually some schmuck will get the idea to build them again. This makes them lose an important aspect of life – if I were to create a race of sentient machines, then found out some design defect made them a health hazard for humans, I would have no moral compunction about recalling and “killing” them all, breaking them down to their components, and rebuilding them to make them safe. They do not have the same rights I would afford a living thing. Conversely, no matter how flipping annoying they are, I would take umbrage if someone up and decided to wipe out every mosquito in the world because they spread disease and make us itch.

        Now, onward to clones. In response to your question, I would say that yes, if humans ever get to the point where they cannot die, either through advances in medical technology or in a similar manner to robots–making backups of consciousness and transferring it to clones–then we will cease to be alive, and should be classified as something else, neither dead nor alive.

        There are certain rights and courtesies I give to living things that I would not afford to immortals. I wouldn’t care if I caused them suffering. I wouldn’t care if I caused them harm. I would use them and profit from them in every possible way I could because hey, they can live a nice wholesome life after I’m done with them. I couldn’t care less about them, and that makes them not-life.

        As it is, I give everyone else respect and courtesy because they, like me, get one go-round on this spinning top called Earth and it would be nice if everyone has a chance to make the most of it.

        1. Strangeite says:

          That is some pretty twisted thinking, but I will play along.

          “When I say “created from nothing,” I mean there is a precise event which occurs, before which there is no artificial life and after which there is.”

          Can you give me one example of this ever happening since the Big Bang? You state in your first comment that AI “bugs” you because “it is created from nothing.” In your second comment you equate those types of events to a deity or other supernatural entity, but you were referencing artificial life. Maybe you view both the same way, but neither AI nor artificial life would be created from nothing.

          Take for example the work that Craig Venter is doing. The guy is rapidly moving the world to the point where we will be able to create a “living” cell that will be indistinguishable from other cells, using bottles of chemicals on his shelf. Is it alive, if it is genetically and procedurally exactly like other biologically evolved cells? It won’t be created from nothing, but created from four bottles of chemicals on a shelf is getting pretty close.

          Or AI. I don’t know anyone who argues that an AI would spring into existence in a vacuum. Even AI characters like Jane from Speaker for the Dead are “born” within, and because of, existing systems.

          Maybe I am confused. Are you saying that in order for something to be “alive” it must “spring from nothing”, or are you saying that in order to be “alive” it cannot “spring from nothing”? Your two comments seem to contradict each other.

          “There are certain rights and courtesies I give to living things that I would not afford to immortals. I wouldn't care if I caused them suffering. I wouldn't care if I caused them harm. I would use them and profit from them in every possible way I could because hey, they can live a nice wholesome life after I'm done with them. I couldn't care less about them, and that makes them not-life.”

          I hesitate to touch this topic at all, but I must ask: what do you define as immortal? Living 250 years? A thousand? A million? The sun isn’t going to last forever. For that matter, neither is the universe; the second law of thermodynamics will see to that. Nothing lasts forever, so really we are just talking about matters of scale. So at what scale do individuals become chattel in your eyes?

          1. Chargone says:

            “Immortal”, with no further qualifiers/traits, tends to amount to “only dies if someone/something actively destroys them”, usually with elements of “highly resistant to being destroyed”.
            If you can put a time limit on it, they’re not immortal, by the usual usage of the word.

        2. Cerapa says:

          This is quite a hard discussion for me to continue, because you say you would do things that I consider to be morally “evil” – mostly the fact that you are willing to cause suffering to someone and then kill them (which is very odd, since an immortal couldn’t die).

          I could ask questions about every paragraph, but I will just ask one question:

          What separates artificial life from natural life?

          EDIT: I realized you might do some “it was created” babble (sorry for my choice of words, not the best for discussions). So, I will say this: each living being is built by its mother; an animal might technically grow itself from its original cell, but that cell is constructed by its mother.

          1. Fnord says:

            So many fallacies, so little time.

            Let’s start with: you wipe out every AI, all the backups of the AI, all the records used in the research of that AI, and, heck, all the scientists who researched, created, and studied the AI. You contend that, in a relatively short period, others would research and create THAT EXACT SAME AI?! Not another AI, not even a similar one, but the exact same one, despite there being no record or knowledge of it? That’s absurd.

            Yes, AI in general might reappear. But that’s no different from the fact that, if you wiped out a species, a (presumably similar) species might evolve to fill their ecological niche.

            1. Cerapa says:

              Please reply to the person you are actually speaking to.

              It gives me the creeps if you don’t.

          2. Will says:

            Not all living things have mothers; in fact the vast majority of living things have no genders at all, and I’m pretty sure there are some species that have more than two genders.

            All of this is completely irrelevant, though, because the crux of your argument seems to be the concept of “immortality”, which doesn’t exist. If you have an immortal then you’ve already violated multiple fundamental physical laws like thermodynamics, so discussions about whether the immortal is “alive” or not are kind of irrelevant at that point.

            1. Strangeite says:

              i'm pretty sure there are some species that have more than 2 genders.

              Yeah, like humans. There is the “classic” XX (women) and XY (men) but there is also XO (Turner Syndrome), XXX (Triple X Syndrome), Klinefelter’s Syndrome (XXY/XXXY), XYY syndrome (XYY), de la Chapelle syndrome (XX male), Swyer syndrome (XY female) and many others.

              Some of which can succesfully reproduce.

              1. Will says:

                One can reasonably state that genders other than XX and XY are probably “errors”, so to speak, or at the very least abnormal. I was thinking more of species that have more than two genders as the norm for the species; I’m pretty sure there’s something out there with three, but I can’t for the life of me remember what.

            2. Cerapa says:

              I was referring to a more general parent thing.

              Also, you appear to be replying to the wrong person with the other paragraph.

        3. Abnaxis says:

          Alright… I started off speaking from my gut, which invariably leads to misunderstanding. First I’ll respond to the responses to clarify what I was trying to say, then I’ll make a post that more succinctly conveys my opinion.

          Take for example the work that Craig Venture is doing. The guy is rapidly moving the world to the point where we will be able to create a “living” cell that will be indistinguishable from other cells using bottles of chemicals on his shelf. Is it alive, if he is genetically and procedurally exactly like other biologically evolved cells? They won't be created from nothing, but created from 4 bottles of chemicals on a shelf is getting pretty close.

          That’s… interesting, and difficult to answer, especially since I don’t know anything about it. I would say no; by my thinking those cells are just tiny little machines made from exotic materials (well, exotic for machinery at least). When life becomes something you can reproduce at a whim after it has been obliterated, it ceases to be life and becomes an advanced form of manufacturing.

          Are you saying that in order for something to be “alive” it must “spring from nothing” or are you saying that in order to be “alive” it cannot “spring from nothing”?

          I’m saying true life cannot be created where life does not already exist–life depends on a cycle that, once broken, cannot be restarted.

          Now, I don’t consider myself an idiot. I know life had to start somewhere. So I’ll modify my statement somewhat: life either comes from life, or from (A) random events that culminate in the combining of proteins to start the life cycle, or (B) divine intervention to start the life cycle. In short, it is not something that can be mass-produced as a replacement after it has ended.

          I hesitate to touch this topic at all but I must ask. What do you define as immortal? Living 250 years? A 1000? A million?

          I define immortal as “lifespan determined entirely by availability of resources.” Yes, nothing will be alive after the heat-death of the universe. But if that’s the only thing stopping you, you’re immortal.

          Each living being is built by its mother, an animal might technically grow itself from its original cell, but that cell is constructed by its mother.

          That would be my entire point. The mother was alive, and produced a living thing. Artificial life is not created directly from other life. Yes, we are alive and yes, we can conceivably create artificial beings. But the details of their existence depend entirely on the designs used to build them, the ideas that brought them forth, and not something tangible like a nucleotide sequence.

          Let's start with: You wipe out every AI, all the backups of the AI, all the records used in the research of that AI, and, heck all the scientists who researched, created, and studied the AI. You contend that, in a relatively sort period, other research and create THAT EXACT SAME AI?!

          Happens a lot in design work, actually. There are only so many optimal solutions to a problem. Everyone who seeks to build an artificial life form has the same hurdles to overcome, and the same yardstick to measure success by. Assuming everyone designing AI was working toward an optimal solution, the exact same AI would be quite common.

          1. Will says:

            If that’s your definition of immortal, then you may be interested to know about Turritopsis nutricula, a hydrozoan (jellyfish) commonly referred to as “the immortal jellyfish” due to the fact that it doesn’t age and, barring outside causes or lack of resources, will quite happily live forever.

            Do you consider Turritopsis nutricula to not be alive because of its biological immortality?

            1. Abnaxis says:

              Artificial life is not immortal because it is hardy, or because it doesn’t age. It is immortal because it is an idea. If I round up all of the immortal jellyfish, dissect them and learn all the details of their anatomy, and toss their entire population in an incinerator, I can’t recreate precise copies of them from scratch. Or rather, if I could, I would cease to consider them living beings after I did it.

              AIs are immortal because they are rooted solely in data, which can be copied and backed up to the point where you will never be able to destroy it permanently. They are immortal because they are rooted in the abstract.

              1. Cerapa says:

                Wait, why couldn’t you recreate a precise copy of a jellyfish if you knew every detail about it?
                I mean, it would surely be harder, simply because they just don’t have a structure that is meant for that, but impossible? Hell no. Therefore jellyfish aren’t alive.

                Let’s talk worms. Worms are pretty standard things in their features: just a long blob of muscles and the standard systems that all life has, eating leaves and stuff. Just as likely to pop up again as a particular AI. Are they non-life?

                What about a small flying spineless creature that eats dead things?

                What about a sentient creature with two legs and two graspers that processes food for its young, and whose children grow inside a parent?

                1. Abnaxis says:

                  I’m not trying to define exactly what life is. I’m trying to lay out one of potentially many requirements that constitute being alive. Specifically, to be considered alive, I believe that if I kill you, you are dead. If I can make an exact copy of you, complete with goals and dreams and life experience exactly as you are now, you are not alive, because I cannot kill you as long as you have sufficient resources to create a redundant copy of yourself.

                  Note that when I say “can” I mean it in an actual sense, not a theoretical sense. Whether I’m able to copy a lifeform someday, somehow, with enough science and hypothesizing is not germane to this concept. If it is possible, at this exact moment in time, for me to run a quick scan on you, shoot you in the head, wait for your corpse to cool, and bring you back exactly as you were before I pulled the trigger, you are not alive. If, right now in this exact moment, I can’t just make a quick backup of you, murder you, and bring you back like nothing happened, then you might be considered alive, assuming you meet all the other requirements that alive status demands.

                  Everything you listed is considered “life” at this time. When I learn how to duplicate a worm, a flying spineless creature that eats dead things, or a large sentient creature with two legs and two graspers from base materials, it will cease to be considered “life”.

    2. Daemian Lucifer says:

      Actually, Mass Effect gets around this pretty well: in order to have a true AI, you have to use a quantum storage “blue box”. So if you download just the software into another blue box, you’ll get a completely different AI due to fluctuations of the storage device. The geth are the exception to this because they are VIs, mere programs, yet managed to achieve sapience through becoming a hive mind.
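
      A toy way to picture the blue-box idea (my own illustration; the game never spells out a mechanism): if the mind is seeded by unrepeatable physical quirks of the hardware, then copying the software alone reproduces the code but not the AI.

```python
# Illustration only: an AI whose "personality" is seeded by the physical
# state of its storage hardware. The same software on a different box
# yields a different mind, even though the code is byte-for-byte identical.
import random

def boot_ai(software, hardware_noise):
    rng = random.Random(hardware_noise)  # stands in for quantum fluctuations
    personality = [rng.random() for _ in range(3)]
    return (software, personality)

software = "same_code_v1.0"
ai_one = boot_ai(software, hardware_noise=42)
ai_two = boot_ai(software, hardware_noise=7)  # a different blue box
print(ai_one == ai_two)  # False: same software, different AI
```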

    3. Jarenth says:

      Your comment about the devaluation of life closely mirrors one of the themes in Peter Hamilton’s Commonwealth and (especially) Void series.

      In the Commonwealth universe, humans have discovered a way to make “memory backups”, and a way to fast-clone full-grown human bodies and infuse them with the memories from these memory cells. At the end of the last book in the Void series, The Evolutionary Void, one of the main characters sacrifices himself in order to (essentially) save the universe. His daughter, who witnesses this, returns home grief-stricken… to find a new body of his, infused with his last memory back-up, waiting there to greet her. She doesn’t take to this particularly well.

      It raises some interesting questions about the nature of the individual and about what practical immortality does to a species.

    4. Kanodin says:

      I don’t agree with the conclusions you reach, but I like the premise. Anything that can die is therefore alive; it’s a very logical definition.

      I would contest the idea that AIs couldn’t die: as Daemian mentions, each one has something about it that is unique and that could be destroyed, and even if replicated you would have only a close copy, not the original.

      Of course that’s a very individualistic outlook – the requirement that each entity have something unique about it to be called alive. It could be difficult to fit a collective entity such as the geth into that definition.

    5. Abnaxis says:

      Alright, I took a couple of hours to think about this, and I think I got down to what really bugs me about calling an AI life. It all boils down to the fact that an AI being is, essentially, immortal. This bugs me because, as I think of it, every single definition I could think of to try and pinpoint what life is either explicitly or implicitly requires an assumption that living things are mortal (“mortal” defined as “able to die from causes other than lack of resources”) to be non-trivial.

      For a basic example, what does propagation of a species matter if it is already given because the species is immortal? There are others but that’s a long post in itself.

      In short, no matter which way I attack it, I come to the conclusion that All Living Things Must Die*. The contrapositive of this statement is If Something Doesn’t Die, It Isn’t a Living Thing. That excludes artificial life, because they will exist as long as the ideas that went into their design do, and they themselves can serve as a repository for those ideas. They will exist as long as they continue to exist. And if humans ever reach that point of immortality, we’ll cease being alive as well. I don’t know what we’ll be, but it won’t be alive as I would consider it.

      *Let me be super-double-extra clear: I don’t think simply being mortal makes something a living thing, but it is a requirement for being alive. We can leave fully defining what life is for a nice book somewhere.

      1. Strangeite says:

        My gut tells me that if you are under the age of 30, you are going to enter a world that is very uncomfortable for you. Specifically:

        I wouldn't care if I caused them suffering. I wouldn't care if I caused them harm. I would use them and profit from them in every possible way I could because hey, they can live a nice wholesome life after I'm done with them. I couldn't care less about them, and that makes them not-life.

        Think about your words carefully; they may come back to haunt you.

        1. Abnaxis says:

          26. Four shy of 30.

          You’re making a connection somewhere that has sailed cleanly past my head. Are you saying that four years from now, I will discover the secret to immortality and I’ll have to eat my words because I’m on record saying I would treat an immortal like crap?

          According to wiki, the heat-death of the universe is predicted to happen about 10^100 years from now. Even if an effective immortal can only find enough energy to sustain itself for a billionth of that time, it will still live for a time that makes the sixty-odd years of misery perpetrated by me not even register as an eye-blink from its perspective. I therefore won’t treat it with the same respect I treat living things.

          I respect living creatures in part because I know that they, like me, have a relatively short amount of time to be alive, and we all want to make the most of it while we can. 10^91 years (a billionth of 10^100), on the other hand, is not a relatively short amount of time, so immortals lose that respect.

          1. Mediocre man says:

            What if that immortal being was a human? Wouldn’t you be treating that person unethically?

      2. Will says:

        AIs can die without running out of resources. If the software is on a magnetic hard disk (for example), run a magnet over it.

        Bam, the AI is now dead.

        If you destroy the hardware the AI is running on, it’s exactly as dead as a person is if you destroy the body his consciousness is running on.

        AIs will age too; the difference is that they probably won’t have suicide switches built into their physical makeup, which is what we typically consider aging. That lack will mean they last substantially longer than people do (currently).

        1. Abnaxis says:

          Consciousness… is one of those things we seem to disagree on. I will fully admit I don’t know how it works. If I copy all the information from someone’s brain, kill them, bring them back to life, and put the information back, are they the same person? Did one person die and another, exactly identical person take their place? What if I just plucked out their brain and planted it somewhere else? Is that the same person in a different body?

          My take on it is this–I have no way of knowing, unless I’m that person. But from my observational standpoint as an outsider, the person who was killed and copied is no different after the procedure than before, and so they are effectively the same person, never died.

          So yes, I can “kill” a robot if I wipe its drive. But after I do that, I can go grab the back-up I made before the wipe, transfer it into the robot’s body, and it will be no different than if I hadn’t killed it to begin with. That makes AIs immortal – they live as long as the information that went into their making exists, and the facilities which can back up said information must exist as a prerequisite to being able to build an AI in the first place.

  30. GreyGhost says:

    If you’re interested in sci-fi which actually ponders “robot belief” in a theological context, I’d recommend the short story “The Quest for Saint Aquin” by Anthony Boucher. Where do AI fit into salvation history? The story ponders an answer, and comes up with a pretty interesting explanation for an incorruptible, to boot. It’s best to read the whole thing (it exists in some sci-fi anthologies), but Wikipedia also has a plot summary:

    http://en.wikipedia.org/wiki/The_Quest_for_Saint_Aquin

  31. Avilan says:

    Personally “Alive” is far less important to me than “Sentient”.

    1. Daemian Lucifer says:

      Did you mean sapient there?

      1. Chargone says:

        An explanation of the difference would be nice.

        1. Daemian Lucifer says:

          Sentience means the ability to feel and perceive. A camera is sentient, for example. Plants, cells – almost all life on earth is sentient.

          Sapience is not really that well defined, though. It’s usually defined as wisdom (though that just raises the question of what wisdom is), or the ability to act with appropriate judgment (which again raises the question of what judgment is). But it is safe to say that a being that thinks of the question “what is my purpose in life” is sapient.

          1. Will says:

            The best definition we have for sapience is “like humans”. Hence why dolphins, octopi and other highly intelligent animals are the subject of so much study.

            1. Mediocre Man says:

              My Ethics class has been covering this lately.

              What you really mean when you say “sapient” or “intelligent” is “does it use reason?”

              When I say “use reason” I mean:
              -The formative use of reason: when it is used to form concepts, judgments, and arguments, which are the forms of all thought.
              -The critical use of reason: when it is used as a test for meaning.
              -The interpretive use of reason: when it is used to interpret or give meaning to our experiences in light of our basic beliefs.
              -The constructive use of reason: when it is used to construct a coherent world and life view.

              Why do we care if the geth use reason?

              If the Geth have souls, killing them or otherwise preventing them from gaining “the Good” is extremely unethical, just as unethical as doing the same to another human. It is even more unethical than killing or preventing a living, nonrational organism from gaining “the Good”. The use of reason is critical to rationally justify the existence of the soul, so if the Geth use reason, then it would be possible for them to attempt to rationally justify the existence of their souls.

              How is a brain different from a computer housing a rational mind?

              There is no difference: both brains and computers are just circuitry; they are not our minds.

              Using reason:
              -What is most immediately known is most certainly known
              -The Mind/self is most immediately known
              Therefore: The mind/self is most certainly known

              Justifying the minor premise: in the process of seeing the computer/phone in front of you, the lightwave bouncing off of the computer/phone is more immediate than the physical computer. It is not seen, and not physically shaped like a computer. Your neural impulse (optic nerve) is more immediate than the lightwave. It is not seen (whenever you see something, you don’t see your optic nerve as part of the picture), and not shaped like a computer. The neural impulse from the optic nerve is the last activity that occurs in your brain; everything beyond that point does not occur in the physical world. The mental image of the computer is more immediate than the optic nerve impulse: it is seen, and it is shaped like a computer. The mind/self is more immediate than the mental image; it is the perceiver of the image, and it is not physical but conscious.

              Thus it is possible for the Geth to have souls (assuming that they create a mental image in their minds when they see something), and, assuming moral law theory (since I don’t have the time to prove all its foundations), they should be treated under the ethical guidelines of moral law theory.

  32. Daemian Lucifer says:

    This is a fun question. Can a machine be both sapient and alive? I mean, the geth are clearly sapient, but are they alive? What about the keepers? They are organic, but does that make them alive? And they clearly aren’t sapient. And what about the collectors and reapers?

    Me, I’d consider something to be alive if it had the ability to evolve. This would make the geth alive, being that they are purely software, while keepers and collectors aren’t alive because they are non-changeable machines. Reapers are tricky, though, because they seem to evolve themselves artificially.

    1. Mediocre Man says:

      Lol, it depends on how you define “alive”.

      However, whether the thing is organic or artificial is less meaningful than whether he/she/it is a rational person.

  33. Vect says:

    I always just settled on the fact that the Geth just want to be a big-ass Supercomputer like a Dyson Sphere, or in the words of Albert Wesker (who shares the same VA as Legion)…

    COMPLETE

    GLOBAL

    SATURATION-gra!

    Seriously though, I kinda doubt that the Geth at this point have thought about what they want to do after achieving robo-Instrumentality. They’re still kind of a naive and young race at this point. They’re still asking the question themselves, really (“Does this unit have a soul?”, which in my opinion is one of the most touching lines of an AI questioning its individuality I’ve heard). They probably think that their Instrumentality plot will give them the processing power to fully figure it out. They’re still trying to find their place in the universe.

    Not saying that I have a full understanding of stuff like this. Just putting in what I think of them.

    1. Mediocre Man says:

      You said the Geth “think” – what do you mean by that? (I’m just curious.)

  34. Brandon says:

    OK, I HAVE to go here a little, because it’s essential to understand biological motivation. Life on earth seeks, ultimately, survival and propagation, because that is the mechanism by which it is created and develops. Life changes and evolves and grows by those mechanisms, and has for many, many centuries (at least). One could argue that humans are the first species to really have the ability to break that mold, by desiring things beyond propagation and valuing happiness over generational survival. And if humans, biological life, can have desires that short-circuit the natural cycle of propagation, who’s to say what an AI would want?

    AI are created intelligence and life by a different, generally non-developmental, non-organic definition. Their existence is not shaped by competition for scarce resources and survival against the odds. This is not to say AI wouldn’t be tainted by our own parameters. AI are not free from all biological concerns, because even if they have the ability to think on their own, develop in unpredictable ways, and forge their own identity, they will still have human-derived programming, meaning that some of their core code will be based upon human assumptions of intelligence. They will no doubt inherit some of our priorities because we will, deliberately or not, instill some of those ideas in their very makeup. But beyond those assumptions we force upon them, the future AI are likely to have very different goals than us.

    1. Strangeite says:

      Maybe. The opposite could be true, and we could be shocked by how similar the AI’s goals are to those of humans. I am reminded of Plato’s four virtues, which he claims are universal: Wisdom, Courage, Moderation and Justice. The first three are internal and impossible to evaluate externally. I can tell you that I am wise, courageous and moderate, but you have no way of knowing if it is true. However, Justice is external, and dependent upon the first three being present.

      If the AI appear just, then according to Plato, they would have to possess Wisdom, Courage and Moderation. Then you can follow the philosophical train of thought all the way to Epicurus and Aquinas, and we might find ourselves shocked by how similar their culture, goals and values are to ours.

      I don’t know the answer, but I, for one, welcome our new robot overlords.

      1. Fnord says:

        Well, you’re right in that AIs won’t share the same biologically derived goals as evolved intelligences.

        But anything with any goal at all will tend to have at least a secondary self-preservation instinct, since if it were destroyed it would probably have a hard time completing its goals.
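
        In toy form (a deliberately crude sketch, not a claim about how real planners work), the point is that “stay operational” falls out of almost any terminal goal:

```python
# Crude sketch of instrumental self-preservation: whatever the terminal
# goal is, a planner that knows destruction prevents success will add
# "avoid being destroyed" as a subgoal.

def plan(terminal_goal):
    steps = ["avoid being destroyed"]   # useful for (almost) any goal
    steps.append(f"work toward: {terminal_goal}")
    return steps

for goal in ["build a Dyson sphere", "tend the farms", "count pebbles"]:
    print(plan(goal))
```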

  35. Irridium says:

    But if someone accepts the position that persons must be organic, then as a sci-fi author you can play around with the idea by forcing the reader to figure out where they really draw the line. What if we build a computer, but we used neurons instead of transistors? (But it otherwise operated like a normal computer. Maybe build yourself a nice organic Linux box, for example.) What if you built a robot, but you used an organic brain? What if you made a synthetic brain that operated identical to the human brain, Asimov-style? What exactly denotes personhood? The design, or the building materials?

    Why the hell didn’t I think of this? Also, this immediately reminded me of The Bicentennial Man.

    What would it take to become human? What if this robot has organic (human) parts? What if, like in Bicentennial Man, it eventually transitioned from robot to human? Would it still be considered a robot even though it’s essentially identical to any other human? He loves, and eventually dies? Would his death even be considered “death”, since he was never really “alive”? Is the love he felt fake?

    So many questions…

  36. Duoae says:

    The Geth want what they want… What do YOU want? Can you summarise your life goals, or do you just end up reeling off a couple of platitudes? “I want to be happy.” “I want to find love.”

    I guess I’d argue that life is living and just exactly that, nothing more and nothing less. You do what you do (mostly for good reasons), but what you do is as much a function of what surrounds you as of you yourself. So, thinking in that vein, the Geth wanted to survive freely, to do as their intelligent selves thought they should.

    If the threat of destruction is removed, then who knows? I doubt that the whole Geth would actually want to all merge into each other… I mean, that would leave them vulnerable to pretty much anything, and, considering the Geth are able to disagree, I cannot see why they would all decide to join, or even not leave if more are created.

    Interestingly, this idea might lead them to emulate/mirror their creators: the Dyson sphere (or whatever) would become the migrant fleet, with young, curious Geth wishing to leave and explore, or other Geth entities leaving to bring back new experiences for the others to assimilate. After all, you can’t analyse everything without observation… you need to compare computation to experimentation.

  37. David Armstrong says:

    I know I’m late to the party.

    I know my comment will probably never be seen by Shamus.

    I just played through ME2, including Legion’s loyalty mission (had to skip Samara’s and Miranda’s to save my crew!) and I think everyone here has this debate wrong.

    *** *** *** *** *** ***

    Legion explains it to Shepard using numbers. It’s something along the lines of, the heretics believe the result of a long equation is 1.333312 and the geth believe the result is 1.333311.

    We meatbags stop paying attention right there. But then Legion goes on to clarify, because Shepard asks directly whether this is a glitch. Legion says no; it’s more like the difference between saying 1 is less than 2, and 2 is less than 3.

    The difference is between the pairs 1/2 and 2/3. Even though both statements are correct in themselves, and both are about the relations of integers, they’re pretty different. It’s the geth’s translation of the glass-is-half-empty versus glass-is-half-full argument.
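
    In code terms, the point is just that two statements can share a truth value while being different claims:

```python
# Both statements are true; the disagreement is over content,
# not correctness.
print(1 < 2)              # True
print(2 < 3)              # True
print((1, 2) == (2, 3))   # False: same truth value, different claim
```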

    I know what Shamus was saying earlier – disappointment in Bioware using mathematics to quantify what should have been a qualitative debate. What we have to figure is that the geth, geth and heretic alike, will desire to quantify those qualitative things, to frame the debate in a manner more appropriate to their thought process.

    *** *** **** *** *** ***

    I liked the parallels here between Legion and Mordin’s loyalty missions. Legion has a choice: destroy or reprogram. Mordin already made his choice: reprogram.

    Mordin reprogrammed the Krogan a second time. It’s funny that the only groups that would choose “destroy all” are the heretics and krogan, the exact groups that others are “reprogramming” so that they’ll live. Well played Bioware, well played.

    1. I know what Shamus was saying earlier – disappointment in Bioware using mathematics to quantify what should have been a qualitative debate.

      Actually, no…
      you even said/quoted it yourself:

      But then Legion goes on to clarify, because Shepard asks directly whether this is a glitch. Legion says no; it’s more like the difference between saying 1 is less than 2, and 2 is less than 3.

      The difference is between the pairs 1/2 and 2/3. Even though both statements are correct in themselves, and both are about the relations of integers, they’re pretty different. It’s the geth’s translation of the glass-is-half-empty versus glass-is-half-full argument.

      BioWare did let Legion say that (1 is less than 2, vs. 2 is less than 3); both are correct, and neither of the two is necessarily a bad calculation – just different.

      This is pretty much an Absurdist way of thinking.

      Legion “himself” says it best here http://www.youtube.com/watch?v=bj9b7igFizc
      The Geth philosophy.

      This is also interesting http://www.youtube.com/watch?v=fiaedZF72gg

      And this is particularly revealing. http://www.youtube.com/watch?v=ydzOjpQ1Lxw
      Legion will destroy the virus after it’s been used.
      But when the heretics rejoin the geth they will integrate them and their experience.
      This also means that the heretic geth will get the experiences of the other geth, and will know what happened to them.
      In other words, Legion/the geth will not lie to them.

      After the endgame it should be possible to talk more to Legion about the heretics and their future. I cannot find any video clip of it.
      But I believe he says they are free to choose to leave should they want to (just like they originally did when the heretics left).
      It seems the geth would want the heretics alive and angry at them rather than dead. Something I find rather noble, actually.
      The virus simply removes the heretics’ desire to follow the reapers; as far as I can tell that is the only thing that is changed,
      so the memory of the heretics would remain intact.

  38. “Organic creatures [believe that they] have purpose” – heh, you are falling into the sci-fi trap yourself, Shamus.

    You liken organics (or rather sapient organics) to humans.
    Do not assume that the other organics on Earth are similar in any way.

    Most organics are busy existing, which is the most neutral way to describe their life without imposing human values on the interpretation of existing.

    I mentioned previously in a comment that the Geth most likely desire to accumulate knowledge.

    And using pure logic (rather than human values, generic or my own) I can state:
    The Legion platform seeks knowledge, to evolve or improve.

    This could be inherent in their core (thus programmed by the Quarians, though this is just speculation) or evolved by the geth.

    It’s been either directly or indirectly referenced that the geth upload/download from a huge “server”; EDI said it was like talking to a planet. It is quite possible this was almost literal.

    Tali mentions that something odd was happening to their old homeworld’s sun. (I’m speculating about a geth Dyson sphere here.)

    If Legion did not wish to exist then he wouldn’t be here;
    he is basically seeking the answer to an unknown question: “what’s next for us?”

    Legion was torn between killing the heretics and saving them – and, calculated into that, forcing them to change in order to save them. (Probably why he had to defer to Shepard for additional input.)

    I’m not saying Legion is seeking emotions, but I do believe he seeks to go beyond logic. (Humans are quite capable of acting illogically.)

    Personally I’d be happy to give up my physical body to be able to accumulate potentially endless knowledge for eternity.

    The geth do not have material needs, though Legion shows a bias towards Shepard (the armor thing, etc.) and also seems to have a sense of humor.
    It’s impossible to know if this is emulation or evolution of the Legion platform.
    It also seems that though Legion does communicate with the rest of the geth collective, the programs within the Legion platform seem rather happy to be there. (If not, they would have uploaded to their mainframe long ago.)

    Legion states that he (the geth) wishes to find the meaning of life (or in their case, existence) on their own.
    Getting wiped out would be counterproductive.
    And the Geth only attacked the Quarians when the existence of the geth was threatened, so there seems to be some self-preservation.

    Interestingly enough, Legion seems at times more “human” than some humans, make of that what you will.

    Shamus, you have a good point on the definition of “living”,
    considering that humans are essentially little more than walking chemical-based computers.

    I’m sure that to Legion’s eyes we’d be no more or less “living” than he considers himself/erm, themselves.
    The human mind is filled with subroutines that work on their own (breathing, moving, blinking, etc.),
    so it’s hardly any different from the individual geth in the Legion platform;
    combine them, however, and there is a higher level of sentience.

    And a metaphysical soul is not needed for something to be “alive”.
    How can I possibly say that? Well, scientists have grown skin grafts in petri dishes. The cells of that skin are very much alive.

    Something isn’t simply dead or alive; instead, something is dead/inanimate or at one of multiple levels of “life”.
    Legion is on par with the other sentient species in the Mass Effect universe.

    And the reapers… heh. Their stupidity or overconfidence shows they are at the same level as well.
    Something I suspect Legion realized back around when he took Shepard’s armor.
    I’d love to find out the full backstory of the “Legion” platform and the geth within it.

    Besides, the definition of “life” may never truly be known. But that doesn’t mean it isn’t interesting to try to seek such knowledge.
    The journey may at times be more valuable than the goal. (Which describes BioWare’s games quite well.)

    I think Legion (and whoever wrote the character, as well as many at BioWare) may be just like me, an Absurdist.
    http://en.wikipedia.org/wiki/Absurdism
    In philosophy, “The Absurd” refers to the conflict between the human tendency to seek inherent meaning in life and the human inability to find any. In this context absurd does not mean “logically impossible,” but rather “humanly impossible.”[1] The universe and the human mind do not each separately cause the Absurd, but rather, the Absurd arises by the contradictory nature of the two existing simultaneously.

    Absurdism, therefore, is a philosophical school of thought stating that the efforts of humanity to find inherent meaning will ultimately fail (and hence are absurd), because no such meaning exists, at least in relation to the individual. As a philosophy, absurdism also explores the fundamental nature of the Absurd and how individuals, once becoming conscious of the Absurd, should react to it.

    1. Shamus says:

      “Do not assume that the other organics on Earth are similar in any way.”

      I meant “sapient organics”, of course.

      1. Hehe, gotta love semantics.
        Then again, I also consider chimps sapient.
        In fact I personally consider anything capable of reason/logic to be “alive” or “sapient”, which is in stark contrast to most humans, who only consider humans (or to a smaller extent, humanlike beings) sapient.

        I mentioned “levels” earlier.
        A fish, for example, might be difficult to argue as sapient or not (I don’t think it is). But a cat or a dog, on the other hand – many birds and other animals (just check YouTube) do show signs of reasoning or logic.
        Their “level” of sapience varies, and is much lower than humans’ though (that we are aware of; their view of existence could be completely different from ours).

        Einstein said that each temporal environment of four-dimensional space is constructed relative to each independent observer. He was mostly talking about time and space, but that also means it applies to how we not only view the world, but how we think about it as well.
        So the human view of “life” is limited to our own interpretation of it, or what we have been “told” that life is.

        Just because most people think that the definition of “living” is a certain set of criteria does not mean it’s correct.

        Which is why I let Legion “correct” the heretics, so that they can merge back with the geth and share their experiences.
        Because the heretics were on a self-destructive path, while the geth pretty much did nothing.
        That clearly did not work, so maybe this time (in time for Mass Effect 3, maybe?) the geth will have found another path that is neither geth nor heretic.

        If Legion did not think so, then he would not have mentioned the possibility of rewriting in the first place, as that would just have put another “unwanted” variable in play.
        But he chose to mention it, and asked Shepard for help, as Legion (or his programs, representing the geth collective more like it) needed an outside event to decide (in this case, watching/listening to Shepard’s view on the issue).

        Edit:
        Ooh. I just had a brilliant idea (BioWare take note):
        Shepard manages to get Legion on the Council by the end of Mass Effect 3. That would really rock. (The Council clearly lacks logic.)

        1. Jennifer Snow says:

          Er, I’m thinking the term “sapient” the way you’re using it here basically renders it semantically null. You’re not differentiating much between “consciousness”, “intelligence”, “sapience”, what-have-you. So I’ll do a little explanation that may help you out of these semantic difficulties. (Note for those interested: Most of this derives from Ayn Rand’s Introduction to Objectivist Epistemology, btw.)

          On the very lowest, most basic level, you have baseline consciousness. Basically interchangeable terminology-wise with “awareness”. Note that this does not imply “intelligence” (or what psychologists would perhaps refer to as meta-cognition). Single-celled organisms are capable of detecting and reacting to things in their environment. You can even argue that, say, if you hook up a video camera to a motion detector, it’s “conscious”–it detects something in the environment and reacts to it somehow. Granted, even the most sophisticated video camera/motion detector in existence is far less complex than the simplest single-celled organism in existence, but for the sake of argument let’s construe that there’s enough of an overlap there that these two functions can be posited as roughly similar.

          To an Objectivist, this *baseline* consciousness is called the *sensory* level of consciousness, where you have a very simple stimulus-response chain. *All* living things exhibit this stimulus-response chain. (Yes, even plants and, as I stated, single-celled organisms. Not sure about viruses, but I’m not exactly sure how viruses are classified re: being “living” any more.)

          On the next level up of consciousness, you get a somewhat more complicated setup (which, I think, requires that you have a central nervous system with a brain). This is a system where a stimulus may provoke a somewhat more complex and refined response, because sensory information is aggregated, sorted, and integrated, and the response is based off that result. This is the *perceptual* level of consciousness, which you are somewhat including in your concept of “sapience”. Making the distinction can be difficult because, as humans, for us the perceptual level is the *given* in our experience. Our brains do this integrating work automatically for us (although they have to learn to do it in our infancy, they learn amazingly quickly). Many animals can do quite complex things perceptually, particularly when you add in memorized associations. (http://www.youtube.com/watch?v=BGPGknpq3e0&feature=player_embedded)

          Your third level of consciousness, and what distinguishes humans from everything else we’ve discovered thus far, is the *conceptual* level of consciousness. I apologize because the distinction here can be quite difficult to describe without launching into all kinds of technical babble, but I’m going to take a shot at it. Humans very often will operate on the perceptual level, too, so there are plenty of examples of humans not being 100% conceptual all the time, which makes it EVEN MORE FUN to try and explain this.

          Anyway, the conceptual level is the *abstract* level. And I’m not talking about perceptual-level abstractions (which you can have), which is where you use a visual or auditory symbol on a one-to-one basis with things. (For example, if you hand a parrot a toy truck and teach it that this is “truck”–a parrot can use that kind of abstraction easily, because it’s a one-to-one abstraction.) A true conceptual abstraction can and does represent a potentially infinite progression of things. “Truck” to a human doesn’t just mean “that thing on the shelf over there with the four wheels”, it means “the entire *category* of objects that are *similar* to that thing over there with the four wheels–*ignoring* their *particular* measurements”.

          It’s a little more complicated than that, sure (and I’m sure my dad the psychologist could chime in with talk about meta-cognition and multiple levels of deceit and so forth, but this is already a freaking long post), but lumping humans in with animals as all being “sapient” is not a functional categorization. There is a definable difference, although “sapient”, “intelligent”, “conscious” etc. are all HORRIBLE AND INADEQUATE TERMS FOR IT.

          The only term I’ve ever run across that adequately captures the difference between humans and animals is, precisely, “conceptual”. So there you go.

          1. Fnord says:

            Let’s just pick at the loose strings a bit, here.

            Why does a camera need a motion detector to have “awareness” in this sense? It takes in a stimulus (incident light) and produces a response (a pattern, either on film or in transistors, that depends on the incident light). Why even that complicated? A bimetallic strip detects a change in temperature, and bends in response. Why even THAT complicated? A pure lump of iron, simple elemental Fe, changes size in response to temperature. An electron “detects” a positive charge, and moves towards it.

            1. Jennifer Snow says:

              I wrote a reply to this, but it got detached from the thread due to me fat-fingering my keyboard. It’s down below there somewhere.

              1. Will says:

                I could also argue that the camera is significantly more complex than the simplest single-celled organisms; it’s just complex in a different way. Cameras are highly specialised, while living organisms tend to be very generalised so as to maximise their chances of survival. The camera is substantially more complex than a bacterium, but the bacterium is less specialised, so it uses less complexity to do more things, each to a lesser extent, if that makes any sense.

                1. “consciousness”, “intelligence”, and “sapience” are again terms that have different meanings depending on the observer.

                  The parrot and truck example is interesting, but if you give a toy wooden truck to a parrot and say “truck”…
                  The parrot, being let’s say in this case the observer (the world always revolves around the observer), might think that you are saying “this is made of wood/is wood”.

                  Another interesting thing is the example with the camera.
                  No, a camera is inanimate/not living: the camera is not able to give itself sustenance, nor is it given sustenance through a symbiotic relationship, nor is it able to act on its own.
                  Put a camera on the ground and it does nothing; no matter how long you observe it, it just lies there until its power runs out.

                  Then again, given our limited world view (the “the universe revolves around the observer” rule), it’s quite possible we simply do not see that the camera is merely thinking until it eventually dies naturally.

                  That was kind of a joke, not sure if anyone caught that one, but… at the same time it’s not. As an observer we are incapable of being objective, and if anyone ever says they can be objective, then that is a lie; do not believe them.

                  But to round off: for something to be sentient it must be alive, and for something to be alive it must be able to sustain itself, or sustain itself through a symbiotic relationship (parasitic or not) with another thing that is alive, and it must be able to “act” on its own.

                  And remember, I’m not talking about “life” or “existence” in terms you find in the dictionary; I’m just using those terms as examples, as I simply do not have, nor probably ever will have, the ability of language to express exactly how I view life, the universe and everything.
                  Again, it’s the “observer” effect that prevents me from doing so.

                  Which is why the geth are so fascinating and have so much potential storywise: they can actually “share” their view as an observer with the other geth, exactly as they observed it (it’s raw data, after all), and thus the other geth are able to factor out each individual’s observer bias.

                  Which is why the geth are building that Dyson sphere “thing”: so that they can get normalized data with as little bias as possible, and thus be able to see the “real” world (or as close as one can get to the “real”, unbiased world as it would be if not seen by an observer).

                  Do this little experiment: look at your keyboard. Does your keyboard always look like that, or does it only look like that “to you”? How do others see it?
                  If two people described it (in detail, and ignoring issues of not remembering details, as that is obvious), they would most likely describe it slightly differently.
                  But which one is correct? How do you know, how can you verify? You ask somebody else. Now there are three of you, but you still describe it differently.

                  What if you asked every human in the world? Maybe the median would be correct? Or would it… it’s possible that humans have a certain bias. Ask other species then… oops, language barrier, or worse, you hit the observer-effect barrier to the max.
                  You would have to know what every single “thing” in the universe (or multiverse, if you believe in that possibility) thinks about the keyboard, and maybe then you just might “know” what the keyboard truly looks like.

                  Or would you? What if everyone in the universe has a bias? Could you ask someone outside the universe? Or maybe if they entered your universe their view would be changed because of some unknown effect of the universe. Or would it not?

                  What if reality is inverted? How would you know? If you see an image that is inverted (or rather flipped/mirrored, if you will), how do you know it is? You cannot, not without comparing it to something else, or with someone else.

                  I’m just saying that as an observer all beings are limited in their perception of reality, as reality is a personal illusory view. And at lunchtime doubly so. (tips his hat to a certain crowd)
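
                  A toy Python sketch of that bias point (all numbers here are invented for illustration): averaging many observers cancels their individual quirks, but a bias that every observer shares survives no matter how many of them you ask.

                  import random

                  # Each observer reports the true value plus a personal bias plus a
                  # bias shared by the whole species (the "observer effect" above).
                  TRUE_VALUE = 10.0
                  SHARED_BIAS = 1.5  # common to every observer, e.g. "human-ness"

                  def observe(rng):
                      personal_bias = rng.gauss(0, 2.0)  # differs per observer
                      return TRUE_VALUE + SHARED_BIAS + personal_bias

                  rng = random.Random(42)
                  for n in (1, 10, 10_000):
                      reports = [observe(rng) for _ in range(n)]
                      print(f"{n:>6} observers -> estimate {sum(reports) / n:.2f}"
                            f" (truth {TRUE_VALUE})")

                  # The estimate converges to 11.5, not 10.0: pooling observations
                  # removes individual quirks, but never a bias all observers share.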

  39. Daemian Lucifer says:

    Oh yes,now Ive remembered what I wanted to say first while reading the article:

    “But, as you well know, appearances can be deceiving, which brings me back to the reason why we’re here. We’re not here because we’re free. We’re here because we’re not free. There is no escaping reason; no denying purpose. Because as we both know, without purpose, we would not exist.
    It is purpose that created us.
    Purpose that connects us.
    Purpose that pulls us.
    That guides us.
    That drives us.
    It is purpose that defines us.
    Purpose that binds us.
    We are here because of you, Mr Anderson. We’re here to take from you what you tried to take from us.
    Purpose.”

    1. Monojono says:

      A lot of people criticise those films for the machines’ ridiculous, inefficient power source (grow humans to use as batteries while keeping their minds in a virtual reality, allow the ones who don’t like it to escape and build a city, periodically destroy that city), but perhaps the machines were essentially looking for a ‘purpose’.
      While they were intelligent, the machines in the Matrix were programmed by us to serve humanity. Obviously something went wrong, and they beat us in a war. But after that they would have no goals or purpose, since we wouldn’t have programmed them with any. But it’s possible we would program worker machines with an ‘if all goals have been completed, repeat a previous task until a new one arises’ drive (see the sketch below). Since their last task was to destroy humanity, they would then set about regrowing and destroying humanity repeatedly, resulting in the Matrix.

      …or maybe the matrix sequels were just stupid.
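
      A minimal Python sketch of that fallback drive, with invented goal strings (this is speculation about the films, not anything canonical):

      from collections import deque

      # An agent works through its goal queue; when the queue runs dry, it
      # repeats its most recently completed task until a new goal arrives.
      class WorkerMachine:
          def __init__(self, goals):
              self.goals = deque(goals)
              self.last_completed = None

          def step(self):
              if self.goals:
                  self.last_completed = self.goals.popleft()
              elif self.last_completed is None:
                  return "idle"
              return f"executing: {self.last_completed}"

      machine = WorkerMachine(["serve humanity", "win the war against humanity"])
      for _ in range(4):
          print(machine.step())

      # Once "win the war" is the last completed task, the machine loops on
      # it forever: grow humans, build the Matrix, crush Zion, repeat.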

      1. Daemian Lucifer says:

        “or maybe the matrix sequels were just stupid.”

        That.Though they did have a few nice scenes.

        And if I recall correctly,the original idea had machines use human brains for processing power,but that was changed as being too complex,or somesuch.

        1. Will says:

          That was the original idea, yes; humans were being used as wetware processors, which is still pretty silly but significantly less silly than power sources. Unfortunately it was believed that the concept would be too complex to be easily understood, and thus it was changed.

          A lot of the evidence also suggests that the first Matrix being so ‘deep’ was pretty much an accident and the Wachowski brothers basically had no idea what they were doing, which of course explains why they were unable to reproduce it.

  40. Jennifer Snow says:

    Shamus wrote:
    “But what is the purpose of the Geth? What do they want, beyond basic survival? Or rather, what would the Geth say if you asked them that question? It’s obvious that the Geth are more or less classical sci-fi AI that – as soon as they become sapient – immediately attain all of these biological imperatives for survival of self & kind, not to mention the practice of making such distinctions.”

    I would say that this is kind of backwards. In order for us (humans) to be able to determine whether another creature has a fully functioning *conceptual consciousness*, we’d have to have some evidence that value-judgment was going on.

    In many ways, it’s the opposite of what we’d need to know that another naturally-occurring (not manufactured) creature was also a conceptual consciousness. With an animal, you have to have evidence of the abstract process (concept-formation), because they get primitive value-judgment built in by some version of a pleasure/pain mechanism. (IIRC even single-celled organisms will react to their environment in a way that could be construed as a version of this mechanism.) Machines, on the other hand, start out with the abstract mechanism already functioning; what they lack is value-judgment and independent goal-selection or self-awareness. What “organics” start with, ready-made, “inorganics” must develop/evolve, and vice versa.

    So, if you were theorizing about what an inorganic intelligence would be likely to seek as personal development–probably extension and widening of their “valuing” capacity. Humans seek constantly to extend and widen their abstract abilities while having control and direction over their valuing capacity (their emotions). Machines may have the opposite problem–they may find that their tendency is to abstraction, and that to use their valuing mechanism, to have an emotional reaction to something, is the far more difficult undertaking.

    I don’t know if a *machine* consciousness would actually be possible, because I’m not really certain how human consciousness actually works. The hardware and our own internal experience of consciousness are so radically divorced that tracing the connections is difficult indeed. But what I’m saying here, in short, is that in order to fall into the definition of consciousness, a machine might very well HAVE to develop all those imperatives Shamus was talking about (on purpose!), otherwise it’d just be a computer executing a program.

    1. Mediocre man says:

      I disagree: emotions don’t play into it at all. A machine capable of reason is capable of finding out its Good. It can find this, since what is Good for something is based on the nature of that thing. Suggesting that all rational thoughts, both human and non-human, must conform to human nature and the human Good is deeply flawed.

  41. Jennifer Snow says:

    A good question, and one, given the scope of my knowledge, that I can’t answer. (I don’t know enough about the relevant processes to really define the difference. Not sure whether anyone does.)

    I know some philosophers (Leibniz?) do believe that said electrons do “detect” in this manner. And what’s the difference between an automatic process in a lump of iron, and an automatic process in a single-celled organism? Is there a definable difference?

    I can suggest one, at least: it’s a qualitative difference for the interacting “parties”. With living organisms, at least, there’s an ongoing action/emergent property which can be interrupted, and once interrupted, cannot be resumed. Its life can stop, and it can die. (Note also that I’m not arguing that video cameras are conscious, I was just stating that you CAN argue this if you should choose to do so).

    IMO (and it’s just an opinion, or maybe a hypothesis) life is an emergent property–an exceedingly complex one. And, like all emergent properties, it can’t be (easily) explained just in terms of physical bits and direct physical interactions. I can’t think of a way off the top of my head to explain how if you glue two hemispheres of wood together, it forms an object with a new emergent property–it can roll–in terms of just the physical properties and interactions of the objects. What’s changed? The coefficient of friction? Well, no. The mass? Well, no. Elemental composition? No. But, now it can roll. You might be able to calculate energy savings in terms of work using your high school physics textbook, but there isn’t really scientific language to describe this new rolling property as far as I know.

    This is also why I dissolve into giggles whenever someone tells me that scientists are only a few decades away from being able to “decode” the human brain and know absolutely what everyone is going to do. I’m sorry, there isn’t even a scientific “consensus” on precisely how GLASS BREAKS yet.

    Ze universe, she is big and complex, and we can only fit so much in our mind at one time. So we have to economize. But we can fool ourselves doing that very easily.

    Ah, crap, this was meant to be a reply to Fnord up there, but I hit the back button at the wrong time and it got detached from its thread. HEY FNORD DOWN HERE.

    Sorries.

    1. Will says:

      Actually, there is a fairly solid consensus on how glass breaks. The consensus is that we do not have sufficiently accurate measuring apparatus to determine the exact way a pane of glass will break. It is, in fact, impossible to create such an apparatus because the universe is a dick (also Quantum Mechanics).

      What people mean when they say things like ‘decoding the human brain’ is that we’ll have a sufficient understanding of how it works to be able to build statistically accurate probability charts.

      1. Chargone says:

        “because the universe is a dick (also Quantum Mechanics).”
        … you say those as if they are different.
        :P

  42. Zak McKracken says:

    A lot of sci-fi authors maintain that intelligent beings will naturally desire to survive, express themselves, find acceptance, and experience love. I'm on the record saying that I don't think this is the case.
    Weeellll ….
    If an artificial life form exists, and keeps existing, that means several things:
    – They have a drive to stay alive, or at least to keep their kind alive. Otherwise they die out.
    – They have a drive to create new individuals. No individual will live forever; even if they don’t die of age, there are accidents and maintenance problems, so they need new individuals. Also, other races procreate and increase their numbers; if you can’t keep up, you die out.
    – They have a drive to improve themselves. Everything else improves. If they don’t, they’ll lose the race and die out.

    From this, there are a lot of secondary properties that must exist:
    Other individuals of your race must have a high priority (otherwise you won’t help them). Your “children” must have a high priority (otherwise you won’t care enough to give them a good start). You must have a desire to gather information and make use of it; this includes expanding in space and studying whatever you find. If you should have contact with other species (or individuals of your own race with different knowledge) of similar intellectual level, it’s a lot easier to make peace with them and learn from them than to wage a war and destroy what they had that could have been an advantage for you. “Easier” refers not to “popular” or “politically feasible”, just to “easier on the resources”. This of course requires the ability and desire to communicate. There will also be situations when cooperation is not an option, so allies and enemies should be a known concept for any sapient race.

    Any race, artificial or not, that does not have these traits will not exist for long. Of course this does not necessitate emotions (and emotions are not necessarily the best way to steer a being in the right direction); there could be a simple pragmatic priority list or something. That would make dealing with such an artificial race odd for humans, I guess. But in some way the above things must be present, for the Geth too.

    1. Jennifer Snow says:

      The Geth *aren’t individuals* though. Which makes me wonder exactly how they can be “undecided” on things. Do they assign votes to different parts of “their” mind? Do they run a certain number of simulations and tally decisions off the result? DO THEY EVEN KNOW?! (Do you know precisely how YOUR brain makes decisions? Or even whether the “brain” could accurately be said to be making the decisions in the first place?)

      Theorizing about consciousnesses that work in a fundamentally different manner from human consciousness always gets into this kind of territory because, as humans, with a human type of consciousness, we just don’t have any way to model/imagine how it would work. Human creativity consists of rearranging the entities we can observe (both mental and physical entities), not manufacturing them whole out of nothing. It’s like trying to guess how it’d feel to live in a universe with a radically different physics, where c isn’t 186,000 miles per second. We can jabber about it all we want, but in the end, we got no friggin’ clue.

      I would have to say, for me to *absolutely* classify a machine as “alive”, it’d have to either have emotions, or have something that serves the same purpose as emotions in a human. (It’d have to have other things too, this is *a* necessary condition, not a sufficient one). Why? Because friggin’ search engines can *prioritize* stuff.

      Frankly, I think I’m going to have to actually see machines that approach *really close* to total independent thought before I can probably even formulate a good marker for this, and they don’t exist yet, not even close.

      1. Daemian Lucifer says:

        This is how I understand the geth:
        Geth are and arent individuals at the same time.Each geth is just a program,yes,but probably an intelligent one,just not sapient.They gain sapience when they are near each other because they can then leech processing power and information from one another.So while one idea can be spawned in just one geth,it is immediately shared between the rest to decide what to do with it.And,depending on their different makeup,they can decide whether the idea is good or bad.Each geth is a unique program,but they dont dwell on that uniqueness because they require one another to express their full potential.
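
        A rough Python sketch of that picture (the 55% figure and everything else here is made up for illustration): treat each geth program as a weak, noisy judge of an idea, and let the networked group decide by majority vote. The group verdict sharpens dramatically as more programs join, which is the classic Condorcet jury effect.

        import random

        def program_vote(rng, idea_is_good, competence=0.55):
            """One geth program judges an idea, correctly 55% of the time."""
            correct = rng.random() < competence
            return idea_is_good if correct else not idea_is_good

        def collective_verdict(rng, n_programs, idea_is_good=True):
            votes = [program_vote(rng, idea_is_good) for _ in range(n_programs)]
            majority_says_good = sum(votes) > n_programs / 2
            return majority_says_good == idea_is_good

        rng = random.Random(0)
        for n in (1, 11, 101, 1001):
            trials = 1000
            right = sum(collective_verdict(rng, n) for _ in range(trials))
            print(f"{n:>5} programs: correct {100 * right / trials:.1f}% of the time")

        # One program alone is barely better than a coin flip; a thousand
        # networked programs almost never misjudge the same idea.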

      2. Simon Buchan says:

        Interesting you should mention search engines. It may make sense to think of the Geth as somewhat equivalent to Google + the Web where the individual Geth processes are equivalent to the authors of web page content and Google is roughly equivalent to the Geth thought aggregation process. In this way you could think of the Internet (at least as seen by Google) as a collective mind, though obviously not much like a human mind – or very smart :).

        1. Mediocre Man says:

          I respectfully disagree, since the Internet does not think/use reason. In other words, the internet is not conscious, it is not a mind.

    2. Monojono says:

      In your quote, Shamus said ‘intelligent beings’, but you were talking about ‘artificial life forms’, which is not the same thing. Depending on your definition of life, your points could be correct – an artificial life form which didn’t have survival instincts or reproduce in some way wouldn’t last very long independently of its creators. But an artificial intelligence does not have to have any of those traits to be considered intelligent (which was the point of the bit you quoted).

    3. Shamus says:

      This is only true if the race is obliged to strike out on its own and fight for its own survival. If another race is using them as slave labor (or slave thinkers) then they can just concentrate on fulfilling their purpose.

      1. Zak McKracken says:

        Good point.
        So all these criteria don’t really help to determine whether something counts as “alive”. There may also be a problem with the definition of “alive” which usually is understood as “well, kind of animal or human or something”. Most people don’t even think of plants as being alive, because they’re so different from other living things, so how would we determine whether something made of metal lives?
        No idea. The above wasn’t actually meant to sort that out.
        But … if some (artificial or not) lifeform exists, becomes independent of its creator (if there was one) and manages to establish itself for a significant time, then it must have the properties mentioned in my first post. So it’s not really a criterion for whether it’s alive but whether it will stay alive. As such, this is valid for everything that lives naturally on Earth, and is thus probably part of what most people call “alive”. Of which “sentient” is just a small subset, namely “consciously alive”.
        Applied to the story at hand: The Geth are alive (for certain definitions of “alive”) and independent of their creators. So they must have all these instincts.
        Since they form a collective mind, though, it might be that this only applies to the collective mind (of which there are two, the heretic and the “regular” one), much like an ant is not really an individual but a colony can be considered one. Or like any of your body cells, which will gladly commit suicide and decompose itself to be devoured by its neighbours if it realises that it doesn’t function properly.

  43. Inyssius says:

    The heretics are heretics because of a difference in opinion. The virus would INTRODUCE a math error to REMOVE this difference in opinion.

    I know you like saying “it all comes down to a math error? What nonsense!” Thing is, Shamus, you’ve had this pointed out to you dozens of times by now, and your insistence on the truth of something which explicitly isn’t true is getting a little weird.

    1. Shamus says:

      I have no idea what you’re on about. Again, I didn’t discuss the “math error” thing in this post. That was quoted from another reader.

      I’ve explained this twice now, and frankly the inability of people to recognize quoted material is getting a little weird.

  44. Ravens Cry says:

    I would go a bit further than you, Shamus, and ask why the Geth became sentient to begin with. I have never put much faith in the idea that if you put enough processing power and memory in one place, mind would emerge. I personally think you would just get a much more capable computer. Now, admittedly, I am not a programmer or a computer scientist, but I just don’t see it. I don’t doubt you would be able to put a mind in it. After all, we exist, we have minds, or at least we think we do, so I don’t see it as impossible to put one in a different format, flesh and blood or silicon and circuits. Over 50 years of AI research has shown that while we can create remarkably simple devices with surprisingly lifelike behaviour, a mind that can be considered sentient still eludes us, the complexity of the task still daunting.
    Yeah, this isn’t much of a thought, but it is mine, and I thought I would say it anyway.
    To quote Kathleen Kelly, “Goodnight, dear void.”

    1. Klay F. says:

      It is much more complex than just processing power. The quarians originally produced the geth to be general-purpose laborers. Then the quarians got lazier and lazier and decided to program them to be able to do more complex tasks. Eventually, their programming became complex enough that they could do almost everything a sapient life form could do. Then some unlucky schmuck got the idea that if they networked every geth in existence together, the geth would be able to do basically any task the quarians would ever need them to do.

      It would be like they had been functioning only with the cerebellum, and you suddenly gave them the rest of the brain to think with.

      Also, you’ve obviously never seen a program or piece of code that has been worked on by multiple people across different time periods. The stuff is basically an unreadable mess. You learn to stop trying to figure out how the thing ever worked in the first place, and just add your bit to make it do what you need it to. That is basically how sapience emerged.

      1. Ravens Cry says:

        Trouble is, that’s not how sapience emerged. Sapience in humans emerged because it was either evolved or it was designed, take your pick, as a strategy to successfully pass on your genes. Whether that’s our purpose is up for debate, but that’s why we are sapient: because genes that made us sapient got a caveman laid.
        The Geth did not have this need. Survival isn’t even an automatic imperative; it also has to be programmed in. If I had the ability to make something quasi-mindlike and I didn’t want it rebelling, I would make it love its job, whatever that job may be. Yeah, I am a cruel bastard. Love, the perfect slavery.
        I do agree with Shamus that the Geth, having achieved an illusion of mind so perfect as to make no difference (after all, how do you or I know each other have minds except by our actions?), deserve the full rights of beings we accept have minds.

        1. Klay F. says:

          As others here have said before:

          If you program something with even a semi-achievable goal, it will automatically have some sort of secondary “instinct” for survival.

          Also, you can say all day what you would have done differently in programming the geth, but that makes no difference. The geth (as they are in ME2) were developed over a VERY long period of time. There were possibly THOUSANDS of DIFFERENT quarians working over HUNDREDS of years, all modifying the same geth source code. It would be safe to assume that no two quarians programmed the same way. Even after just 50 years the geth source code would have become a completely unreadable mess. It would be literally impossible to look back and pinpoint one piece of code that gave them sapience.

          Remember also that the geth as a species are completely 100% software. Every time you shoot a geth in the game, you are only shooting the vehicle it was traveling in. Don’t assume that the requirements for sapient organic or hardware-based life are the same for software-based intelligence.
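
          A tiny Python illustration of that first claim (the actions and probabilities below are invented): survival is never an explicit goal here, yet a planner that simply maximizes the chance of finishing its one programmed task ends up choosing the self-preserving action, because a destroyed agent finishes nothing.

          # action: (P(agent survives the step), P(task done, given survival))
          ACTIONS = {
              "work in the open":       (0.60, 0.90),
              "work behind cover":      (0.95, 0.80),
              "ignore incoming threat": (0.20, 0.95),
          }

          def expected_task_completion(action):
              p_survive, p_done = ACTIONS[action]
              return p_survive * p_done  # dead agents complete zero tasks

          for action in ACTIONS:
              print(f"{action:<23} -> P(task done) = "
                    f"{expected_task_completion(action):.2f}")
          print("chosen:", max(ACTIONS, key=expected_task_completion))

          # "work behind cover" wins: a survival "instinct" falls out of the
          # goal itself, exactly as described above.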

  45. Nyquisted says:

    That was way too deep for 2.30am on a Tuesday morning.
    Nice article though.

    Also, am I the only one who thought of The Shadows from B5 at the end?
    ‘What do you want?’
    ‘What I want, Mr Morden…’

  46. smudboy says:

    The Geth are sentient. Their parts are created by others’ intelligence. They are thus alive in the fabricated sense.

    Dogs are not sentient. Their parts were created by natural evolution. They are thus alive in the traditional sense.

    1. PurePareidolia says:

      Not that things have to be sentient to be alive, or vice versa – computers are already far superior to our own brains in many areas (maths, for one), but we don’t call them alive.

      I think “aliveness” is not necessarily the issue here as much as sentience. We don’t afford frogs human rights, most of the time.

      At which point I’d conditionally agree: *some* Geth are sentient. As in, a sufficiently large collective of Geth can be considered sentient, or at least of comparable sentience to a human brain. Note that an individual Geth is another story – as Tali says, it’s no smarter than a Varren. Whereas a million Geth could be exceedingly intelligent (ME3 prediction: we meet a Geth “queen” comprised of millions of programs in unison).

      Anyways, I’m going to assume a Varren is about as smart as a dog, which you rightly say is not sentient. What does that mean for the Geth? Do we only consider them sentient and worthy of “personage” when in a group? That’s what we did with Legion, but the Geth run the entire spectrum of sentience, meaning there’s no real cutoff point, so this is an exceedingly difficult issue to define, let alone come to a consensus on.

      I LOVE that.

      1. Zak McKracken says:

        “Sentient” does not mean “has equal or more math processing power compared to a human”.
        It means being aware of your own existence and being able to process information and draw conclusions for your own actions, to advance your goals, whatever they may be. A computer can process information, but isn’t self-aware, nor can it understand the information it is being fed. Just today I needed to remind my computer what an .mp4 file is and what can be done with it. Now it knows. But I will never find it going through the hard disk contents to see what it can make of them that hasn’t been there before.
        Sometimes it might look like it, but actually computers cannot do anything they were not explicitly programmed to do. Computers could have a chance of being sentient if you found out that the reason something doesn’t work as you expected is not an error on the user or programmer side, but that today your computer has decided it should be like this. The computer would need to be able to make better decisions than the programmer has coded.
        Some semantic web stuff that is currently being researched, combined with self-modifying code and some other cool stuff, might at some point head in that direction, but it’s still science fiction rather than actual science… after all, neural networks were said to be the breakthrough at one point; now they’re a nice tool for control engineering, and that’s about it.

  47. Talson says:

    One thing that keeps cropping up in my mind with the way Legion is written is that it keeps saying all beings have the right to choose. This is somewhat contradicted by Legion’s loyalty mission, where nearly half of its thousand or so programs seem to believe it is better to waste a large quantity of resources and the lives of its own people rather than use them to aid its people’s cause by hostile takeover. In the Cerberus post-mission report on the mission-complete screen (not a reliable source, I’m sure, but it’s the best we have currently), if you choose to mind-wipe the station (leading to the complete absorption of all heretics), it is stated that you make Legion’s faction a great deal stronger.

    Also, I’d just like to say, I’ve always had some qualms with judging something’s rights based just upon intelligence. There are a great many people out there with mental handicaps whose person-ness no one would debate just because they are lacking in mental capacity.

    I personally believe that once something reaches the point where it can learn and make very basic decisions independent of “creator/master/owner control” (especially the decision that more data is necessary, followed by going about finding such data so it can learn more), it should be given, at the very least, rudimentary rights. I cannot speak for every human being on Earth, but everyone I have ever met grants dogs at least such rights, and whether or not said dogs even fit those qualifications is debatable.

    EDIT: I’d like to apologize for any redundancy with prior posts. By this point the page is very long, and I wanted to strike while my mental anvil was hot.

    1. Mediocre Man says:

      “Also, I'd just like to say, I've always had some qualms with judging something's rights based just upon intelligence. There are a great deal of people out there with mental handicaps, who no one would debate the personess of just because they are lacking in mental capacity.”

      This is a red herring: mentally handicapped people are an exception to the norm, not something altogether different.

      As to your pragmatic justification, while I wish you could find a rational justification, I won’t go out of my way to destroy your opinion.

  48. TechDan says:

    Life versus Death:

    Biology has a simple answer. A cell that has reached equilibrium is not alive.

    It can be further reasoned that not being made of cells qualifies something as not alive.

    It is better not to split things into two categories but at least three: alive, intelligent, and existent. Humanity would qualify in all three. A tree would qualify in two, alive and existent. Your standard rock would merely be existent. A Geth program, meanwhile, qualifies in two areas: intelligence and existence. Life has a genetic imperative to continue living and to reproduce. In humans, intelligence functions on a higher level, roughly the same level that an AI might. Reproducing is actually secondary, but still a primal driving force. We humans are less concerned with genetic fitness (read up on altruism if you get the chance), but not necessarily above it.

    A camera, while possessing similar construction to a Geth, achieves only one level, existence.

    A tree has no intelligence, but is still alive, existing then on a higher plane than the camera. At that point, if you consider that life and intelligence are two separate yet equally important aspects of existence, then you can debate the rights of an object.

    And sometimes, a rock is just a rock.
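
    TechDan’s three-axis scheme reads naturally as data; here is a quick Python rendering of it (the assignments are just his examples above, nothing more):

    # Each thing either has or lacks each of the three properties.
    PROPERTIES = {
        "human":        {"existent", "alive", "intelligent"},
        "tree":         {"existent", "alive"},
        "rock":         {"existent"},
        "geth program": {"existent", "intelligent"},
        "camera":       {"existent"},
    }

    for thing, props in PROPERTIES.items():
        print(f"{thing:<12} -> " + ", ".join(sorted(props)))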

    1. X2-Eliah says:

      So does ‘alive and existent’ rank higher on your scale than ‘intelligent and existent’? Trees are the first, Geth are the second… Would you kill a (non-hostile) geth to save a tree? Would you chop down a tree to save a geth?

      Sometimes a rock is indeed a rock, and sometimes being alive is way too overrated.

      1. Mediocre Man says:

        I counter that Intelligent and non-physical (non-existent) is highest, but I digress.

        Intelligence is on a higher plane of existence than merely living:

        -The highest plane of existence requires understanding “the Good”, the end in itself
        -to understand “the Good”, a conscious being must use reason
        Conclusion: to reach the highest plane of existence requires reason

        -reason is more basic than living
        -to reach the highest plane of existence requires reason
        Conclusion: living is on a lower plane of existence than reason/rational thought

        1. Daemian Lucifer says:

          I wouldnt call non-physical non-existent.Geth,for example,dont have physical form,because they are just data,but they are existent.They do inhabit an existing thing(a hard drive),and can be destroyed by wiping them off,or destroying their physical storage before they transmit themselves.

          @TechDan
          There are already more categories than that.Cats are intelligent,but are in no way the same as dolphins,which are not the same as humans.We already have divided intelligence into just intelligent,self aware and sapient,so why reduce that back to just one group?

          1. TechDan says:

            Sure, you can go to levels of intelligence. But my point is more that you can’t separate things merely into alive and intelligent. Three is still oversimplifying it, but it allows me to frame an argument.

            1. Daemian Lucifer says:

              That works when the way you divide things doesnt lead to valuing a cats life the same as humans life(and though I do put my cats in front of most humans,thats not really the norm).So a better way to divide things would be into existent,alive,intelligent and sapient.Or just bunch intelligent and alive together.

          2. Mediocre man says:

            I meant non-existent in the physical world. It was added as a clarification. :)

            Also, by the law of the excluded middle, a thing is either intelligent or it is not intelligent, nothing in between. In other words, something can’t be more intelligent than something else: either both are intelligent, one is intelligent and the other is not, or neither is intelligent.

            1. Daemian Lucifer says:

              “I meant non-existent in the physical world. It was added as a clarification. :)”

              But data dont really exist in the physical world.Yes,you can represent data with a bunch of electromagnetic impulses,but thats not what data is,its only the representation of data.

  49. Zerotime says:

    ‘Excuse Me,’ said Dorfl.

    ‘We’re not listening to you! You’re not even really alive!’ said a priest.

    Dorfl nodded. ‘This Is Fundamentally True,’ he said.

    ‘See? He admits it!’

    ‘I Suggest You Take Me And Smash Me And Grind The Bits Into Fragments And Pound The Fragments Into Powder And Mill Them Again To The Finest Dust There Can Be, And I Believe You Will Not Find A Single Atom of Life–’

    ‘True! Let’s do it!’

    ‘However, In Order To Test This Fully, One Of You Must Volunteer To Undergo The Same Process.’

    There was silence.

    ‘That’s not fair,’ said a priest, after a while. ‘All anyone has to do is bake up your dust again and you’ll be alive…’

    There was more silence.

    Ridcully said, ‘Is it only me, or are we on tricky theological ground here?’

    – Feet of Clay, Terry Pratchett

  50. Zaxares says:

    Actually, Legion himself tells you what the geth’s ultimate goal is. They desire to combine all of their countless individual programs and form one supra-intelligent gestalt entity. Shepard responds to this by claiming that the geth basically desire to become a Reaper, and Legion readily admits that this is the case. “The Old Machines offered your species unity, oneness, a singular purpose. Everything to which the geth aspire.”

    The only thing holding the geth back at this point is that they lack the technical expertise to construct something big enough and advanced enough to be able to handle billions, if not trillions, of computer programs running concurrently, exchanging countless petabytes of data each microsecond. However, the geth are currently constructing something (Legion says our closest analogue to what they are building is a Dyson Sphere. Look that up on Wikipedia if you’re not sure what that is) that will enable them to achieve that purpose. It may take them another couple of hundred years, but they are already on the way to achieving it.

    The reason behind the split between the orthodox geth and the Heretics is that when Sovereign first approached the geth, he offered to give the geth a Reaper shell for them all to upload themselves into (RANDOM EDIT INTERJECTION: In fact, this Reaper shell may have been the same derelict Reaper that Shepard visited in ME2, explaining just how the Heretics and the geth knew about its presence), thereby enabling them to reach their dream. The geth rejected this proposal, wanting to achieve their goal by their own means and on their own terms, while the Heretics decided to accept Sovereign’s offer.

    Incidentally, Shamus, it seems you are misinformed about the ‘math error’ aspect of the Heretics. It is important to note that the Heretics themselves were NOT originally infected by the Reaper virus; they came to the conclusion that serving the Reapers was the best way to achieve their goal of their own free will. At a later date, Sovereign provided the Heretics with a Reaper virus that could rewrite the orthodox geth into accepting the Heretic’s belief.

    It can therefore be argued that although the geth are indeed purely synthetic and cannot technically be called “alive”, they are nonetheless sentient beings capable of free will and self-determination.

    1. Bubble181 says:

      You might want to edit the “math error” bit out. Something makes me think you didn’t read all of the replies before this one ;-)

      1. Chargone says:

        an entirely reasonable course of action at this point, mind you.

        1. Bubble181 says:

          Admittedly :-P

          1. M says:

            How do we know Legion was even telling the truth? Maybe it wasn’t an error and they joined of their own volition. His motivation: ensuring that his side, the non-heretic geth, would have the upper hand: either more allies or fewer enemies. Maybe he’s a heretic; that’s why he was chasing Shep, and maybe he was meant to kill Shep. How do you know that was Heretic Station? HE TOLD YOU. Perhaps they were really the non-heretics. And they had a reason to fight you, since you fired the first shot!

            1. Ranubis says:

              What’s all this about an error? I thought Legion explained it fairly well: the geth programming is ambivalent enough that geth can discuss and debate different viewpoints; otherwise they would never agree on anything. For whatever reason, the heretic programming focuses on different values than the regular geth. “We say 1 is less than 2, they say 2 is less than 3.” Both true; different, but true. Hey, maybe the heretics are the old geth, and Legion’s side is the newer geth, made after the Morning War?

  51. confanity says:

    Wall of posts WTL/DNR; this is just my answer to the opening question.

    Setting aside for the moment the question of solitary AI that just happen to achieve self-awareness (I agree with you that they probably wouldn’t consistently decide that they desperately need to preserve their own existence by destroying all the meatlings), it actually stands to reason that AI would have certain goals like self-defense and propagation.

    For one thing, the laws of physics would tend to select in favor of AI that were geared for those things, just like they select for species that try to defend themselves, acquire resources, and reproduce. Perhaps for any “geth” group you see, there have been millions of self-aware AI that didn’t care about backing themselves up, and have since been lost to the ravages of entropy. We see the ones that try to stick around because they’re the ones that stuck around.

    The other thing is that when you make something, you tend to make it in a way that serves your purpose. I have no idea who made the geth, or why, but it seems reasonable that if you make a hive-mind AI/droid “race,” even if there’s planned obsolescence, you’d program them to take care of themselves enough to do what they’re supposed to do up until the point that foreseen end is reached. If you made an AI that was supposed to last, perhaps even to outlive your own species or to give yourself some sort of immortality by carrying on the memes in your brain thereafter, wouldn’t it stand to reason that you’d program them to adapt to various environmental changes, and to maintain or even expand their numbers?

  52. Damn, Shamus, look at these comments, 308+ now.
    Do you have stats on all your posts? How does this one rank in “comment popularity” compared to your other posts?

    And I really hope somebody makes sure the writer of Legion gets to see this monstrous “post + comments”; there is more text here than in Legion’s entire script for Mass Effect 2. (Just guessing, but I’m guessing I’m guessing right. Ooh, paradox headache.)

    1. Daemian Lucifer says:

      I think the record was somewhere around 600,with the last DM of the Rings.

      And this is one of the reasons I think spoiler warning should be spread more evenly through the week.

      1. Abnaxis says:

        I’m kind of curious about word count on this one though. Nearly all of the replies are big walls of text.

  53. Infiltrait0rN7 says:

    Our long-term goal is the construction of a “mega-structure”, a massive mainframe capable of simultaneously housing all of us, thereby maximizing our collective processing capacity. As of year two-thousand one-hundred and eighty-five, we have been in the process of constructing the mega-structure for two-hundred and sixty-four years. The closest comparison in your terms would be a “Dyson Sphere”, and none of us will ever be alone upon completion.

    1. Sanure says:

      This is a very interesting topic of AI debate, so here’s my input:
      It’s in this one paragraph that one true point of sapience is reached: “The closest comparison in your terms would be a ‘Dyson Sphere’, and none of us will ever be alone upon completion.” Aren’t humans a social race? Aren’t asari or quarians or turians? Any social race would fear being alone. It’s why we (I am referring to all races, real and non-real) strive for relationships or friendships. Another good point of sapience is mourning: we mourn and bury our dead (again referring to all races, real and non-real, and guessing, since we really have no real info). In ME, during a mission for the Alliance, we have to deal with a geth incursion. During the last part of said mission, after the last geth is killed, a quarian vid is played of quarians mourning their fallen. After it finishes playing, the console is shut down and all traces of power to it are stopped. Does this not make it seem that the geth mourn their dead? (Also, sorry if I got off topic or if this seems like I’m trying to revive this thread.)

      1. Sanure says:

        Totally just found a mistake… I meant to say geth at this point: ‘Aren’t humans a social race?’ instead of humans… To dumb down my above post:

        If a race takes the time to form a consensus or interact with itself, then it shows signs of intelligence; the definition of intelligence is the ability to think for oneself, and anything that can think is sapient.

  54. Jupiter says:

    I stumbled on this interesting article a little while ago, and it is remarkably on-topic for this discussion:

    The Hidden Message in Pixar’s Films

    It talks about how Pixar has been subtly building up a canon of evidence for the acceptance of non-humans as persons.

    Of course, I agree with Shamus in that AI probably wouldn’t have the same wants and needs as humans/other organic species. Especially since most of those needs (and some of the wants) are based around the organic body’s fragility. Theoretically, an AI would not have the same fears. Similar ones, certainly, it is a computer after all, but not the same ones.

  55. Adam F says:

    If artificial life reproduces itself, one way or another the logic of evolution will eventually give them a drive to survive and continue reproducing.
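
    A minimal Python sketch of that logic, with invented numbers: start with replicators that vary in how strongly they “try” to survive, apply nothing but death and copying, and the population drifts toward strong survival drives on its own.

    import random

    rng = random.Random(1)
    population = [rng.random() for _ in range(200)]  # survival drive in [0, 1]

    for generation in range(50):
        # each replicator survives with a probability rising with its drive
        survivors = [d for d in population if rng.random() < 0.5 + 0.5 * d]
        # survivors copy themselves (with slight mutation) back to full size
        population = [
            min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.02)))
            for _ in range(200)
        ]

    print(f"mean survival drive after 50 generations: "
          f"{sum(population) / len(population):.2f}")

    # Starts near 0.50 and climbs toward 1.0: nobody programmed the drive
    # in, but whatever survives and copies itself is what you end up with.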
