AI, CPU Cycles, and Twitter

 By Shamus Apr 7, 2013 154 comments


This post could also be titled, “Much Ado About Nothing”. It began as an ill-advised Twitter tirade, and turned into a confusing argument in which pretty much everyone was in misunderstood agreement. If the medium is the message, then the message of Twitter is that nobody has the time to communicate clearly.

The dispute is all settled down now, but it’s still nagging at me and so I want to put everything here as an official record of the foolishness. It begins with this preview of the upcoming Thief game, where we learn that:

As I learned straight from the developers that showed us the game, the artificial intelligence in Thief is something that is only possible on next-gen hardware. Whereas in past games, a soldier might react in an “if, then” manner – for example, if a glass breaks, then walk towards the sound – your enemies in Thief won’t be so predictable. Say you snuff out a lantern and attract the attention of a guard. At first, the enemy may simply mumble to himself that it must have been the wind, but should you do it again, the chances of the guard growing suspicious heightens. Do it a third time and he may launch an all-out sweep of the area, and each time is different.

Emphasis mine.


This assertion is so monumentally preposterous I still get annoyed when I read it. Saying we needed next-gen hardware to do if, then, else is like saying that getting groceries is only possible with the new Chevy f-10 pickup. I understand that you want to sell the game and the platform, and I’m willing to entertain a little hyperbole when it comes to marketing, but this isn’t just an exaggeration. This isn’t a simple lie. It’s making the public more ignorant about programming in general and then taking advantage of the confusion to tell them a lie.

Annoyed, I took to Twitter:

Then, still annoyed, I wrote another. And this one is what caused the trouble:

Now, this was a continuation of the previous thought: The CPU cycles are there and we’ve been capable of much better AI for ages, but AI takes resources to develop and it’s not a selling point in AAA games. It’s not that AI can’t sell, or that better AI wouldn’t make the game better, but it’s not usually part of the marketing and not often demanded by the audience. What we need is a change in attitude, not more processing power.

The last time I got really excited about AI in a shooter was back in 2007 when I played the original FEAR. (I also played the sequel, where they abandoned the smart, unpredictable and dynamic enemies for the bog-standard target dummies we get in most shooters. Not only is AI not getting appreciably better in AAA shooters, but it might even be regressing in some cases.)

This tweet was repeated a few times, people responded to it, and it generally spread around until it reached people who didn’t have the earlier context. Taken alone, you might assume I was saying:

AI is too difficult to write or test, and it doesn’t matter to the consumer. The better graphics of the next generation of machines will make AI itself fundamentally more difficult to write.

Of course, that interpretation didn’t occur to me. Over the next few days three different AI programmers responded to my comment as if I was making crazy talk. I tried to understand, but at only 140 characters it’s really hard to express, support, and clarify a nuanced argument about the intersection of software and business.

The irony here is that we were all basically in agreement on the essentials. From my point of view it looked like I (a former graphics programmer) was saying AI could be better if people were willing to invest in it, and actual AI programmers were disputing me. I got so exasperated I ended up blocking one of them. (It’s rude, but if you’re tempted to express anger in a public place it’s best to just mute the conversation and walk away. Maybe come back later.)

You know what was awesome? The first two Thief games. They were awesome.

Once we untangled it we parted amicably.

And just another blanket apology for anyone who missed it: Sorry for being crabby.

With that out of the way, let’s loop back and look at the stuff they’re talking about in Thief.

Whereas in past games, a soldier might react in an “if, then” manner – for example, if a glass breaks, then walk towards the sound – your enemies in Thief won’t be so predictable. Say you snuff out a lantern and attract the attention of a guard. At first, the enemy may simply mumble to himself that it must have been the wind, but should you do it again, the chances of the guard growing suspicious heightens. Do it a third time and he may launch an all-out sweep of the area, and each time is different.

Now, I am not an AI programmer. And I’m sure that since this was said in the context of a press demo, everything was greatly simplified for public consumption. But what is described here is pretty simple stuff. We’ve been able to handle this type of thing for literally decades. Even a terrible, stupid, shallow AI from a strategy game a decade ago is going to be orders of magnitude more sophisticated than this. You’ve basically got a “paranoia” value that goes up when suspicious things happen, and the higher it is the greater the chance that the guard will begin searching for the player. That’s really cool and I’m eager to see it in action, but it’s not something that’s going to tax your hardware.
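To show just how little hardware this mechanic actually needs, here's a hypothetical sketch in Python. Nothing here is real Thief code — the class, the method, and the reaction strings are all mine — but it captures the entire behavior the preview describes: one "paranoia" number that rises with each suspicious event, and a search that becomes more likely the higher it gets.

```python
import random

class Guard:
    """Hypothetical sketch of the 'paranoia value' idea -- not actual Thief code."""

    def __init__(self):
        self.paranoia = 0.0  # rises with each suspicious event, capped at 1.0

    def notice_disturbance(self, severity=0.25):
        """Snuffed lantern, broken glass, etc. Returns the guard's reaction."""
        self.paranoia = min(1.0, self.paranoia + severity)
        # The higher the paranoia, the more likely a full search.
        if random.random() < self.paranoia:
            return "sweep the area"
        return "mutter that it must have been the wind"
```

That's the whole "next-gen" mechanic: one float and one comparison per disturbance. Hardware from twenty years ago wouldn't have noticed the load.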

You know what was awesome? The first two Thief games. They were awesome.

Again, compare this to the complexity of (say) an AI in Civilization V that’s trying to decide how to allocate its resources towards infrastructure, military, research, and exploration. Or even just deciding which city to make a priority. Or which foe poses the greatest threat. Even in Thief, the decision about where to look and when to stop looking is going to be far more complex than the decision to begin the search in the first place. Thief may or may not have good AI – we won’t know until it’s released – but the thing they’re holding up as an example of AI sophistication is probably the simplest part of the entire system. It’s like saying Albert Einstein was a genius because he could remember where his house was.

But one thing I want to point out about this specific example of guards becoming aware of the player: A lot of the expense of a system like this is not in the AI coding, but in the art assets. It’s not enough for the AI coder to make some code that says, “If you hear a sound, path to the source of the noise and investigate.” Imagine how it would look for a guard to walk towards the source of a noise using his regular animations. He walks head up, eyes forward, arms passive at his sides. Even if the programmer has the guard in a super-alert searching state, we can’t see that as players. Instead, this search makes the guard seem even dumber than if he’d just ignored the sound entirely.

You know what was awesome? The first two Thief games. They were awesome.

What you want is a system with different levels of visible awareness. If the Thief AI developers decided they want four different levels of awareness from oblivious to alert, those levels won’t enhance the user’s experience unless the user can see that these different states exist. Maybe the guard begins slack-jawed, bored, and mumbling to himself. If he’s spooked he starts looking around and paying attention. If he gets suspicious he might begin searching carefully, scanning the room with his hand on the hilt of his sword. And if he’s detected you he’s going to be pissed off (or terrified, with bonus points if the system has both like in Batman: Arkham Asylum) and walking around with his sword at the ready. Each state needs distinct animations and dialog.
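To make the split between code cost and content cost concrete, here's a minimal sketch of those four awareness levels. Everything here is a placeholder I invented for illustration — the asset names are not real Thief assets — but notice where the weight sits: each state is a single integer in code, while each entry in the table is a separate batch of animations and recorded dialog.

```python
from enum import IntEnum

class Awareness(IntEnum):
    OBLIVIOUS = 0   # slack-jawed, bored, mumbling to himself
    SPOOKED = 1     # looking around, paying attention
    SUSPICIOUS = 2  # careful search, hand on the hilt of his sword
    ALERTED = 3     # sword at the ready, pissed off (or terrified)

# The code side of a state is one integer. The content side is a pile of
# assets per state -- and that pile multiplies across every enemy type.
REQUIRED_ASSETS = {
    Awareness.OBLIVIOUS:  ("anim_slouch_idle", "vo_bored_mumbling"),
    Awareness.SPOOKED:    ("anim_look_around", "vo_must_be_the_wind"),
    Awareness.SUSPICIOUS: ("anim_careful_search", "vo_whos_there"),
    Awareness.ALERTED:    ("anim_sword_ready", "vo_come_out_thief"),
}

def escalate(state: Awareness) -> Awareness:
    """One more suspicious event bumps the guard up one level."""
    return Awareness(min(state + 1, Awareness.ALERTED))
```

Four states times N enemy types means 4×N sets of animation and chatter. The multiplication happens in the art budget, not the CPU budget.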

This work ends up getting multiplied across all enemy types. So if we’ve got four states of awareness, then we need four sets of chatter for every enemy in the game.

It’s possible for the AI coder to have a guard share his awareness level with other guards. (If guard A sees guard B in an alarmed state, assume the same state.) But it’s much harder to do this in a way that will make sense to the player and really convey to them the depth of what is going on. You don’t want them to see all the guards in the room magically adopt the same posture like they’re a hive-mind. If you want the player to understand the guards are communicating, then the guards need to visibly and audibly communicate with hand gestures and words. A lot of the expense isn’t in making the foes smart, but in showing that they’re merely smart and not omniscient. Again, this leads to a multiplication of animation and voice assets.
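Here's a hypothetical sketch of that hand-off (all names invented for illustration). The point is the asymmetry: raising a colleague's awareness is one assignment, but the gestures and barks the player needs to see in order to understand the hand-off are more animation and voice assets.

```python
from dataclasses import dataclass

@dataclass
class Npc:
    name: str
    awareness: int = 0  # 0 = oblivious ... 3 = fully alerted

def share_alarm(alarmed, witnesses):
    """Propagate one guard's awareness to guards who can see him,
    queueing the visible/audible communication the player needs."""
    events = []
    for other in witnesses:
        if other.awareness < alarmed.awareness:
            # The cheap part: one assignment.
            other.awareness = alarmed.awareness
            # The expensive part: assets that *show* the hand-off,
            # so the guards don't look like a hive-mind.
            events.append(f"{alarmed.name} gestures toward the noise")
            events.append(f"{other.name} nods and draws his sword")
    return events
```

Without those two event lines — the visible communication — the state transfer still works, but to the player it looks like telepathy.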

You know what was awesome? The first two Thief games. They were awesome.

All of this becomes that much harder to pull off when you move to a new set of hardware. Marketing has been telling people the graphics on this new device will be worth their $400, and the audience is showing up expecting to be amazed. So now all the guards have to be rendered in super-realistic detail. This means the animations are more complex to make, since now we need all those drapey-cloaks and locks of long hair to move and sway realistically. Oh, and at this level of detail it won’t do to have the voices coming out of inert heads, so all those lines of dialog will also need to be lip-synced.

With graphics this photorealistic it would be goofy to have all the guards look like an army of clone troopers, so we need to make different models. And if we’re going to have different looking guards then it won’t do to have them all speaking in the same voice. So now we need multiple voices for multiple models which all must be rigged and animated with many states of awareness and have dialog for changing states, sharing states, and while idling in a particular state, and all of that needs to be lip-synced, and oh my gosh this game is costing us how much to produce?

Again, Thief could be a great game. It might even have mind-blowing, groundbreaking AI. Eidos did Deus Ex: Human Revolution and I was a fan of that game. But this initial round of press does not inspire confidence and their claim that the new hardware will open up new frontiers in AI is either foolishness or sophistry.




  1. Moriarty says:

    I’ve been following the conversation on twitter, hopefully some of the other developers show up, it would be neat to hear more from an AI-devs perspective.

  2. Andy L says:

    I agree with most of this article, but I’m not sure about this statement at all.

    We’ve had the CPU cycles for at LEAST a decade.

    Some forms of AI are hard. Some forms of AI that would be very useful in video games just can’t be done on hardware that exists outside research labs.

    Even just simple forms of AI like min-max search trees, or basic pathfinding can require massive CPU power depending on the design of the game, and the options available to the AI. Those things can be simplified, but it’s usually obvious to the player that you’ve done so.

  3. Dave Mark says:

    As one of the AI devs that accosted you on Twitter, I thank you for putting together a longer version of your thoughts. I don’t necessarily have a lot to add other than my own #facepalm at the marketing speak.

    You are correct, however, that the appearance of good AI is more than simply better code. As an AI programmer, I can give you near infinite shades of personality, mood, emotion, etc. — but it doesn’t do anyone any good unless they show up on screen or through your speakers. This, of course, is a content generation problem. Some of this will gradually be remedied as we embrace more procedurally generated (or at least tweaked) animations. That way, we don’t have to hand model so many different postures, etc.

    The other issue that is putting up a wall on game AI is the industry endemic resistance of designers to embrace dynamic content. Breaking away from rigidly scripted AI is antithetical to the authorial control that so many designers cling to. Even designers that are comfortable with allowing a more free-flowing experience (e.g. Harvey Smith and Raf Colantonio of Dishonored fame) are still often wary of taking the reins off of AI entirely in a “Sims” sort of way.

    This conflict of visions makes for some unfortunate deadlocks where designers scream “we want better AI” and subsequently “… but don’t let it do anything I don’t want it to do!”

    Anyway, AI is difficult to write, for sure. However, we are making lots of progress and can do some truly spectacular things — if designers (and producers) would only let us.

    • Nyctef says:

One way to cheat the content generation problem seems to be showing some of this internal state in the UI. I remember some games (unfortunately I forget which) where you could actually see a rising paranoia meter displayed above the guards’ heads – and see it transfer from person to person if it filled up. Dishonored did something similar with the UI bolt thingies, although those showed discrete levels of alertness. DX:HR and Splinter Cell: Conviction both had “last known position” markers which you could use to predict what enemies would do without them needing to indicate it with advanced animations and acting. Mark of the Ninja is another game I remember that seemed to do this really well.

      Of course, it’s a lot more effective if the game doesn’t need to resort to such tricks — when it works.

      • kdansky says:

        The Sims also does this very explicitly, and it skimps heavily on the dialogue for the exact same reason: Insane amounts would have been required, and it would still have made very little sense. So they just mumble garbage instead, and that makes all the difference.

        Our human brains can see through repetitive patterns very quickly, and that is why lines like “arrow to the knee” really fail when repeated. I’m still shocked that Bethesda doesn’t realize this.

Often, more realistic graphics and audio result in a far inferior play experience because it runs into content limits quicker. Imagine PSTorment with completely voiced characters: Surely, we would have gotten ten times less text, with two to three choices like Mass Effect, and not the twenty answers to a single question that the game sometimes throws at you.

    • rayen says:

The ironic part is that once a game is released to the public, the players (some of them anyway) are going to do everything possible to take that designer’s precious authorial control away anyway. I mean, YouTube is chock full of players making NPCs behave in weird irrational ways and glitching games. Play DnD and see how long it takes for someone to seriously start rebelling against the DM. (Spoiler: five minutes at most. Three if Mountain Dew is involved.)

      • Volfram says:

I actually have a friend who, when he’s DM, says “I have a story I want to tell. It’s not my fault if you can’t follow along.”

        I don’t play any games he DMs anymore.

      • Lifestealer says:

Not DnD, but other TT RPGs I’ve run don’t have the players rebelling against me as the DM… although that might be because I work on the principle that players will not cooperate if I force a story, so I just have the first scene planned, and then improvise and try to tell a story the players want to hear…

  4. broken_research says:

    Again, Thief could be a great game. It might even have mind-blowing, groundbreaking AI. Eidos did Deus Ex: Human Revolution and I was a fan of that game. But this initial round of press does not inspire confidence and their claim that the new hardware will open up new frontiers in AI is either foolishness or sophistry.

    Err, Shamus. The bolded statement doesn’t really follow from the rest of your article. The entire article is spent on the importance of visible AI sophistication that nicely undermines the marketing quotes that started this, but when it comes to Thief, why specifically are you less than confident on the new Thief? I have my own doubts, but I’m interested in your reasons.

    This may require a different venue than the comments section of this post, that’s up to you. I just found that the bolded part came out of left field.

    • HiEv says:

      He’s saying that because, “This assertion [that the artificial intelligence in Thief is something that is only possible on next-gen hardware based on their "alertness" example] is so monumentally preposterous I still get annoyed when I read it. Saying we needed next-gen hardware to do if, then, else is like saying that getting groceries is only possible with the new Chevy f-10 pickup. I understand that you want to sell the game and the platform, and I’m willing to entertain a little hyperbole when it comes to marketing, but this isn’t just an exaggeration. This isn’t a simple lie. It’s making the public more ignorant about programming in general and then taking advantage of the confusion to tell them a lie.”

      In other words, if they have to lie to make their product sound good, that probably doesn’t bode well for the product.

    • Shamus says:

Yeah, that’s a bit of a tangent on my part. Basically, this one thing about AI is part of a larger marketing push, and nearly everything that’s been said is making people worry. Dropping Stephen Russell as the voice of the protagonist, the move to extravagant in-game cutscenes, this bit about AI, the focus on “bone breaking” melee moves, no mention of key aspects of the Thief setting like Hammerites and Pagans…

      So, this odd press announcement is a small part of a larger whole. I actually have an upcoming column on this, so I’ll get into the details later.

      • broken_research says:

        Excellent, I’ll be looking forward to it. Have you read the RPShotgun preview? It’s alleviated some of my concerns, not all of them by far, but it’s well written.

        My biggest personal fear is that, the Thief series is extremely film noir in setting and story. I’ve not seen one dev quote that addresses that. That, beyond Stephen Russell, taffers and all the other good stuff, makes me suspicious.

      • BlackBloc says:

My biggest red flag about Thief is the fact it’s been in development 7 years and, according to the interviews in Game Informer, 3-4 years of those were basically a gigantic experimentation phase where people were trying all sorts of stuff and a lot of it just got dumped. Now personally I’m all for having artists try out experimental stuff, but Eidos is a company, not an artistic collective, and my immediate thought here is “Oh god, they must at some point have found a new person to head this project and given him the mandate to get it out fast and with as little waste as possible so that it will stop hemorrhaging money, and it’s going to end up being a rush job.”

        • Eric says:

I’m of the opinion that the game looks awful (except visually) and that the people creating it either have no understanding of what makes Thief popular or are basically stuck having to make it hit a much wider audience than it would if it stayed true to the originals.

          1) Lack of darkness in environments (everything is green/blue instead of black)

          2) Third-person animations for various actions even though they are actually more limiting than first-person

          3) Strong possibility of Awesome Button takedowns

          4) Suggestion that there will be a “more emotional” storyline which is almost certainly going to be hackneyed nonsense

          5) Garret’s design done by the same guy who made Altair for Assassin’s Creed

          6) What seems like an obvious push towards stealth action rather than pure stealth

          7) Stuff like “focus mode” that basically rips off Detective Mode from Arkham Asylum and other hand-holding

          8) Previews indicate the game is going for a “dark and edgy” attitude complete with more pointless f-bombs, prostitutes etc.

          Suffice it to say I am very, very concerned about this game being in any way a worthy successor to the original Thief games and considering how much similarity we see in the title right now to Dishonored, it’s not surprising they want to capitalize on that title’s success.

          The PR campaign so far for the game has been schizophrenic at best, with tons of backpedaling and inconclusive answers from the devs. We get these BS statements about AI like what Shamus ranted about, we get assurances the game will be both accessible and hardcore, both fun for action-oriented players and people who want pure stealth, etc. Then every few days we hear new info that further alienates the original Thief fanbase, raising the question of why Eidos are even making a Thief game if their goal is not to appeal to that original audience to begin with.

          • Ciennas says:

            It could be great though.

            Not like the thief series though. No, don’t ask me why they felt the need to put an old title on an unrelated game that merely resembles an older one cosmetically.

            But it could be great; they just have to make sure that, unlike Deus Ex HR, pure stealth and skill will get you through from start to finish. Because otherwise it’s basically an extended game of the DB questchain from Oblivion- which was a lot of fun, but shouldn’t that be somebody else’s bag to make?

            I don’t like this trend in the entertainment industry right now. EVERY field is gutting the old stuff to remake it for a new crowd.

            Oh well. They just have to make sure that the player can solve problems however they like, rather than be cutscened into a tight corner, or forced to make the stupid or out of character decision to forward the plot.

            Also, remembering that the whole point of the original games was to not be a combat monster.

          • kdansky says:

            It’s only lacking “new, re-imagined protagonist who is more edgy and in touch with the younger crowd.” in that list, like with “DMC: Devil May Cry” (actual title). That was such a disaster.

          • Volfram says:

            I’m curious, if you could, what’s wrong with Altair’s character design in Assassin’s Creed? I’m still learning about good character design.

            His outfit was at least better than Ezio’s. Altair is wearing something that looks light enough to do all the acrobatics and jumps in. Ezio looks like he’s wearing 150 pounds of tight leather. I get exhausted just thinking about it, to the point that I bought the Altair skin in Assassin’s Creed: Brotherhood and wear it exclusively.

            (and that cape! Would RUIN your center of gravity! ARGH!)

          • harborpirate says:

            I wish to give you all of my internet points at this time.

            Your first sentence nails that creeping feeling that I’ve been getting about this game. Namely, that these designers simply have no idea what made Thief a great game.

            I listened to the Game Informer interview with a couple members of the Thief 4 team, and by the time it was finished, I was completely disillusioned with the idea of a remake.

            Their conclusions about why Thief was great:
            It has a really interesting narrative.
            Garret is a really cool character.

            That’s it.

No mention of how the AI hit a careful balance of “not so realistic that it’s too hard” and “not so gameified that it breaks immersion”. No mention of how the guard states being easy to read made the stealth mechanic truly playable. No mention of how guards constantly talking filled in atmosphere and plot details as well as helped to make their state clear. No mention of how the setting, which was somewhat novel at the time, is still an under-exploited genre in the AAA arena. No mention of any of the first 10 things that would have come to mind if I was discussing it. The closest they came was confirming that you could beat the game without killing anyone.

            My conclusion: The designers of this version saw what they wanted to see in the original game. They saw it through the myopic filter that constitutes modern AAA development; where “dark and gritty”, “immersive narrative”, and “cool characters” are the only things that resonate.

            To Modern AAA Game Designer Guy, everything is an action game, and action games with detailed stories are all the market wants. Shooter mechanics are the hammer, and everything looks like a nail.

            I just wish they’d stop trying to remake stuff if they’re not even going to try to understand the original. The way these guys talk about Thief, it is very hard to believe that they ever even played it. It seems more like they looked at some screenshots and read a plot synopsis on wikipedia, then assumed they knew everything they needed to know.

            We need more designers like Jake Solomon. We need people that really understand WHY the classics are classics leading these remake projects. That passion to take a shot at sticking with the core mechanics and trying to innovate upon the original ideas seems all too uncommon.

      • Neko says:

NO STEPHEN RUSSELL?!?! I had not heard this. This is worrying indeed.

        • Even says:

          Tell me about it. Of all the possible things to change, it’s one of the worst. I get it if they just want to make their “own game”, but this game is looking so far divorced from the things I liked in the earlier games, that they should just as well give it a new title and rename the protagonist.

          • broken_research says:

            the reason is more stupid: they didn’t use russell because they have decided that it is vital to use mocap technology, and russell is too old for the stunts they need him to do.

            Yeah, it’s that stupid.

            • Naota says:

              Now this might be a wild idea… but hear me out.

Garret is a human, right? Stephen Russell is a human with an amazing voice. There are presumably other humans working at Eidos Montreal, right?

So get this – we have Stephen Russell do Garret’s voice and facial expressions… and have somebody else do the motion capture for his stunts. Wild, isn’t it? He’ll sound like Stephen Russell but move like a stuntman!

              I am a GENIUS.

              • Ciennas says:

                You sir, are BRILLIANT. Of course that idea would work, Which is why you are most definitely NOT in charge of the project- we can’t let people who knew about the old games and cared about them even a little be anywhere near this.

                It might alienate the new people who don’t even know it exists, and your excitement doesn’t draw as much attention to the project as outrage.

                (At least, that’s how it looks. Maybe there’s another reason for not hiring him, and the real reason is way dumber.)

              • Steve C says:

                That sir is crazy talk! Next you’ll want Anakin Skywalker to be lip synced.

              • Nick-B says:

                They are doing the same thing that they did in the new Splinter Cell, in that they feel you cannot – as someone watching it – get attached to the character if it’s not the same person doing the voice as it is the one driving the body (the mocap actor). For some reason, despite in, like, EVERY game before mocap, we have had believable characters where the voice actor wasn’t also the guy in the bright green suit with tiny balls strapped to the outside.

                I argue STRONGLY against the idea that the voice actor and the motion actor HAVE to be the same damn person. There’s a reason voice actors specialize in voice acting, and are not also action stars. You may get the occasional person that can do both, but it isn’t an excuse for video games, in which you can EASILY swap in another’s voice on top of someone’s body.

                • Ciennas says:

                  Agreed. It’s very silly.

                  Maybe they’re inspired by Avatar. That worked out great for James Cameron, and made millions of dollars.

                  Or they’re using the exact same system. Would explain the silliness.

                  (Or they’re trying to desperately reinvent a franchise. Dunno why, though.)

        • Obligatory link to the petition:

          https://www.change.org/petitions/eidos-montreal-square-enix-bring-back-stephen-russell-for-thief-4

          It worked to get David Bateson back for Hitman: Absolution. Not that it was enough to save that game, of course…

      • Decius says:

        If they make Garret capable of swordfighting two guards at once and winning, I’m going to call shenanigans. Garret has never been a swordfighter.

        The first thief (drink) had a great level where the guards would sound an alarm, summoning more guards who would search a large area; the perfect implementation would be guards that did a systematic search around a localized alert and secured areas with more people using mutual coverage as they discover their comrades dead or unconscious. I don’t think we have the processing power now to implement such smart decision-making in arbitrary cases, but we don’t have to: We only have to implement smart AI in the well-described fully-known cases of the levels we create. We can assume that the forces arrayed against the player have standing orders and a level of knowledge about their surroundings, and assign them locations and group behaviors appropriately, including seeking out other NPCs and/or noticing when someone that they should expect to see is missing.

        Returning to a patrol route alone and relaxed is not an appropriate behavior after finding a dead coworker, no matter how loud the wind is.

  5. Thomas says:

I don’t know if the marketing even made the game sound more attractive for someone who’s clueless about the supposed difficulty of it. Your description of Arkham Asylum-esque guards becoming more and more paranoid sounds interesting, because that buys into the narrative idea of a sneaking superhero scaring the pants off clueless guards.

But the way the marketing puts it, they’re making everything more unpredictable. Stealth games already require a lot of patience and timing, with harsh failure penalties. It felt like they were hyping up that it was going to be even harder, and this time it won’t even feel like your fault when you lose.

    Predictable interesting AI as part of the gamepuzzle sounds more attractive. Smart AI might win =D

    • Nordicus says:

I agree with Shamus that the reveals of the Thief reboot have evoked either no feelings or strong worry.

I want one, just one detail about the new game that’ll show that Eidos has been trying hard to innovate on the “pure stealth” gameplay style rather than just make changes and additions to please those who probably don’t like stealth games to begin with. And not just empty marketing bullet points that are impossible to prove true or false until the game’s been out for a week.

      Like, I don’t know, create several different “psychological profiles” for guards based on their job experience, loyalty, sleepiness, etc., and then give them subtle animations, dialog and/or character model details to reflect it? After telling the player that distinctly different guards exist and explaining how they’re different, Thief could dive into being a deep stealth game that requires mind games from time to time.

      Now, I know that example was just wishful thinking, but c’mon Eidos, give us something to dull the pain of having the new Garrett be modeled after The Crow and Dark Knight’s Joker!

  6. Infinitron says:

    Shamus: What I suspect is that this has less to do with CPU and more with memory.

The current generation of consoles has SO little memory that it might actually prevent them from managing the state of sophisticated AI routines.

    • HiEv says:

As Shamus pointed out, the art for a single NPC almost certainly takes up thousands of times more memory than the AI for that same NPC. As far as the AI’s memory goes, it’s generally just “walk this path and have this handful of states,” and the rest falls out of the code for how to handle those states. So the state for each AI could probably be stored in a few dozen bytes, while the graphics for that NPC will be multiple kilobytes or possibly even megabytes.

      In many cases you could probably double the enemy intelligence and still be using less than 1% of the memory for that. CPU is a far larger issue than memory when it comes to AI.
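To put rough numbers on that, here’s a sketch with a hypothetical packed guard state (the fields are illustrative, not any real engine’s layout):

```python
import struct

# Hypothetical per-guard AI state, packed tightly: position (3 floats),
# facing (1 float), behavior state and suspicion (1 byte each),
# patrol waypoint index (1 short), reaction timer (1 float).
GUARD_STATE = struct.Struct("<4f 2B H f")

# One 1024x1024 uncompressed RGBA texture for that same guard:
TEXTURE_BYTES = 1024 * 1024 * 4

print(GUARD_STATE.size)                    # a few dozen bytes of AI state
print(TEXTURE_BYTES // GUARD_STATE.size)   # the art dwarfs the AI state
```

Even with generous padding, the texture outweighs the AI state by a factor in the hundreds of thousands.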

      • Infinitron says:

        You’re probably right, but they’re pushing the console hardware so strongly these days for better graphics that who really knows? Maybe there really aren’t any bits to spare for AI state.

    • Volfram says:

      Batman: Arkham Asylum and Arkham City are both on the Xbox 360 and Playstation 3, and they have fairly sophisticated AI. Mark of the Ninja is an XBLA game, and it demonstrates AI that is in many ways smarter than the hypothetical guard example given.

  7. Lalaland says:

I too was very impressed with the AI in FEAR; it made the game a real challenge, and it’s one of the few games where “peek-a-boo” isn’t the go-to strategy. The crappiness of the AI in the sequels meant that the key USP was gone, and I was left with another dark-corridor shooter with an awkwardly shoehorned sequel plot.

I’ve always felt that this could be alleviated in a fantasy or sci-fi setting by having different tones of gibberish denote different mood states. Something like “Simlish” (if it worked that way) is relatively low cost and would allow for a much smaller voice budget; combine it with a non-verbal cue like flushing skin pigmentation and it might reduce costs further. Of course, it wouldn’t help in any game where you fight humans.

    It’s been a while since we’ve had games that push AI as a feature, I do hope it’s not just more bluster.

    • StashAugustine says:

      FEAR 2 is the first game I remember being really, really disappointed in. The first one is probably my favorite pure FPS of all time, and the sequels were really, really bad.

    • Neko says:

      They could take the Borderlands 1 approach of having most NPCs wearing some sort of face-covering mask or helmet. That’d save on lip sync and most facial animations, at least.

      Of course, if the guards are wearing helmets that cover the mouth, they more than likely cover that sweet spot at the back of the head, too…

      And surely there’s some middleware out there to handle skeletal animations for somewhat-varied body types, and varied clothing. Elder Scrolls games do that pretty well considering all the possible body / outfit permutations.

      And frankly, aren’t we going to be in dark shadows most of the time anyway? Higher poly count is not something that strikes me as important in a Thief game. Better, smoother shadows and lighting and bigger environments, that’s what you can spend your HD budget on. Please. If anyone from Eidos is reading?

  8. Andy L says:

    Actually, I think a big part of the game industry’s problem with AI is cultural, sad to say.

    (Perhaps someone who’s worked on a AAA title can set me straight.)

A lot of really good AI techniques exist in textbooks, but they’re rarely seen in games. Instead, character behaviors are usually created through the game’s scripting engine, with scripts that, while long and detailed, are fundamentally very simple.

I guess these scripting engines can be pretty restrictive depending on the engine, but then why is that the go-to approach? (Perhaps someone with big-studio experience can tell us.) Surely it’s not just so AI can be fobbed off on level designers and other non-programmers?

    • impassiveimperfect says:

I’m actually really curious about this: do you know where one can read up on the AI techniques you alluded to?

      And when you say ‘cultural’, do you mean like a more mainstream culture (like that of the US), that of computer science, or that of video game development?

    • Rili says:

As far as I was taught, this is for once actually the PC’s fault. Given the widely variable power of your customers’ hardware, you normally turn to scaling systems that allow for nice graphics and resolution on high-end machines but a workable experience on low-end ones. But AI does not scale nicely: who wants to play the stupid version of a game just because your machine is bad? So AI was always designed for the lowest allowed spec.
Physics that impact gameplay have/had similar problems.

      • Ben Sizer says:

That’s just an excuse, really. Yes, AI doesn’t degrade without gameplay repercussions the way graphics can. But CPU power today is massive compared to previous years, and plenty can be done with those cycles. The lowest-spec PC today could run about ten instances of a ten-year-old game concurrently, so we should have seen advances broadly proportional to that, but we haven’t.

      • Fleaman says:

        We should just scale the AI to three different levels and call them “easy”, “normal”, and “hard”.

        Actually, I think it’d probably be a selling point if you could say “Our game is SO SMART that last-gen hardware is too dumb to play hard mode.”

  9. impassiveimperfect says:

    *snicker like I’m a tween again*

    ‘Much Ado About Nothing’ was another of Shakespeare’s sexual puns in his works.

    Also, fulfilling my duty as part of the Typo Alertness Brigade; “but it’s not not usually part of the marketing”.

Also also, would you have any interest in doing, in your free time, a little experiment in AI, the way you did with other concepts in your other projects?

    Just a thought.

    • Abnaxis says:

Heh. When I was in high school, we actually had a field trip to see a professional production of Much Ado About Nothing. Us being the juveniles we were (and with full understanding of the context of the superbly-acted play), we were rolling in the aisles and getting some really nasty looks from all the people there to see [snobby theatric playgoer's voice]Shakespeare[/snobby theatric playgoer's voice].

  10. zob says:

Glad to see I am not the only person around that’s missing FEAR AI. Consolification of games cost us too much.

    • Thomas says:

      ??????????
      http://videogamesalexander.files.wordpress.com/2012/02/b14afe1.jpg

Am I missing something? Did the CPUs of all current-gen consoles melt overnight and get replaced with inferior ones, so we had to make FEAR 2′s AI suck?

      • Irridium says:

Yeah, F.E.A.R. 1 was ported to consoles and the AI in it was just as good as the PC version’s. I should know, my brother bought the 360 version.

        • Volfram says:

If we view “consolification” as “making the game into more of what is expected on a console,” then I’m inclined to agree with zob. The consoles clearly have the power to run remarkably intelligent AI simulations (largely due to the fact that the difficulty in good AI is in creating it, not running it; running it is easy). What is expected of a console game, however, is amazing graphics, with little emphasis on anything else, so if developers pour everything into graphics, everything else will naturally suffer.

          In short, it’s not the console that’s the problem, it’s the developers.

          • Thomas says:

            I still don’t know if the console part needs to come into that. FEAR sold well on consoles which is why they greenlit it in the first place. I imagine developers have increasingly silly ideas of what people want in games without splitting console and PC games into different entities in their heads.

            If FEAR was an RPG it’d be different, because interface design is a legitimate argument for consolification and that doesn’t even involve the developers being particularly silly

            • Volfram says:

              Perhaps “EA-ification” would be a more apt term.

              I’m not a fan of the direction games have taken of late. Everything is online, DRM, social networking, achievements, microtransactions, FarmTown clones.

              Is it too much to ask for a reasonably-priced single-player game that I can just throw in the console and play? I loved Descent, and it didn’t have ANY of the stuff modern console games “require.”

      • zob says:

        It took about a year to port FEAR to consoles. I am aware that I may very well be wrong on my assumption. But I believe the dumbing down happened to increase number of enemies on screen and to release the game on multiple platforms simultaneously.

        • Thomas says:

If you know you’ve got a multiplatform release, it probably doesn’t put you under that much time pressure (unless you’re trying to hit a Christmas window or something), because you’ve theoretically got a larger budget to pay staff to work on the game longer.

So I don’t know, it seems like bad AI would be a result of bad design choices (say, an increased focus on graphics and the number of enemies on screen at a time) rather than consolification. Since they already had code running on consoles, presumably they could have just copy-pasted that AI into the new game (ish, presumably it’s quite a bit harder) and had good console AI. They must have made a decision to change something not directly related to consoles that made it worse.

          • zob says:

            they could have just copy pasted that AI to the new game

8 models using that AI at the same time need more resources than 4 enemies. The Xbox 360 has 512 MB of RAM shared between the GPU and CPU; the PS3 has 256 MB reserved for the GPU and 256 MB for the CPU.

Back in those days, to get the wow factor you needed better graphics. If you wanted to improve your graphics, you needed RAM. To get RAM, you needed to free up resources used by other processes. And this was before the days of the insane optimization we see in current titles.

      • StashAugustine says:

        I seem to remember that they dumbed down the AI to make the game easier, not because consoles couldn’t handle it. I’d need to find a source though. (It should also be noted that FEAR 1 was very dependent on pulling off tricky shots, which would make it easier on PC than consoles.)

        • Thomas says:

I was trying to find sources on this, because I’d heard someone say once that the FEAR 1 AI was surprisingly simple, following a couple of ground rules which ended up creating all the gameplay emergently, so I was wondering if they tried to make it more complicated and screwed that up.

But it’s proving impossible to source. There were lots of articles with the developers talking about how they were concentrating on making the AI super awesome better for FEAR 2, but it was marketing bullet-point stuff (i.e. lies).

          • mwchase says:

            Elsewhere in the comments, there’s a link to this PDF of a presentation at GDC 2006 which explains how it works, in a high-level sense. The basic idea is, you can give an enemy goals to fulfill, and actions to take, and the AI works out how to accomplish its goals using its available actions, at runtime.

            This is in contrast to typical FSM models, which try to model out the logic the enemy would use, ahead of time.

They also cited the way this made it easy to confine one-off logic about enemy behaviors to the single enemy it applies to. As I understand it, it’s something like this: say you’ve got a class of enemy that has to take an action at intervals (they gave the example of out-of-shape policemen from NOLF stopping to catch their breath; because everything was on the same FSM, the associated logic bled out into the entire AI system). You could group certain actions together (“physically exhausting”), give them an effect (“winded”) which might bleed out a little, but might not, depending on the details of the design (but it would definitely be cleaner than embedding flag checks everywhere in an FSM), and give that unit a goal (“if you’re winded, catch your breath”) that makes use of those actions, those effects, etc.
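The goal-driven idea above can be sketched in a few lines. This is a toy planner in the spirit of that presentation, not FEAR’s actual code; the action names and state keys are made up for illustration:

```python
from collections import deque

# Each action declares preconditions and effects; the planner searches
# for a sequence of actions whose effects satisfy the goal at runtime,
# instead of hand-authoring the logic as a state machine.
ACTIONS = {
    "draw_weapon": ({"armed": False}, {"armed": True}),
    "take_cover":  ({},               {"in_cover": True}),
    "attack":      ({"armed": True, "in_cover": True}, {"target_dead": True}),
}

def plan(state, goal):
    """Breadth-first search over world states for a plan meeting the goal."""
    queue = deque([(state, [])])
    seen = set()
    while queue:
        current, steps = queue.popleft()
        if all(current.get(k) == v for k, v in goal.items()):
            return steps
        key = frozenset(current.items())
        if key in seen:
            continue
        seen.add(key)
        for name, (pre, eff) in ACTIONS.items():
            if all(current.get(k) == v for k, v in pre.items()):
                queue.append(({**current, **eff}, steps + [name]))
    return None
```

Asking `plan({"armed": False, "in_cover": False}, {"target_dead": True})` produces draw weapon, take cover, then attack, with no per-state transition logic authored anywhere.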

          • Volfram says:

            I’m pretty sure the enemy AI for Halo was fairly similar.

  11. Swimon says:

While I largely agree, I would like to point out that the “best” AI is not always ideal; the goal is an AI that creates interesting gameplay for the player, rather than the smartest or most realistic AI.

Civ V and Alpha Centauri are good examples here. I have no idea which is the better one from a computer science perspective (how you’d test that, seeing as they’re designed for different rules, I wouldn’t know), but I at least find Alpha Centauri to be the better game, in large part because of the AI. The AI in Alpha Centauri plays according to the underlying ethos of its faction to a degree that Civ doesn’t. For example, fighting Yang is very different from fighting Santiago because of their views of the world. Yang swarms you with cheap low-level troops because he doesn’t value the lives of his people and breeds them like animals, whereas Santiago usually uses few well-trained troops with the best equipment she has, because she’s obsessed with the military (although I have never played a match where she was any threat, since she only focuses on military in a game where tech is god).

Also, simpler AI is generally easier to predict, which is of great value in many gameplay scenarios; in others, surprising chaotic behaviour is what’s needed, which demands more complex AI (an example of this would be the walking humanoid robots from ME2: instead of taking cover like the rest of the enemies, they walked straight towards you in a very stupid way, which made them feel very different from the rest of the game). I guess it’s the comment “What we need is a change in attitude, not more processing power” that bugs me. While that was perhaps not what you meant, it seems to imply that more complex AI is always better, and I think that would just put us in the same situation we’re in today with graphics: obsessed with the jargon and the technical details instead of the actually important part, the created experience.

    • HiEv says:

I’m a bit confused by what you wrote, since you’re not exactly clear about what you’re trying to say on some points.

      Are you saying that an AI that can emulate a variety of different playing styles to give each AI opponent a different “personality” is more or less sophisticated? What you’re describing sounds more sophisticated, but it reads like you’re saying it’s less sophisticated and that that’s a good thing.

What makes an AI “smart” isn’t that it plays a perfect game (games with AIs that always make headshots are generally seen as pretty broken); it’s that it plays a “realistic” game, and that’s much more difficult. Generally speaking, the AI knows exactly where you are and, assuming the weapon isn’t too slow, exactly how to aim to hit you. The trick is to make it react like a human: no eyes in the back of every AI’s head, no seeing through (opaque) solid objects, shots are less likely to hit when not “aimed” carefully, no faster-than-human reactions to being shot at, guesses should be made about the player’s heading when they move out of sight instead of always knowing where the player is, etc. So a “smart” AI has to play well, but not have access to information a human wouldn’t have in the same position. Giving each AI different “personalities” makes that even more complex.
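Those constraints are mostly cheap geometry checks. A sketch of a vision test of this kind (all names and numbers are illustrative; `line_of_sight` stands in for a raycast against level geometry):

```python
import math

# The engine always knows where the player is; a "realistic" AI
# deliberately filters that knowledge through distance, a vision
# cone, and an occlusion check.
def can_see(guard_pos, facing_deg, target_pos, fov_deg=120.0,
            view_dist=15.0, line_of_sight=lambda a, b: True):
    """True only if the target is close, inside the vision cone,
    and not occluded."""
    dx = target_pos[0] - guard_pos[0]
    dy = target_pos[1] - guard_pos[1]
    if math.hypot(dx, dy) > view_dist:
        return False                       # too far away to notice
    angle = math.degrees(math.atan2(dy, dx))
    off = (angle - facing_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2.0:
        return False                       # no eyes in the back of the head
    return line_of_sight(guard_pos, target_pos)   # not through walls
```

A target directly behind the guard, beyond view distance, or behind a wall all fail the check, even though the engine has the exact coordinates the whole time.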

So while I agree that the goal is “an AI that creates interesting gameplay for the player rather than the smartest or the most realistic AI,” the problem is that dumb and unrealistic AI generally detracts from interesting gameplay. These aren’t competing goals; they go hand in hand when done right.

    • Humanoid says:

Philosophically, each iteration of Civilization and its offshoots has flip-flopped to some degree on whether it’s more interesting to have AI that plays to win or AI that plays to character. Say you’re within a handful of turns of achieving a peaceful cultural victory. Now gift Gandhi a bunch of nukes. Will he declare war and use them on you to deny your win?

      Personally I lean towards preferring the AI play a character, even at the expense of challenge, but I admit I almost never play games for the challenge regardless of genre – again with the Civilization example, I’d rather play at middling difficulties, say Prince-Monarch, build in the style I want, and pursue my friendships and vendettas based on character. So pretty definitively, I’d rather roleplay my tribe and have the AI do the same, rather than play against a min-maxing, rules-lawyer AI.

      There are limits of course – imagine a game of chess where both parties dogmatically advanced their pawns in a line and always kept their good units in the rear to simulate medieval melees.

      • Thomas says:

AI with character can be easier to game with, too. It can be hard to get to grips with what a generic “good” AI is going to try next; a peace-focused AI is easier to understand and come up with counter-strategies against.

      • Syal says:

        imagine a game of chess where both parties dogmatically advanced their pawns in a line and always kept their good units in the rear to simulate medieval melees.

        I thought they called that Checkers.

        And at least in games with lots of characters, there’s no reason the two can’t coexist; if there are twelve AI opponents, you can still have one or three that focus on winning however they can. (Especially if there’s an option to turn them off at the beginning.)

      • Ok, so this is not actually anywhere near to your substantive point, but the advance-in-a-line thing would be a really terrible simulation of medieval warfare. The knights were not faking, they really believed in chivalry and the honour of a face-to-face fight with a peer. They did not hang back and let their peasants do the fighting. At Crecy the French knights charged right through their own mercenary infantry (which being armed with crossbows had some kind of chance at duking it out with the English archers!) in order to get into the fight.

        Moreover, lightly-armed peasants didn’t actually stand any sort of chance against armoured knights, and everyone knew it; so they were kept well back and only committed when the battle neared its end, to lend some punch to the pursuit and to help with mopping-up. Our ancestors were not, in fact, idiots, nor were they bloodthirsty monsters just for the sake of bloodthirst. They did not send in thousands of peasants to be slaughtered if they had any other option.

  12. Does the behavior of NPCs in Fallout: New Vegas count as A.I.? I remember the hilarious SW episode where you guys Stealth Boy’d your way through the Silver Rush, stole all of the merch, then sold it back to the bad guys who were watching as the stuff they’re buying disappeared right in front of them.

    I think it was Shamus who said something along the lines of “do you know how hard it is to program NPCs to be this stupid?”

In light of telling people how A.I. and programming actually work, that statement bugged me. The way I saw it, the Silver Rush merchant is programmed to be a merchant. Rather than code a bunch of unique conditions for this specific merchant (“Aggro if I’m being sold weapons. Aggro if the weapons being sold to me are identical to the ones on my table. Aggro if the player has a Stealth Boy.”), the game put a bunch of gatekeepers between you and the merchant and made a few fatal mistakes. The A.I. was no more or less stupid than before; the conditions that were supposed to take away your ability to make them appear stupid (steal from them, sell to them) just weren’t well tested or thought out.

    On another note, I still find the A.I. for the baddies in the F.E.A.R. games (at least the first two) to be among the better ones I’ve encountered, as they’ll try to flank me and draw me out if possible, and rarely seem to get stuck, run around in circles, etc. I don’t know if that’s due to amazing programming, limited areas with good pathfinding, etc.

    • X2-Eliah says:

Yeah, that’s AI. Pretty much every control/decision-making routine attached to some active agent (thing) can be considered that agent’s AI.

Right. So then my larger point is more “is the A.I. itself stupid, or were the conditions set up around it stupid?”

        In the F:NV example, the A.I. was a standard model, but it was never modified to account for PlayerBehaviorX because (they thought) PlayerBehaviorX could never happen because the devs thought all possible actions had been nullified by those guarding the door.

        • Rax says:

The NPCs in F:NV don’t actually pay any attention to “stuff” around them. They do, however, pay attention to you, and if they see you loot an item that’s considered owned by someone else, they attack you. That’s why it doesn’t matter whether you steal a fork or some very expensive weapon/armor/whatever.
That also means that if they can’t see you, they won’t attack and won’t notice the item disappearing.
The even more hilarious approach to exploiting this behavior is to carry the item somewhere else without looting it, as seen here

That’s just how the AI in this and many similar games is programmed, and yes, obviously, it’s pretty stupid. But it’s less work than making it more intelligent, especially because you’d have to add a lot more than just the merchant’s awareness of his own merchandise and the player’s ability to carry stuff without looting it.
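The witnessed-theft rule described above amounts to very little code. A sketch (names are illustrative, not Obsidian’s actual scripting):

```python
# NPCs don't inventory their stuff; they only go hostile if they see
# the player take something flagged as owned.
def on_item_taken(item, witnesses):
    """Return the NPCs that turn hostile when the item is grabbed."""
    if item.get("owner") is None:
        return []                         # unowned junk: nobody cares
    # value is irrelevant; a fork and a plasma rifle trigger the same rule
    return [npc for npc in witnesses if npc["can_see_player"]]
```

Which is why moving the item to a back room first, or being invisible via Stealth Boy, defeats the check entirely.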

  13. X2-Eliah says:

    I wonder if the AI routines for racing/driving games – like NFS:MW – are similar or dissimilar to AI routines for shooter games (fear, bioshock, modhrrrn warfarce).

    Also I’ve heard that GalCiv2 has a kick-ass AI. Don’t know myself, I never played it past ‘below normal’ (not much of a strategy gamer), but.. Hm. Is it still highly regarded?

    • Tohron says:

      That depends on how well you know its quirks. The diplomacy model heavily overvalued a culture-like resource called “influence points” generated each turn by your planets, which determines votes at the end-of-year council meeting and tourism income. On larger maps, you have a huge overabundance of it, which you can trade to the other AIs in exchange for money, tech, low-population planets, and warships (sometimes, you have to mix in some of your own money, which you then get back by trading more influence points).

That’s how I can get 8 max-difficulty AIs down to 3-5 planets each by the end of the first year. Once that happens, they don’t really have much room to show off any other tricks.

    • Paul Tozour says:

      X2-Eliah: They are totally different. AI differs radically from one genre to the next in terms of the problems it needs to solve and the way it goes about doing so.

  14. Stephen Flockton says:

A lot of people have mentioned the FEAR AI as being well done. I recommend you take a look at the presentation Jeff Orkin (the lead AI programmer on FEAR) gave at GDC a few years back.

It’s goal-based rather than the typical finite-state-machine AI, and it’s a pretty good read. The PDF is here.

  15. burningdragoon says:

    Apparently we also need some innovations in marketing.

    • Rax says:

      That’s probably one of the most interesting things I’ve read in quite some time. Thank you.

      • Ciennas says:

        Oh, marketing is bad, but it has to be.

        (Or rather, it’s successful at this level, which is all that’s required of it.)

        Anybody here work in marketing? I’d like to know how educated on a given product a team is before they start working the adstravaganza.

        Because their job isn’t education- not really. Their only duty is to get it flying off of retail shelves.

        • Trix2000 says:

          Education CAN potentially get more people interested in buying. But I think it really depends on the situation, and probably isn’t as obviously reliable compared to just making bold claims.

          • Ciennas says:

            Yeah, tragically so.

I think it’s a reliance on public ignorance. Bringing the audience up to speed would not only bore and confuse most of them, it’s outside the scope of most ad campaigns.

Using fancy buzzwords gets people excited that the newer thing is better in some way, even if the only new and shiny thing about it is a fresh coat of paint.

    • Rax says:

      heh, just noticed I replied to the completely wrong comment, sorry for being an accidental sarcastic a-hole. :D

  16. Raygereio says:

I once played around with making an AI mod for New Vegas and NWN2. I found that if I just increased how often the AI would update (reassess what was going on and what would be the best course of action), enemies would already appear to behave smarter, as they reacted better.
But that change also had the obvious effect of making both those games way more CPU-intensive during combat when the AI did its thing. I’m not a programmer, let alone an AI expert, but what I took away from that is that while you naturally still need to create a good AI, more processing power can improve how the AI works in-game.
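The trade-off described above is easy to see in a toy model (the costs are made-up illustrative units, not profiled numbers):

```python
# Reassessing the world more often makes agents react better, but the
# CPU spent grows with the number of reassessments.
def frame_cost(frames, think_interval, think_cost=5, move_cost=1):
    """Total CPU units spent, reassessing every `think_interval` frames."""
    total = 0
    for frame in range(frames):
        total += move_cost                 # cheap per-frame bookkeeping
        if frame % think_interval == 0:
            total += think_cost            # full situational reassessment
    return total
```

Shrinking the interval from 30 frames to 10 to every frame steadily inflates the per-second cost, which matches the combat slowdown described in the mod.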

    As for Thief.
Am I the only one who’s starting to get the distinct impression that they started pre-development on “Thi4f” a while back, stopped for whatever reason, and then started work on a separate Dishonored me-too game, which they eventually slapped the Thief IP on in an attempt to pull on the nostalgia heartstrings?

  17. David Bray says:

    Hi,

Sorry to stir the pot again, but this is the most ridiculous overreaction I have ever read. I doubt marketing even spoke directly to the dev team; most likely they just got a list of dot points that they cherry-picked things from. Having worked in large studios such as Rockstar, I can guarantee you marketing never even meets the dev team.

And for the love of god, can we please stop calling it Artificial Intelligence? It’s not and never will be. Games require that characters behave in a predictable and reliable manner; it may appear intelligent, but it most certainly is not.

    • Raygereio says:

this is the most ridiculous overreaction I have ever read. I doubt marketing even spoke directly to the dev team; most likely they just got a list of dot points that they cherry-picked things from.

Oh, so this is the first time you’ve been on the Internet, then?
Bad jokes aside, I actually somewhat agree with you. Personally, I take anything said by marketing with a couple of truckloads of salt, because these people generally have no real clue what they’re talking about and as a result throw out ridiculous statements.
But on the other hand, when these people say something dumb or silly, why shouldn’t there be a response? How else will marketing people learn not to be dumb?

      And for the love of god can we please stop calling it Artificial Intelligence, its not and never will be.

Speaking of overreactions.
I strongly disagree with the “never will be.” I think artificial intelligence can be utilized in videogames, but that’s beside the point. What is more important is the little fact that words, terms and symbols can have different usages, definitions and meanings depending on the context in which they’re used, and these usages, definitions and meanings can also change over time into new ones alongside or in place of the originals.

Sure, what we call AI in videogames isn’t what the term means in the context of artificial intelligence research. But when you use the term AI in the context of videogames, everyone knows what you’re talking about (or at the very least has a vague idea of what you’re referring to). So the term serves its purpose.
If you feel strongly about this and really think we should all call it something different, try coming up with a new term to describe what we now refer to as AI in videogames, and hope that it catches on.

      • silver Harloe says:

        I could be wrong here, but I’m pretty sure even academic AI includes the studies of “expert systems” and “heuristics,” neither of which are ‘intelligent,’ and both of which can describe things that can be plausibly (or have already been) put into games.

        • Abnaxis says:

I’m pretty sure what David is getting at is that AI, as generally defined by experts, has to be able to learn and adapt to feedback dynamically. As for your examples: dirt-simple expert systems and heuristics that are fully deterministic aren’t really considered AI by academics. Rather, it’s the systems that rely on statistical models and prior knowledge to inform current decisions that are of interest, because they are learning systems. The goal of AI is to dynamically optimize itself through feedback, without relying on a programmer to explicitly define the new optimizations.

In contrast, gaming AI is usually fully deterministic, reacting to player behavior in exactly the way the programmer wrote it, without dynamically modifying its heuristics. We call it “AI,” but… well, it really isn’t. It doesn’t bug me as much as it does David, but his point isn’t invalid.
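For contrast, here’s a toy “learning” agent in the academic sense described above: it builds a simple statistical model of player behavior at runtime instead of following fixed, hand-written rules. Purely illustrative, not any shipped game’s code:

```python
from collections import Counter

# The guard's behavior is not authored ahead of time; it adapts to the
# feedback it has actually observed.
class AdaptiveGuard:
    def __init__(self):
        self.attacks_seen = Counter()

    def record_attack(self, side):
        """Feedback from each encounter updates the model."""
        self.attacks_seen[side] += 1

    def cover_side(self):
        """Guard the side the player has favored so far."""
        if not self.attacks_seen:
            return "front"                 # default prior
        return self.attacks_seen.most_common(1)[0][0]
```

Even something this small crosses the line Abnaxis draws: its output depends on observed history, not just on what the programmer wrote.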

    • What’s a better term? NPC Behavior? Dynamic response? Variable-weighted action?

      I know A.I. is a much larger subject and has implications far beyond the most advanced video game, but it’s not our fault that’s what people call it. I don’t know anyone who thinks the code-controlled elements in a game have anything resembling a thought process (outside of fantastical movies, I suppose), but things have a way of being boiled down to cool-sounding shorthand.

      I don’t think I’m actually exposing my food to gamma radiation when I “nuke” some lasagna in the microwave, but that’s slang for you.

      • theNater says:

        I propose we use the term “Apparent Intelligence”, as it’s the bit of code that gives the player the illusion that the game actors are intelligent. This has the added bonus of using the same abbreviation, meaning that we can just assume everyone is using this whenever they talk about AI in video games.

    • Shamus says:

“Sorry to stir the pot again, but this is the most ridiculous overreaction I have ever read.”

It’s actually the second-most ridiculous overreaction. Your comment bumped me out of first place. :)

      More seriously, there’s nothing wrong with calling marketing out when they say silly things. It’s cathartic, educational, and entertaining.

“And for the love of god, can we please stop calling it Artificial Intelligence? It’s not and never will be.”

      Language doesn’t work that way.

      • “Language doesn’t work that way.”
On the other hand, we’re all free to redefine terms to our satisfaction (not that any alternative is offered in this case (you missed your opportunity, David!)). But just because it’s possible doesn’t mean it’s advisable; I am continually amazed at how many people overlook this. If you understand enough to confidently disagree, then the definition of terms isn’t the issue.

        That said, I prefer Neal Stephenson’s (not to be confused with Stephen Neilson) terminology in Diamond Age. SI, as distinct from an actual artificial intelligence. But, as Shamus says, that’s both not the purpose of the article (though I would love to see another article on the nature and challenges of AI/SI/whatever-you-call-it) and also not how people understand the issue.

        • Zeta Kai says:

          The purpose of language is to express & exchange ideas. Your message has failed to do so. Ergo, you are unfit to advise others about language.

          Case in point: SI? What is that? What does it mean? Simulated Intelligence? Semi-intelligence? Stephenson’s Intelligence? You made an acronymic reference, recommended it, & then failed to explain what you’re advocating adequately.

      • Steve C says:

        “Language doesn’t work that way.”

        That sounds like linguistic dissidence to me.

    • LeoDaFinchy says:

      I’d suggest waiting until there is a universal definition of what ‘Intelligence’ is (and is not) before worrying about someone’s use of the term ‘Artificial Intelligence’

    • AyeGill says:

      And for the love of god can we please stop calling it Artificial Intelligence, its not and never will be

      I disagree. First of all, “intelligence” doesn’t mean “human-level intelligence,” “sapience,” or any such thing. But even supposing it did, what makes you think computers are incapable of performing the same calculations a human brain can? I don’t see why brains should be anything but mechanical systems, perhaps with a bit of random variation thrown in. The former can be done on any computer, the latter can be added with a hardware random number generator.

      • silver Harloe says:

        for people considering replying: this reply appears to be directed at the “never will be” part of the sentence, whereas previous replies were directed at the “it’s not” part of the sentence. Which is a wholly different discussion, really, so change gears here and talk philosophy rather than pointing out the current state of game tech.

  18. This might be interesting to revisit. It’s from 2007, but it’s an article listing a top ten (to the author) list of A.I. in video games. I point it out because of the #1 pick, Black & White. I remember a lot being made of that game’s ability to “learn” from player actions and interaction. I want to say there were interviews with the devs who claimed players were getting their animal-creature-servant-thing to do stuff they hadn’t thought possible or had thought of (mostly, IIRC, training it to do tricks like cartwheels or backflips).

    Anyway, I always wondered what became of that concept or if it was ever used again.

    • Thomas says:

      Black & White was designed by Peter Molyneux => what the devs claimed was way overstated. I don’t know if it was much more complicated than a % chance of doing an action: slapping decreased the %, tickling increased it.

      It was a really neat game and game concept though. I’d love to see someone return to it one day. (In general it’s cool watching computers do stuff autonomously. I love AI allies in RTS games, and one of the best things about inFamous 2 was the number of fights between and with NPCs.)

      Did you ever play Creatures? That was another teach-the-ai-stuff game
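Thomas’s guess above (a % chance of doing an action, with slapping decreasing it and tickling increasing it) can be sketched in a few lines. The action names and adjustment rates here are entirely invented, just to show the shape of the model:

```python
import random

# Hypothetical creature model: each action has a weight; slapping after an
# action lowers its weight, tickling (rewarding) raises it.
class Creature:
    def __init__(self):
        self.weights = {"eat_villager": 1.0, "water_field": 1.0, "cartwheel": 1.0}
        self.last_action = None

    def act(self, rng=random):
        # Pick an action with probability proportional to its weight.
        actions = list(self.weights)
        self.last_action = rng.choices(
            actions, weights=[self.weights[a] for a in actions])[0]
        return self.last_action

    def slap(self):
        # Punish the most recent action by halving its weight (floored).
        self.weights[self.last_action] = max(0.05, self.weights[self.last_action] * 0.5)

    def tickle(self):
        # Reward the most recent action by boosting its weight.
        self.weights[self.last_action] *= 1.5
```

Note how Ian’s point below about mistraining falls out of this immediately: the slap punishes whatever `last_action` happens to be, which may not be the thing the player saw.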

    • Alan says:

      Black & White promised the world. It was an exciting experiment. But they massively overhyped what it could do. And in practice it was incredibly frustrating. It was like teaching a 5-year-old: if you fail to negatively reinforce against something you don’t want them doing, it’ll be hell to break them of the new habit. Conversely, trying to reinforce a positive behavior was hell.

      Add in occasional mandatory mind wipes of the creature and you’ll understand why it was an utter failure as a game.

      • Thomas says:

        ‘It was an exciting experiment. But they massively overhyped what it could do’ – As I said, Peter Molyneux =D

      • Ian says:

        I worked on the original Black and White, as a lowly tester late in the production cycle.

        The biggest problem with teaching was getting to your creature fast enough to punish him. If you saw him eat a villager and swooped down and started slapping him about, there was a fair chance he had already started thinking about something else in that gap, and was now confused about watering the villagers’ fields. It was far too easy to mistrain your creature without realising.

        It was fixed in the sequel by letting you pick what to punish him for from a set of recent tasks.

  19. Thomas says:

    In Uncharted 3, they did something really wonky with the AI (which is strange, because it was pretty normal, unnoticeable polish in the previous 2): every now and then all the enemies would suddenly gather up and start this sort of swirling dance around each other for a few seconds before zerg rushing you.

    The best thing is, it didn’t negatively affect my opinion of the game because I was too pleased at an opportunity to mow everyone down =D

  20. Chauzuvoy says:

    Tangentially related, but in the article you linked, they make a big thing about how each raindrop is individually rendered and simulated.

    Why? What conceivable value does that add to a game? Is “more realistic rain” going to be the thing that separates the newest iteration of the Thief series from the rest of the pack in the modern game market? Because reintroducing the huge open levels, tense sneaking around, interesting steampunk world, or any of the actual Thief stuff just wasn’t cutting it. I… that sounds like something that would be really interesting for a graphics programmer to code, but that 90% of the audience would never have noticed unless it was specifically pointed out to them.

    • Syal says:

      each raindrop is individually rendered and simulated… Is “more realistic rain” going to be the thing that separates the newest iteration of the Thief series from the rest of the pack in the modern game market?

      You’re misinterpreting; they’re not going to make the rain more REALISTIC, they’re going to give it distinct PERSONALITIES. So THIS raindrop will follow you around the level, and THAT raindrop will aim for the nearest guard, and THESE raindrops will just hover in front of the camera, and THOSE raindrops will run away from you and such.

      And they’ll all be voice-acted. It’ll be revolutionary!

    • ehlijen says:

      Not sure if it’s worth it, but if raindrops are individually calculated there could be a ‘shadow’ in the rainfall that guards might notice if the player jumps from roof to roof above them during a rainstorm?

      • Ciennas says:

        Or it could naturally pool in gullies or whatever and decrease the hero’s footing.

        Tell me that wouldn’t be really cool. It would of course be locked away in ‘hardcore’ mode, along with the eating drinking and sleep meters.

    • Alan says:

      The article says, “we were told that each individual raindrop was being rendered separately, and reacted independently when coming into contact with the terrain.”

      My guess: it’s a particle system. Not actually all that surprising or difficult on modern hardware, at least if you limit it to near the camera, which you can do because beyond a certain distance rain stops being terribly distinct. And I’m pretty sure other games have done it.
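For what it’s worth, a bare-bones version of the particle system Alan describes might look like this. The spawn area, fall speed, and respawn-on-impact rule are all made up; a real game would run thousands of these per frame on the GPU:

```python
import random

# Minimal particle-system sketch for rain: each drop is just a position and a
# velocity; on hitting the ground (y <= 0) it "reacts" by respawning at the top.
def step(drops, dt=0.016, ground=0.0, top=10.0):
    for d in drops:
        d["y"] += d["vy"] * dt
        if d["y"] <= ground:
            # "Reacting independently with the terrain" here is just a respawn;
            # a real game might spawn a splash effect instead.
            d["y"] = top
            d["x"] = random.uniform(0, 10)
    return drops

drops = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10), "vy": -9.0}
         for _ in range(1000)]
step(drops)
```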

  21. WWWebb says:

    Game AI behaviors that might actually need next-gen hardware:

    1- Good pathfinding. Someone already mentioned this above. Between being a “multiple routes to the objective” game and a “lots of stuff to hide behind” game, pathfinding is going to be tough.

    2- Differentiated AI. This is the one thing in the press release that almost got it right. It’s not just that guards can have different values in their paranoia-meter, it’s that
    a) events in the world will cause paranoia to change by different amounts (e.g. the experienced guard knows which lamps are ALWAYS lit and which can get blown out by the wind).
    b) the guards will react in different ways. Some will start looking around, some will call their buddies, some will spook and run.

    3- AI based around visibility. The original Thief games had the darkness gem, and you knew that if it was dark, then a guard could be right next to you and never see you. With fancy graphics hardware, it could calculate how visible you are to each guard at all times. So while the door guards can’t see anything past the lamps that spoil their night-vision, the guard on the wall might spot you (or a body) easily. Better yet, give me the ability to put on a guard’s uniform and have that stop working when I’m clearly visible instead of far away.

    4- The ability to dial all of this up and down on a difficulty slider. Changing the loot drops or weapon damage or enemy health is easy. Changing the quantity and/or smarts of the guards is tough without a LOT of testing.
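A rough sketch of point 2, with invented event names, weights, and thresholds: each guard weighs the same event differently and crosses into a different reaction when his paranoia meter fills.

```python
# Sketch of "differentiated AI": two guards notice the same events but react
# differently. All names, weights, and thresholds here are made up.
class Guard:
    def __init__(self, name, event_weights, threshold, reaction):
        self.name = name
        self.event_weights = event_weights  # how much each event raises paranoia
        self.threshold = threshold
        self.reaction = reaction            # e.g. "search", "call_buddies", "flee"
        self.paranoia = 0.0

    def notice(self, event):
        self.paranoia += self.event_weights.get(event, 0.0)
        if self.paranoia >= self.threshold:
            return self.reaction
        return "mutter"  # "must have been the wind"

# The veteran knows which lamps never blow out, so a doused lamp alarms him more.
veteran = Guard("veteran", {"lamp_out": 2.0, "glass_break": 3.0}, 4.0, "search")
rookie = Guard("rookie", {"lamp_out": 0.5, "glass_break": 2.0}, 4.0, "flee")
```

Snuffing two lanterns sends the veteran into a sweep while the rookie is still muttering, which is exactly the escalation the Thief preview describes, and none of it needs next-gen hardware.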

    • Ian says:

      The thing which the visibility gem failed on which could be abused was back lighting.

      If you have two lit areas visible to each other with a patch of shadow between them, in reality you will see the silhouette of a thief lurking there with his sap. Using the shadow-based system of the older games, you are invisible.

      Now put that in as a feature please.

      • Heh. It’s almost to the point where, were I writing the game, I’d consider some form of webcam software for the NPCs to use when “looking” for you. I used to have a program that had a security camera mode, which would snap a picture and upload it to an e-mail account if [user defined number of pixels in a given area] changed in [user defined length of time]. I dunno if some coded simulation of that would be a solution or not, but it’s fun to consider.
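The webcam idea boils down to counting how many pixels in a watched region changed between frames. A toy version, with frames as flat lists of brightness values (all parameters invented), might look like:

```python
# Motion detection sketch: flag "motion" when more than pixel_threshold pixels
# in a watched region differ between two frames.
def motion_detected(prev_frame, cur_frame, region, pixel_threshold):
    changed = sum(1 for i in region if prev_frame[i] != cur_frame[i])
    return changed > pixel_threshold

prev = [0] * 100
cur = [0] * 100
for i in range(40, 60):   # something moved through pixels 40..59
    cur[i] = 255

motion_detected(prev, cur, region=range(0, 100), pixel_threshold=10)
```

An NPC version would render a low-resolution frame from the guard’s viewpoint and diff it, which is roughly what WWWebb’s visibility idea above amounts to.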

    • Peter H. Coffin says:

      Path-finding is an important thing to experiment with, but real-world behavior doesn’t seem to make path-finding as important as the amount of talk about it suggests. You don’t really need much path-finding for a guard patrol: they know their route. You don’t even need much path-finding for an alert at a known location; that’s a short list of “known location 1, start going left from here; known location 2, start going left from here” that’s small enough that pretty much every general place a guard can be can have its own list of “where to go next to get to known location X”. Alerts at unspecified locations are similar: as long as the alert condition happens, the guards know which way to move to get closer to it (or away from it, if you’re modeling that), but the paths they can take are pretty constrained, and in most environments there’s only one best option.

      The trick to all this is remembering that guards ALREADY KNOW THE MAP, so you need not consider that they’ll do anything other than something that’s basically sensible. The only time you need to pay attention to path-finding in an actual sense is when you’re allowing those paths to be altered BY THE PLAYER in a way that the guard cannot reasonably surmount, and the number of cases of that that can be provided for is very limited.

      If a door can be closed but not locked, then anybody can open it and no path-finding matters. If a door can be locked, then you need to keep track of who has the key(s), and only someone with a key can open the door, but the number of other paths past that door is going to be finite and probably small — that’s just the nature of doors, especially lockable ones. If a door can be barricaded, then anybody on the correct side of the door can open it (with a delay). And that about covers doors: four states (open, closed, locked, barricaded), two directions that may influence the state-handling (hinge/not-hinge side), and that’s handled.

      I can go on and on about this, but basically, the reason that a lot of AI can and should be disregarded is simple: the majority of day to day intelligence should be looked at as recipes. You don’t need to know a THING about modeling protein denaturing and heat conduction to cook an omelette.
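The “guards already know the map” idea can be sketched as a next-hop table: run a breadth-first search once per map from each known location, and runtime “pathfinding” becomes a dictionary lookup. Room names here are invented:

```python
from collections import deque

# Precompute, for every room, the next room to enter en route to a goal.
# BFS outward from the goal; table[room] = neighbouring room one step closer.
def next_hop_table(adjacency, goal):
    table = {}
    frontier = deque([goal])
    seen = {goal}
    while frontier:
        room = frontier.popleft()
        for neighbour in adjacency[room]:
            if neighbour not in seen:
                seen.add(neighbour)
                table[neighbour] = room
                frontier.append(neighbour)
    return table

rooms = {"gate": ["yard"], "yard": ["gate", "hall"], "hall": ["yard", "vault"],
         "vault": ["hall"]}
to_vault = next_hop_table(rooms, "vault")
# to_vault["gate"] is "yard": from the gate, head for the yard first.
```

Player-made obstructions (a locked or barricaded door) just mean rebuilding or patching the table for the affected rooms, which matches Coffin’s point that the truly dynamic cases are few.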

      • Fleaman says:

        My omelettes taste really unrealistic when I don’t model the protein denaturing.

      • Paul Tozour says:

        Peter, that doesn’t work in practice. You could script a lot of stuff for AIs if you didn’t have to worry about the player, what the player would do, and what changes he might make to the game world.

        A big part of why so much of the AI in so many of our games is so bad is because we try to script too many things and end up with incredibly brittle scripts everywhere that absolutely don’t stand up to any kind of unexpected player interactions.

        It’s all well and good to have scripted patrol routes as a starting point (I’ve done a lot of that), but the moment the player starts interacting with the game, all those scripted paths shatter as the player breaks them like a bull in a china shop.

  22. “It’s like saying Albert Einstein was a genius because he could remember where his house was.”
    I love this example! It’s so perfect, since it mirrors both the silliness and the falsehood of the argument in question. IIRC, Princeton assigned a friend to walk with Einstein to and from campus, precisely because, lost in his pondering, he might get lost on the way.
    Well done. Put this one on Twitter!

    To the point of the article: I would really enjoy iconized versions of most modern “high graphics” games. The insight into both the simplicity, and absurdity of the underlying mechanics would undoubtedly have an entertainment value of its own unique quality.

  23. Dev Null says:

    I don’t twit, so I’ll skip over the communications debacle. Any form of communication that limits me to 140 characters and doesn’t involve live animals or skywriters is just yanking my chain – bandwidth at that scale is free, so there’s no reason for it.

    But the AI argument is errant nonsense. “Next gen hardware” is what, twice the CPU and/or memory? I mean graphically it might require more specialised stuff, but for AI that’s pretty much what you’re talking about. And if you look at the CPU and memory requirements of AI in a current generation game, compared to those of the graphics, they’re maybe using 10% of the CPU and memory, tops? So by saying your new AI requires a next-gen system, you’re effectively saying either:

    1) The new system requires 10 times as much CPU and memory as the old one, or
    2) Everything in the new game takes twice as much CPU and memory

    And in the case of #2, you could easily run the new AI on the old system by reducing the demands of your graphics engine by 10% from your new values. (I am, of course, making up numbers when I say the AI is 10% of a game’s resource budget, but if anything I’d guess that exaggerates the case.)

    _Writing_ good AI is hard. That’s a different issue. Maybe they just mean they need to sell more boxes in order to be able to pay the programmers.
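Dev Null’s back-of-envelope argument, written out with the same made-up numbers:

```python
# Say the old console gives you 100 units of CPU per frame, AI uses 10 of
# them (10%) and graphics the other 90. (All numbers invented, per the
# comment above.)
old_budget = 100
ai_cost, gfx_cost = 10, 90

# "Next-gen" AI that is twice as demanding:
new_ai_cost = 2 * ai_cost                          # 20 units

# To run it on the OLD console, trim graphics by the extra 10 units,
# i.e. roughly an 11% cut to the graphics budget:
trimmed_gfx = gfx_cost - (new_ai_cost - ai_cost)   # 80 units

assert new_ai_cost + trimmed_gfx == old_budget     # still fits in the frame
```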

  24. Arkady says:

    Could I have a brief go at justifying this?

    It’s a little more complicated then “the processor cycles are there”.

    AI is full of “if-then” statements (branching, to use the correct lingo), and the in-order PowerPC cores used in the PS3 and Xbox 360 absolutely suck at branching code. The PS3’s SPEs are SIMD processors and thus even worse at branching code.

    The x86 architecture has better branch prediction and a far lower cost for a missed branch (at the cost of raw clock speed…)

    That means the whole “if you hear a window break, then check it out” logic could murder performance on an Xbox 360 / PS3, but not on a PS4 / [next xBox, assuming leaked information is true]. This means much more complicated behaviour, with a lot more branches, can be coded up, and you’ll get more predictable performance.
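To illustrate the branching point (Python obviously can’t show the console-specific performance difference): the same guard decision can be written as a chain of if/else, each branch a potential mispredict on an in-order core, or as a table lookup that trades branches for a memory access. Event names are invented:

```python
# Branch-heavy version: every elif is a conditional branch the CPU must predict.
def react_branchy(event):
    if event == "glass_break":
        return "investigate"
    elif event == "lamp_out":
        return "mutter"
    elif event == "body_found":
        return "alarm"
    else:
        return "patrol"

# Table-driven version: one dictionary lookup, no decision branches.
REACTIONS = {"glass_break": "investigate", "lamp_out": "mutter",
             "body_found": "alarm"}

def react_table(event):
    return REACTIONS.get(event, "patrol")
```

Console AI programmers of that era did this kind of branch-flattening routinely, which is part of why Arkady’s justification only goes so far.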

  25. Paul Tozour says:

    Hi Shamus — @City_Conquest on Twitter here. Good write-up of your thoughts. I agree with you that the marketing message for Thief is delusional.

    Most of my response came from your second tweet, which I didn’t realize at the time was a response to the silly Thief 4 marketing — the tweet came across as being down on AI overall.

    So, you make great points, but I have to say I do disagree with you a bit on two points.

    First, the idea that next-gen hardware necessarily is going to force us to go through a whole new graphics upgrade cycle which will negatively impact the quality of AI.

    If that were true, we would have seen it in the last few console cycles as well, because every console cycle in history has meant an increase in graphical fidelity, etc.

    But that hasn’t happened. We are consistently, but slowly, seeing an increase in the quality of AI across the board every year as the industry gets better at it and recognizes how important it is.

    And the fact is that AI developers do have a little bit more CPU and more memory to play with every year, which can only help.

    The second thing I respectfully disagree with is the idea that players aren’t asking for better AI. They’re asking for it all over the place; they just don’t call it “AI” because they don’t necessarily know that that’s what you should call it.

    I think what we’re seeing is players are asking for better AI literally EVERYWHERE.

    Read any review of Aliens: Colonial Marines, and the first thing they will talk about is how the Aliens act nothing at all like Aliens — that’s bad AI, and it killed that game.

    Or read any of the reviews for SimCity or watch any of the YouTube videos talking about the problems with its AI. People notice these things! I haven’t played SimCity yet but it’s clear just from these videos that the game had a lot of AI issues at the game’s release, which seriously impacted product quality, and PEOPLE NOTICED!

    SimCity:

    http://www.youtube.com/watch?v=iYy2CxPytCQ
    http://www.youtube.com/watch?v=g418BSF6XBQ
    http://www.youtube.com/watch?v=OmYDAQ5vDhU
    http://www.youtube.com/watch?v=sFKKIH2WasM

    And I hate to mention Skyrim in this kind of company, because Skyrim generally has really good AI and is a fabulous game — it’s much more a case of “really good AI vs. the challenge of being truly human-like” as opposed to the more basic problems that SimCity and A:CM had. But here, too, it’s very clear that people do notice the areas where Skyrim’s AI could be better.

    Skyrim:

    http://www.youtube.com/watch?v=odw0gJGaZbM
    http://www.youtube.com/watch?v=aIw-VcZJgtg
    http://www.youtube.com/watch?v=MTMwjYgJsFQ
    http://www.youtube.com/watch?v=EOY-_-WTZyI

    I think it’s very clear that audiences are raising their expectations of AI over time, and are getting better at seeing flawed AI and aren’t putting up with it. They may not call it “AI,” but that’s what it is, and if we ignore all of that feedback and tell ourselves that “AI does not sell outside of strategy games,” we’re doing ourselves and our customers a huge disservice.

    I believe that AI is one of the reasons Skyrim is one of the best-selling and most-highly-awarded games of all time. Imperfect though it is, it’s by far one of the best AI systems out there, and contributed hugely to the success of the game.

    The reception of Skyrim shows how badly players WANT better AI, and how terribly we keep letting them down by continuing to let it slide.

    • CraigM says:

      While you are largely correct in that consumers are demanding more from AI, I have to disagree about it as a design priority in AAA. Having followed many such discussions, and knowing Shamus’ viewpoint a bit, there is a distinct differentiation in how people respond to AI before and after release.

      Before release, AI does not sell a game. Most AAA games are hands-off until late. AI bugs and flaws are often not discovered until the masses play with them for a bit. Think about the progression of SimCity: it took a few days for people to realize how borked the AI really was. It’s not something that really gets picked up in release reviews, preview events, E3, or other traditional marketing venues. What sells games at this point (and that’s all publishers really care about, those precious day-one sales) is the shinies.

      So with that in mind, no, I don’t believe good AI will help sell AAA games. It will make them better games and give them longer lifespans, but do nothing to build hype. There are expectations for better AI, but such expectations usually only get expressed AFTER they have your money. At which point companies like EA don’t give a toss about your petty demands.

      So when looked at through that lens, yes, new hardware will demand more graphics to justify it. With this demand, resources are shifted to things that can sell early copies. Those things are not underlying structural elements, but the shiny graphics and wow moments. So new hardware —> more graphical demands —> more development focus on graphics —> AI relegated to low priority.

      • Paul Tozour says:

        Totally disagree. Great AI already HAS sold AAA games, Skyrim in particular. I don’t think the sales and critical acclaim of Skyrim would be anywhere near what it was if they hadn’t worked so hard to put together one of the best game AI systems to date.

        The level of graphical fidelity in games is very quickly getting to the point where the human eye can’t tell the difference from one generation to the next. We’ve hit the point of diminishing returns on graphics. Spending more money on graphics than we already do isn’t going to help sell games when what players want is innovation.

        AI offers this huge, massive treasure trove of design tools that we could be using to improve our games if we approached it the right way.

        But you’d rather just look at the AI we have right now — and games as they currently are — and say, “ahh, players don’t really notice it.”

        Nonsense! Absolute nonsense.

        There’s a huge gap there where we’re failing to do our jobs, and players absolutely do notice!

        We’re not even TRYING to do a fraction of what we could be doing with AI yet! We have all these tools and we’re hardly even trying to use them.

        And that attitude, and the attitude of “good AI doesn’t sell outside of strategy games,” is the core of the problem! How can we expect designers to ever take the reins and use AI as a serious design tool if that’s our attitude?

        We continually discount AI, causing us to be irrationally terrified of it and minimize its use in games (instead of treating it like the first-class design tool that it is), causing us to release games with bad or over-constrained AI, and then we turn around and blame the *AI* for that instead of blaming the game designers for being too afraid to use it or learn how to design it properly, and the vicious cycle just propagates.

        It’s a vicious cycle, and that attitude isn’t helping.

        • Xapi says:

          2 points, one in favour of each of you guys:

          1 – Craig talked about day-one sales not being affected by AI; Paul replies that overall sales are. So, Paul, you’re not really responding to Craig’s point.

          2 – A line of games where AI really does sell (maybe not day 1, but over several iterations) is sports games, in particular football (soccer), which is what I know.

          I know sales depend on many factors, but EA Sports’ FIFA has in the last few years gone way ahead of Konami’s Pro Evolution Soccer in terms of the AI of non-controlled players, and at least in my group of friends this has gotten us to go from PES to FIFA (I bought PES ’09, and afterwards bought FIFA ’12 and ’13).

          • DeroBoy says:

            Yeah, I am always surprised that discussions about game AI never mention sports games. I imagine team sports AI must be some of the most complex and demanding to create, particularly when attempting to simulate “human error” in an AI’s performance. I would love to read an article where a sports game AI programmer explains their experiences in this area.

        • TSi says:

          Why use Skyrim as an example when its AI is pure **** and everybody knows it? I think you’re mistaking the AI for the quest system, because the AI in Skyrim isn’t much different from the AI in Baldur’s Gate or any other old 2D RPG. Players didn’t buy this game for its AI and, despite its large number of bugs, it’s still a great game because it doesn’t stop there.

          It’s always the same: AI is and always was used in marketing bullshit the same way as graphics, because it’s easier to show off in stupid and nonsensical presentations. Take Half-Life 2 as an earlier example too if you want/need. That game was awesome, so, just like with Skyrim, it didn’t bother players not to experience the guards trying to force themselves into a room you blocked.

          This is not the case for SimCity, where the AI is its CORE, so obviously, if it doesn’t work then the game is broken in its entirety. The crisis arose mainly because of how most gaming websites work, and I’m not entering into details here (keywords: NDA, Games Media Awards, or simply Doritos-gate). So yeah, players don’t have any more firewalls; they get the final product at the same time, when it’s too late to warn them.

          The only game I recall having great AI as well as the best graphics for its time was F.E.A.R. (and only the first one). They didn’t need extra CPU. Instead, they used fewer script-loving designers and planned things very well.

      • Fleaman says:

        Good point here. Ass Creed is a good example here: AI is of great importance in a semi-stealthy game like Ass Creed, and a lot of the game’s quality and essential playability is dependent on the AI above all other factors… But that’s really hard to communicate in an ad, so the marketing push behind Black Flag has all got to be behind making the best-looking jungle ever. We can only hope they also get around to fixing that thing where the New York redcoats keep attacking you for no reason when you walk past the church.

    • Corran says:

      Please allow me this tangent:

      Seeing how many people post YouTube links on comments, forums, etc., isn’t there some kind of plugin script around that will grab the title of the videos from YouTube and replace the link text (which is just the base url) with the actual title of the video?

      Seems like a great tool to have and super easy to create.
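A sketch of the script Corran describes, using YouTube’s public oEmbed endpoint (which returns JSON metadata for a video URL without requiring an API key). A real browser plugin would do the equivalent asynchronously in JavaScript; this is just the core lookup:

```python
import json
import urllib.parse
import urllib.request

def oembed_url(video_url):
    # Build the oEmbed query for a given YouTube video URL.
    query = urllib.parse.urlencode({"url": video_url, "format": "json"})
    return "https://www.youtube.com/oembed?" + query

def youtube_title(video_url):
    # Network call: fetch the metadata and pull out the title field.
    with urllib.request.urlopen(oembed_url(video_url)) as resp:
        return json.load(resp)["title"]
```

Swapping each bare link in a comment for `youtube_title(link)` is then a matter of a regex pass over the page text.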

  26. Steve C says:

    I’m not a programmer so can someone explain why developers don’t reuse code for the AI that worked in a previous game?

    Take F.E.A.R for example. Why didn’t they lift the AI code for the sequel? Yes I’ve read Shamus’ articles on why reusing code is so difficult but there’s got to be a point where a team on a multimillion dollar project can wrap their head around something that they know works. Games reuse engines all the time. Why can’t they reuse AI engines?

    Even if the code isn’t helpful wouldn’t the internal documentation for FEAR’s code be a template on how to make something that works really well? It just confuses me why the wheel has to be reinvented every single time.

    • Trithne says:

      Because they sack everyone after every game and delete the source to make room for the next one.

    • Paul Tozour says:

      Steve: It’s because every game is different, and design tends to need different things from the AI from one game to the next.

      In a lot of cases, they can build on top of existing AI systems. If you look at the way the AI has progressed in the Elder Scrolls games, for example, the AI clearly got better with each iteration from Arena to Daggerfall to Morrowind to Oblivion to Skyrim. The team did what it could to take the aspects of the AI that worked from each of those games and build on it in the next game.

      But a lot of times, you can’t do that. I worked on Metroid Prime 2 and 3, and the AI in each game changed a lot, just because there were so many different creatures that all worked differently. Even a character with the same identity, like Dark Samus, had to be rewritten for each battle because each one played out very differently (3 Dark Samus battles in Metroid Prime 2 and one in Metroid Prime 3).

    • Alan says:

      It depends. :-) But likely possibilities:

      Not invented here. Common complaints are it’s too messy to use and I can write something cleaner/more realistic/faster.

      No longer have the source to the old game. It’s depressingly common to literally lose track of old source code in the software industry. Probably not surprising in games with such rapid team turnover.

      The old AI code is deeply entangled with the old graphics and physics engine, which you’re also junking. Disentangling it can be incredibly hard. (Arguably it would make sense to build something more standalone, but doing so takes more time in the short term and few companies these days are willing to spend a man-month today to save a man-year next year.)

      The old AI code doesn’t fit the new game’s design. “We want lots of cool staged set pieces” (Call of Battlefield Honor) has different needs from “We have relatively predictable AI so the player can more easily create plans” (Thief) and both are different from “We want a highly emergent gameplay with no two run throughs being the same.” (FEAR)

      • Steve C says:

        “It’s too messy to use and I can write something cleaner/more realistic/faster.”

        I get that. Shamus made a few blog posts about it too. But a lot of that is determining “will this code work for me or not?” Which is a huge timesink that may not pay off. Except in this case (FEAR, Thief) you’ve got a clear idea of what the end result will be. Both of those games were great specifically because of the AI, not because of the IP. The code itself might not be of any use but the thought process and flow of the code would be useful. Wouldn’t it?

        Putting aside the possibility of gross incompetence in deleting all the old game’s data, code and documentation… let’s assume that the sequels have access to the original code because they bought/made it. My reasoning is that great games tend not to have many examples of gross incompetence and still be great.

        Think about how many games have used the Quake engine or the engine. There are some very different games all using the same engine. Why not create, use and sell an AI engine? Wouldn’t that have value enough to sell?

        As for the Dark Shamus example I’ve always thought of that as boss scripting. Sure it’s AI but you really want boss battles to be unique. I was referring to more generalized AI. For example the reactions of a guard in detecting then reacting to a player. Or a hostile animal. Or a zombie. Those are pretty common across many games. It just seems to be a waste not to come up with standardized templates when you find an example of AI that works really well and is common.

        • Paul Tozour says:

          Steve: Believe me when I say there was an awful lot more to the Dark Samus battles than “scripting.”

          There are some AI tools out there (behavior tree or state machine editors, pathfinding systems, etc), and some call themselves “engines,” but the bottom line is that there is a ton of work involved in properly fitting the AI for any given game to meet the game’s design goals and constraints, the details of the environments they have to interact in, and all the unique game rules and interactions for any given game.

          There is only so much you can do with “standardized templates.” Yes, you can build on existing AI systems, the way Bethesda did with the Elder Scrolls: Daggerfall -> Morrowind -> Oblivion -> Skyrim, but at the end of the day, any given game is going to bring its own unique set of AI challenges.

          So, I am not arguing against code reuse, but it’s much less useful for something like game AI than it is for, say, graphics and physics.

          Do a Google search on the “Photoshop of AI” debate, and you’ll find some good threads on the subject. Michael Mateas’ rebuttal of the idea (summarized on Dave Mark’s “IA on AI” blog) sums up my feelings very concisely.
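For reference, the behavior-tree tooling Paul mentions can be tiny at its core; the per-game work is in writing the leaves and shaping the tree, not the engine. A minimal sketch with invented guard behaviours:

```python
# Minimal behavior tree: selector/sequence nodes compose leaf behaviours.
def selector(*children):
    # Succeeds on the first child that succeeds (tries them in order).
    def run(state):
        return any(child(state) for child in children)
    return run

def sequence(*children):
    # Succeeds only if every child succeeds, in order.
    def run(state):
        return all(child(state) for child in children)
    return run

# Invented leaves for a guard:
def can_see_player(state): return state.get("player_visible", False)
def attack(state): state["action"] = "attack"; return True
def patrol(state): state["action"] = "patrol"; return True

# "Attack if you can see the player, otherwise patrol."
guard_tree = selector(sequence(can_see_player, attack), patrol)
```

The `selector`/`sequence` pair is the reusable part; everything in the leaves is game-specific, which is Paul’s point about why wholesale AI reuse is harder than engine reuse.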

          • Steve C says:

            I’m guessing this is the one you were referring to.
            Interesting. Ty. At least I don’t feel like I was talking out of my ass, since other people more knowledgeable than me have suggested the same thing. I see now why it wouldn’t work. I still think it could work in the future, especially after 2045’s singularity. :-)

        • Fleaman says:

          You need to never mix up Samus and Shamus again in your life.

  27. Overmind says:

    The funniest thing is that the AI behaviour the devs of the new Thief say is only possible on new hardware was already used extensively in the original Thief: The Dark Project in 1998.

  29. somebodys_kid says:

    I would love to listen to a 60 minute Diecast special where Shamus and these three AI devs have a good chat about the current state of game AI and how to improve it. I’d even pay money for that. That seems like too much extra work for four already very busy people though. A man can dream…
    And may I also reiterate my admiration and gratitude for this site and its commenters for fostering a community where this level of discussion can take place. I sometimes wonder if this website is even on the same Internet as some other sites…

  30. somebodys_kid says:

    On a related note: remember 10+ years ago when S.T.A.L.K.E.R. was being hyped up for its super-awesome creature AI? And then, when it was finally released years late, the AI turned out to be… average? I’d very much like to know the inside scoop on that. And wasn’t the AI system part of what caused the delays? STALKER is one of my favorite games, primarily because of its atmosphere and setting. The AI is neither great nor awful, simply serviceable. Pity the devs’ ambitions could not be met.

    • I may be remembering it incorrectly, but I thought the AI was better than average, especially when it came to hunting me down (usually when I failed to be sneaky enough). The only real flaw I can recall, and maybe this is why it wasn’t spectacular, was the habit of mobs to get hung up or stuck on the level geometry, especially the dogs.

      The other weird thing I remember was the guy you had to rescue from a locked cell in a building. He would turn to face you, and he’d continue turning as you walked around him… but his feet stayed planted on the floor. It’s like he had the waist of a Ken doll, allowing his torso to spin 360 degrees.

  31. KenTWOu says:

    (I also played the sequel, where they abandoned the smart, unpredictable and dynamic enemies for the bog-standard target dummies we get in most shooters…)

    Monolith tweaked the F.E.A.R. 2 AI on the highest difficulty level via a patch, so it wasn’t that bad.
