AI Follies: Behaviors

By Shamus Posted Tuesday Aug 25, 2009

Filed under: Programming

“Behaviors” is my catch-all for the complexities of having the AI do different things in different situations. This might be group-level stuff, like:

  1. Spread out, to reduce vulnerability to grenade attacks.
  2. Regroup, because the team is getting picked apart.
  3. Fortify, because there is some central goal that needs to be defended above and beyond the lives of the team members.
  4. Flank, because a couple of members have the player pinned down.
  5. Rush, because the battle has reached a boring stalemate and a banzai charge should shake up the complacent player.
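
In code, a chooser like this is often nothing fancier than a first-match rule cascade over a few squad-wide signals. A minimal sketch of the idea; every field name and threshold here is invented for illustration, not taken from any particular engine:

```python
from dataclasses import dataclass

@dataclass
class Squad:
    # Hypothetical aggregate signals a squad-level AI might track.
    starting_size: int
    alive: int
    objective_threatened: bool = False
    player_pinned: bool = False
    seconds_since_contact: float = 0.0
    recent_grenade_deaths: int = 0

def choose_group_behavior(s: Squad) -> str:
    """First rule that matches wins; the order encodes priority."""
    if s.objective_threatened:
        return "fortify"        # the goal outranks the members' lives
    if (s.starting_size - s.alive) / s.starting_size > 0.4:
        return "regroup"        # team is getting picked apart
    if s.recent_grenade_deaths > 0:
        return "spread_out"     # reduce grenade vulnerability
    if s.player_pinned and s.alive >= 2:
        return "flank"
    if s.seconds_since_contact > 30:
        return "rush"           # banzai charge to break the stalemate
    return "hold"
```

The ordering of the rules is itself a design decision; shuffle them and the squad gets a different personality.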

Behaviors also refers to the individual decisions being made, like when to retreat, call for backup, rush, or defend. Which weapon to use, and when to throw a grenade. It always feels like just “fixing” one situation will make the AI more robust, but often adding onto the logic will simply move the failure around rather than eliminate it. The perfect example is detailed in this old post, where I talk about the difficulty of having the AI react reasonably to a fallen comrade. Right now, most games have the AI spot a body and immediately assume they have just come upon a murder scene. This is true even if they’re ten meters away and the victim is apparently resting peacefully in their bed. It turns out that in trying to fix this you can end up chasing your tail quite a bit. You could write all sorts of code, add special cases, and record lots more dialog to give the AI many different modes of behavior, and in the end you’ll still have lots of cases where the AI makes a complete ass of itself.
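
The usual band-aid for the fallen-comrade problem is to grade the reaction by context instead of jumping straight to “murder scene.” A sketch of the idea; the situations and thresholds are invented for illustration:

```python
def react_to_body(distance_m, body_in_bed, combat_heard_recently):
    """Pick a reaction level for a spotted body: "ignore",
    "investigate", or "alarm". A body glimpsed from ten meters
    away, apparently asleep in a bed, should not instantly read
    as a murder scene."""
    if body_in_bed and not combat_heard_recently:
        return "ignore" if distance_m > 3 else "investigate"
    if combat_heard_recently:
        return "alarm"
    # An unexplained body in the open: go look before panicking.
    return "investigate" if distance_m > 5 else "alarm"
```

And even this tiny version immediately invites new special cases (a corpse tucked into a bed after a firefight, say), which is exactly the tail-chasing described above.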

What to do when a door is suddenly open? When objects are missing? When a tank of flammable stuff is ignited in a huge fireball? (Hint: Explosions do not imbue the people just outside of the blast radius with magic psychic powers to know the exact location of the person who initiated the explosion.)

This is where the real cost of AI comes from: It’s complex and time-consuming to test. As your system grows, you’ll spend more time running scenarios and simulations and less time actually writing code. Perhaps the programmer will see some bad behavior. Perhaps the AI seems too cautious and doesn’t move often enough in combat. But why? Is the AI engaged in the wrong behavior, and defending when he should be moving? Is the “group AI” out of whack, so the group isn’t receiving orders? Or is the pathing hosed and the AI can’t figure out how to get there? Or does the pathing work but the AI is under-valuing certain locations? Or is the AI over-estimating the danger involved in moving? Or is the AI caught between a couple of conflicting behaviors? (Example: Hey, I’m behind cover, but I need to move. I’ll move from A to B. 1/60th of a second later: Hey look! Point A is really a lot closer than B! Why don’t I go there instead? In fact, I’m there already! What luck!) Bugs are thus hard to spot amongst the noise, and hard to tell from design flaws. And even when you do spot them, it’s hard to identify the cause. And hard to fix without simply breaking something else. And the better your AI gets, the worse all of these forces will be.
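
That A-to-B flip-flop happens because the AI re-evaluates from scratch every frame, with no memory of its last choice. The standard fix is commitment plus hysteresis: stick with the chosen destination for a while, and only switch when a rival spot is meaningfully better, not just fractionally closer this frame. A minimal sketch; the class and the numbers are hypothetical:

```python
class MoveDecision:
    """Commit to a destination instead of re-shopping every frame."""

    def __init__(self, reconsider_after=2.0):
        self.target = None
        self.committed_for = 0.0
        self.reconsider_after = reconsider_after  # seconds of commitment

    def update(self, dt, candidates, score):
        """candidates: list of positions; score: callable, higher is better."""
        self.committed_for += dt
        if self.target is None or self.committed_for >= self.reconsider_after:
            best = max(candidates, key=score)
            # Hysteresis: only switch for a clear 25% improvement,
            # never for "A is fractionally closer this frame."
            if self.target is None or score(best) > 1.25 * score(self.target):
                self.target = best
                self.committed_for = 0.0
        return self.target
```

The commitment window and the 25% margin are exactly the kind of knobs that then need playtesting, which is the testing cost described above.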

Despite my doom and gloom, I’m actually encouraged at how AI has been improving a bit over the last few years. Black & White got my hopes up a few years ago. I heard about its complex behaviors and activities before release, and I think I imagined that we’d hit some sort of breakthrough with AI, similar to what BSPs did for graphics in the early ’90s. The reality of the game didn’t nearly live up to my imagination, and I think it was a sobering moment for everyone trying to move AI forward. It was possible to spend huge amounts of time and energy on AI and wind up with something that often felt like an accident when it did something right, was infuriating when it went wrong, and very often wasn’t all that different from a system built on simple rules and randomness.

We haven’t gotten a huge leap forward, but we are seeing gradual progress. Still, I don’t think we need to worry about Skynet anytime soon.

 


32 thoughts on “AI Follies: Behaviors”

  1. equinox216 says:

    Still, I don't think we need to worry about Skynet anytime soon.

    I don’t know, my wife’s coffee machine wouldn’t work the other morning; instead of cutting on with the timer, it just chortled in that liquid way. She boiled water and passed it through the filter to make a weak cup for herself. A half hour after she did that, the coffee machine kicked in. Later that afternoon, and four test runs down, she proclaimed it fine.

    The coffee machine NEEDED COFFEE TO GET GOING IN THE MORNING. The war with the machines can’t be far behind.

    Topically: Would developing a player model to send against the game AI, one that just runs through the gamut of possible player behaviors, make testing easier?

    I don’t know if you’ve read my longish (and often off-sub-topic until now) AI thoughts in your other posts, but there’re more questions I’ve got about this kind of thing. Does anybody build in ‘retreat and/or regroup’ as an in-game tactic/backstage reset button for maladaptive AI? Obviously, ‘retreat’ can itself be maladaptive, as well. I guess part of my problem is not having played ‘modern’ FPS examples, so I don’t know what’s being done and what’s being done well.

  2. Yar Kramer says:

    I think my favorite idea was the one Shamus had ages ago about AI scripts, which users could create and modify and stuff and then put up for download on mod-databases and whatnot. (Thus, um, sort of delegating the problem to the fans, but …)

  3. UTAlan says:

    It seems like the problem with AI will never be fixed if it takes the budget for one game to try to solve it (or even just part of it). It’d be nice if some wealthy investor decided to start a video game company that began by spending years in R&D of AI. Once they burned through a ton of money and time and had a “passable AI” system, they could start making games that use this system.

    It’s an idea, anyway.

  4. Factoid says:

    I wonder how well AI improvements permeate through the community. I mean if company X develops an awesome pathing algorithm, what incentive do they have to share it with anyone? And unlike some other things just seeing it in action doesn’t mean you’ll be able to reverse engineer it.

    The fact that the AI acted non-stupidly in some scenario tells you nothing about how the developer accomplished that feat.

    I assume there are academics out there who write white papers and share things in trade journals, but AI systems often seem to reside with a single developer. Maybe that company will use it in several games, but rarely will it be packaged and sold to another house for their use.

    I would guess that the components of a truly groundbreaking AI are already out there. Someone has totally nailed pathing, but you never noticed because some other aspects of their simulation sucked. Someone may have come up with a truly novel solution to enemy detection and alert states, but it went into a game that didn’t rely on stealth so nobody noticed.

    I’m not suggesting that every developer switch to an open-source model, but it’s interesting to think what might happen if they did.

  5. Peter H. Coffin says:

    Yar: That may not solve a problem, but it would certainly make things a lot more interesting. 500 users can sure come up with more tweaked AI models than even a dozen developers can. Combine that in an online environment, and some tracking of how often each variant dies, and you get actual evolutionary pressures, with a generation “lifespan” that could be as short as a day or a couple of hours.

  6. Robyrt says:

    On user-generated AI: This happens with RTS games all the time – as they tend to have a large, devoted fanbase. Total Annihilation ended up with an AI hand-crafted for every single map, which doesn’t solve any long-term problems but feels much better.

  7. Matt K says:

    @ UT Alan: The incentive would be to develop the AI and then license it to game developers, similar to how game engines are licensed now. It would be nice to see, and I would hope to see it sooner rather than later, since IMO the other budget hog, graphics, has pretty much hit the ceiling.

  8. Eric Meyer says:

    Since the topic of AI engines came up: Kynapse is such a product. There may be others, of course.

  9. Slippery Jim says:

    During the development of Black and White, IIRC the creature started quivering on the spot and it took a while for them to figure out it had identified itself as a potential food source and was trying to eat itself.

    Kind of a fun thought :¬)

  10. equinox216 says:

    Slippery Jim@9: I love that kind of ‘oops, emergent!’.

    “OM NOM NOM OH GOD WHY WAS I MADE SO DELICIOUS?”

  11. Rutskarn says:

    The robot revolution will fail the moment we find a certain kind of corner they have trouble getting to. They’ll sort of run impotently up against a wall for a minute, fire at it until they’re out of ammo, and then self-destruct.

  12. JPLC says:

    Although not entirely an AI engine, NaturalMotion’s Euphoria comes to mind as something that can be licensed that does contain AI (such as “falling properly” without being just a ragdoll). Things like this show hope for more licensable AIs in the future.

    On a similar note, I think just as much attention needs to be put into a more social AI as well. These endeavours are usually attempted outside of the video game world though (the various contests that centre around the Turing Test or similar tests, for example). The only game I can think of that’s aiming for more of a social AI approach is Bot Colony. If there are others in development, I’d love to hear about them.

  13. JPLC says:

    Hrm, the comment I was going to make seems to have eliminated itself. The gist:

    While combat AI and the like is important, I think more games need to focus on social AI as well. Most work on social AI seems to be done outside of the video game scene, though (like in the various contests that are similar to the Turing Test). In fact, the only game on the horizon I can think of that heavily utilizes a social form of AI is Bot Colony ( http://botcolony.com/ ). An early/rough gameplay video can be found here: http://www.youtube.com/watch?v=Jr5YrOJENPU&feature=channel

  14. mookers says:

    @equinox216 (#1)…

    [The coffee machine NEEDED COFFEE TO GET GOING IN THE MORNING. The war with the machines can't be far behind.]

    That was the funniest thing I’ve read all day :D

    (Granted, over here the day is quite young. But still.)

  15. Jonathan says:

    This triggered a memory. Favorite memory from Rainbow Six: Rogue Spear (FPS, 10 years ago): Running a co-op mission in an airport… chucking a grenade through a door into a shed that we believed there were terrorists in.

    We hear a panicked voice:
    “Grenade!
    *Grenade!*
    GRENADE!”

    BOOM!

    1) hilarious
    2) realistic
    3) totally unexpected

    I’ve still never heard a reaction like that from a game at any other time.

  16. Khizan says:

    My favorite AI moment was in FEAR. I was pinned down behind some boxes by a group of enemies behind another stack of boxes and there was a group going around to flank me. I blindly threw a grenade over my cover towards the guys pinning me down. “OH S***!” *kaboom* I had to pause the game because I couldn’t stop laughing. The reaction was perfectly believable.

  17. Danath says:

    FEAR is full of great moments like that. Unfortunately, I found grenades to be pretty useless other than making the enemy scatter, because they were REALLY good at running away from grenades unless you put one somewhere they couldn’t avoid it (like when one of their vans is opening the back doors and you drop the grenade right there, nowhere to go).

    Their behaviors were great; as more people on their team got injured, they would be more likely to back off or call for backup. Not to say it was perfect… but it was one of my favorites for sure. (Despite my FEAR gushing I am WELL aware of the game’s many flaws; the AI and its behaviors aren’t among them, though.) And yes, the “OH SHIT!” when you threw a grenade was absolutely hilarious.

  18. Martin Annadale says:

    Shamus, what is your opinion on the AI in Commandos: Behind Enemy Lines? Back then I thought it was quite good. The Nazis would run towards the sound of an explosion and start searching from there. They reacted to bodies in a way that’s realistic for a war situation, although if the alarm finally passes with the player’s team never discovered, they just go about their business, ignoring the corpses still lying about. That bothered me a bit. The number of patrols did increase during an alarm, though, so at least a mistake on the player’s part wasn’t without consequence.
    I always felt that Commandos was the best sneak-em-up until Thief. Thief was the best (obviously), even though the thing about a knocked-out body left on a bed still causing an alarm bothered me a lot too.

  19. Jabor says:

    Bugs are thus hard to spot amongst the noise, and hard to tell from design flaws.

    Arguably, a design flaw is a bug.

    But of course, what are you doing trying to debug a complex system just by observing its behaviour? Chuck a few (figurative) printf() statements in there and log what the AI is doing.

    Then you can pore over the logs and determine that yes, the AI has decided it needs to move, it’s started to move to B, and has then recalculated and determined that A is closer. Or that it’s decided it needs to move, but can’t find a reasonably safe path to any better location. Or that it hasn’t actually figured out that it needs to move despite the crate it’s hiding behind being shot to pieces.

    Yes, it might be a lot of data to dig through – but it’s certainly better than trying to guess at what the problem is. Of course, even better is pitting the AI against itself and stepping through the decision trees as the battle unfolds.

  20. Thijs says:

    Human behaviour is so complex that no designed system can ever imitate it. As human beings are geared to learning, we learn from feedback loops all the time. The best way to create a convincing AI is to let the system learn too. In chess this is already the case, and computers can beat human players now, most of the time. But also, more psychologically, AI can now learn categorisations in the same way humans do. Just show the computer lots of pictures of houses, and it can make the same quality of categorisation judgments a human can; it even makes the same errors as humans.

    A learning AI would especially work for determining strategies like the ones you describe above. Just hand your learning AI over to some hardcore gamers and let it try different strategies and learn from that. It should not have the goal of beating the players every time, of course.

  21. Magichanics says:

    I have always found it weird that the development of an A.I. seems to be done separately for each and every game. Why don’t developers recycle the A.I. code, or core algorithms thereof that handle detection, targeting and pathing, from one game and use it as a skeleton for another? It would seem to me that (apart from additional time spent porting) doing so would allow you to spend more time fleshing out NPC behaviour.

    In fact, why doesn’t anyone bother with writing a sort of “generalised” code that can be easily modified to fit within a particular game engine and to work with particular variables? It seems so cost- and time-inefficient to have to write the whole damn thing again. Or am I severely underestimating the difficulty of porting here?

  22. Rattus says:

    There actually are a few AI middleware packages, but mostly for navigation and pathfinding, I think (I read about those from time to time at http://aigamedev.com ).

    For example, the FEAR developers wrote a nice paper about the AI used in the game and actually put out an AI SDK for free download. And if I remember correctly, the team behind FEAR’s AI was hired from some university AI research group. So there is a lot of research in this area, but a lot of the time it’s not worth the extra effort.

    Remember the previews of Oblivion’s RADIANT AI? I was so looking forward to it. In the end they found that most of the interesting stuff was too uncontrollable and game-breaking (whole cities going into riots, the player getting to a city only to find it totally dead). Then they restricted it so much that I couldn’t tell the difference from any other dumb RPG AI of the past.

    Development costs are really high when trying to make AI work in a plausible way, because a lot of the problems have to be playtested, and it takes a lot of time just to spot some kinds of bugs. It’s just easier and quicker to do it in a simpler way (specific rules), and the end result is not much different. So why bother with it. (sob sob)

    The problem with a generic AI package is that every game has vastly different requirements for AI behavior, needs specific cases, etc.

    And for universal AI we also certainly lack the performance to run it with the game for every character. Every game thus has to find its own balance for how to do it right… I mean, playable.

  23. Jordi says:

    Another AI engine other than Kynapse (as mentioned by Eric Meyer) is Intia.

    I agree with Thijs in that I don’t believe human behavior can ever be programmed in through a set of rules and I also think that (machine) learning will provide a solution to this problem eventually. However, I don’t think machine learning has advanced nearly enough to evolve novel rules and behaviors for lots of situations. Perhaps it can be used to tune some parameters of already defined behaviors, but anything more would be tricky.
    Also, I think there is a general distrust of this method, because the programmer won’t know why the AI is behaving the way it is if he didn’t program it. Unless the AI just learns to tune some predefined values, chances are that the learning system will be a giant black box whose behavior is hard to predict. And if a normal AI is detected to fail at something during testing, the programmer might have at least some idea about why. But if the AI was evolved, this is probably not the case.
    Furthermore, I think that right now most AI mostly fails in rare niche cases. If a learning system hasn’t encountered a situation often enough, it will probably not learn to deal with it very well either.

  24. Matt K says:

    @Rattus, actually Oblivion’s “Radiant AI” was mostly hype and lies. The big new thing really was schedules for the NPCs (which had been done before); much of the rest was scripted events which, pre-release, were said to be otherwise, on top of Morrowind’s AI with a few patches and upgrades. I was pretty disappointed at the time too, but then again there was a lot to be disappointed with; that AI was far from what made me stop playing (the ridiculous quest choices are what did it).

  25. Rattus says:

    @Matt K: Dunno about lies, but hyped it definitely was.
    I remember reading some very interesting stories on the Oblivion dev blog about what the AI did, like guards overreacting to some “fight,” causing all the guards to come fight before the gates, and thieves stealing unguarded stuff from houses around the city.

    Actually, when I read it again, I think it was lies :-D

    Nevertheless, I would definitely still be playing Oblivion if it had been more of a sandbox game. What ultimately killed the game for me was finding myself destroying my twenty-something Oblivion gate tower and realizing they would all be the same, and there would be a few dozen more of them. Even a grinding MMO seems like interesting stuff compared to that.

    1. WJS says:

      Yeah, reading some of the pre-release blurbs after actually playing Oblivion (and in particular playing with the construction kit), it’s pretty obvious that large amounts of what was said were never even close to true. Why they thought it would be a good idea to lie so blatantly when they knew that players would not only get to play the game soon but also to poke around inside their NPCs, I don’t know. Or maybe I do – they knew that people are fickle goldfish, and that everything would be long forgotten before Skyrim came out.

  26. Primogenitor says:

    A distributed Black & White-like AI system could produce interesting results. Human player give direct feedback about “right” and “wrong” behaviour. The AI makes fuzzy choices between preset actions/behaviours (which could be custom content). Allow players to (optionally) feedback to a central server, which combines the data and provides global updates (a la Spore).

  27. Neil says:

    Obligatory comment that Skynet is already here.
    Obligatory link: http://en.wikipedia.org/wiki/Skynet_%28satellites%29
    Obligatory snarky comment about UK government officials needing to get out more.

  28. SolkaTruesilver says:

    I remember my suggestion about StarCraft’s AI scripting, and pitting scripts against each other. I wonder how nice it would be for modders to be able to go in and masterfully re-write Black & White’s creature AI… How much of it would be merely pre-scripting, and not real reactions to stimuli?

    If there is one element of the web I have infinite respect for, it’s the modders’ capacities. (Still waiting for The Sith Lords :-( )

  29. It struck me as weird that there are common engines people can use… e.g. game engines (Unreal, CryEngine, etc.) and physics engines (Havok), but until the last few comments I’d never heard of an AI engine, let alone a commercial one.

  30. Dave Mark says:

    Ah, Shamus… how much do I need to pay you for unwittingly bringing up almost all the points I address in my book, “Behavioral Mathematics for Game AI”?

    There IS work and improvement being done on these types of behavior selection algorithms. The problem is, many designers and programmers are stuck in the idea of using strictly rule-based approaches (i.e. “if X then Y”). As you alluded to above, this not only gets unwieldy rather quickly but also flat-out breaks in many circumstances.

    However, by changing to an approach that uses techniques such as “maximization of expected utility”, for example, many of the types of complex comparisons between very divergent choices can be sifted through quickly and effectively. What’s more, utility-based approaches scale far better and often require less programmer time. Designer tweaking, yes… but that’s where you want the bulk of the time being spent, since it is the designers who are looking for a certain “feel” to the AI. An intuitive, data-driven approach makes that easier for them.

    By tossing in the ability to fuzzy things up a bit with weighted randoms, etc., you can also strip away the monotonous predictability of agents but still let them select from reasonable actions. That is, rather than selecting the “winner” of the decision scoring process, let them select proportionally from all of the reasonably good ones.
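
    (A sketch of that selection scheme in Python, for illustration: score each option with a utility function, keep the ones within some fraction of the best score, and draw one weighted by score. The names and the 0.8 cutoff are invented here, not taken from the book.)

```python
import random

def pick_action(actions, utility, keep_within=0.8, rng=random):
    """Weighted-random selection among the reasonably good options,
    rather than always taking the single top-scoring "winner"."""
    scored = [(a, utility(a)) for a in actions]
    best = max(s for _, s in scored)
    # Keep anything scoring at least keep_within of the best...
    good = [(a, s) for a, s in scored if s >= keep_within * best]
    choices, weights = zip(*good)
    # ...then choose proportionally to score, to break predictability.
    return rng.choices(choices, weights=weights, k=1)[0]
```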

    Again, all of this is covered in my book. Also, for those of you in Austin here in a few weeks, I’m giving a lecture on using better AI techniques in MMOs. I will also be speaking at the GameX Summit in Philly this October on exactly the topic above… the “art” of crafting behaviors. See you there!

    (By the way, most AI middleware engines do NOTHING to help you craft realistic-looking behaviors.)
