“Behaviors” is my catch-all for the complexities of having the AI do different things in different situations. This might be group-level stuff, like the following (there’s a rough sketch of a selector for these after the list):
- Spread out, because a bunched-up team is vulnerable to grenade attacks.
- Regroup, because the team is getting picked apart.
- Fortify, because there is some central goal that needs to be defended above and beyond the lives of the team members.
- Flank, because a couple of members have the player pinned down.
- Rush, because the battle has reached a boring stalemate and a banzai charge should shake up the complacent player.
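Just to make the idea concrete, here’s a minimal sketch of what a utility-style selector for these group behaviors might look like. All of the names and thresholds here (SquadState, chooseGroupBehavior, the magic numbers) are hypothetical, and a real system would weigh far more factors:

```cpp
// Hypothetical sketch of a utility-style group-behavior selector.
#include <algorithm>
#include <string>
#include <vector>

struct SquadState {
    int   casualtiesRecently;   // members lost in the last N seconds
    float clusterRadius;        // how bunched-up the squad is, in meters
    bool  playerPinned;         // a couple of members have the player suppressed
    float secondsSinceContact;  // time since anyone landed or took a hit
    bool  guardingObjective;    // there's a goal worth more than our lives
};

struct Behavior {
    std::string name;
    float       score;
};

// Each behavior gets a score from the current situation; the highest wins.
std::string chooseGroupBehavior(const SquadState& s) {
    std::vector<Behavior> options = {
        {"Spread",  s.clusterRadius < 3.0f    ? 0.80f : 0.10f},  // grenade bait
        {"Regroup", s.casualtiesRecently >= 2 ? 0.90f : 0.00f},
        {"Fortify", s.guardingObjective       ? 0.70f : 0.00f},
        {"Flank",   s.playerPinned            ? 0.85f : 0.00f},
        {"Rush",    s.secondsSinceContact > 30.0f ? 0.60f : 0.00f},  // break the stalemate
    };
    auto best = std::max_element(options.begin(), options.end(),
        [](const Behavior& a, const Behavior& b) { return a.score < b.score; });
    return best->name;
}
```

Even in a toy like this you can see the trouble coming: every one of those thresholds is a tuning knob, and every knob is a place for the behavior to go subtly wrong.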
“Behaviors” also covers the individual decisions being made: when to retreat, call for backup, rush, or defend; which weapon to use; when to throw a grenade.

It always feels like “fixing” just one situation will make the AI more robust, but often adding to the logic simply moves the failure around rather than eliminating it. The perfect example is detailed in this old post, where I talk about the difficulty of having the AI react reasonably to a fallen comrade. Right now, most games have the AI spot a body and immediately assume they have just come upon a murder scene. This is true even if they’re ten meters away and the victim is apparently resting peacefully in their bed. It turns out that in trying to fix this you can end up chasing your tail quite a bit. You could write all sorts of code, add special cases, and record lots more dialog to give the AI many different modes of behavior, and in the end you’ll still have lots of cases where the AI makes a complete ass of itself.
What should the AI do when a door is suddenly open? When objects are missing? When a tank of flammable stuff goes up in a huge fireball? (Hint: Explosions do not imbue the people just outside the blast radius with magic psychic powers to know the exact location of the person who set off the explosion.)
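Here’s a hedged sketch of how that special-case treadmill tends to play out in code, using the body-discovery problem. Every name here is made up for illustration; the point is that each patch narrows one failure case and opens another:

```cpp
// Hypothetical sketch of the special-case treadmill. The naive version is
// one line; every "fix" bolts on another context check, and the failure
// just migrates to whichever case you haven't handled yet.
enum class Reaction { Ignore, Investigate, Alarm };

struct BodyStimulus {
    float distanceMeters;
    bool  inBed;          // victim apparently resting peacefully
    bool  bloodVisible;
    bool  sawItHappen;
};

Reaction reactToBody(const BodyStimulus& s) {
    if (s.sawItHappen)              return Reaction::Alarm;       // fair enough
    if (s.distanceMeters > 8.0f)    return Reaction::Investigate; // patch #1: no psychic murder detection at range
    if (s.inBed && !s.bloodVisible) return Reaction::Ignore;      // patch #2: maybe he's just asleep?
    // ...patch #3, #4, #5: open doors, missing objects, distant explosions.
    return Reaction::Alarm;
}
```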
This is where the real cost of AI comes from: it’s complex and time-consuming to test. As your system grows, you’ll spend more time running scenarios and simulations and less time actually writing code. Perhaps the programmer sees some bad behavior: the AI seems too cautious and doesn’t move often enough in combat. But why? Is the AI engaged in the wrong behavior, defending when he should be moving? Is the “group AI” out of whack, so the group isn’t receiving orders? Is the pathing hosed, so the AI can’t figure out how to get there? Or does the pathing work, but the AI is under-valuing certain locations? Or is the AI over-estimating the danger involved in moving? Or is the AI caught between a couple of conflicting behaviors? (Example: Hey, I’m behind cover, but I need to move. I’ll move from A to B. 1/60th of a second later: Hey look! Point A is really a lot closer than B! Why don’t I go there instead? In fact, I’m there already! What luck!) Bugs are thus hard to spot amongst the noise, and hard to tell apart from design flaws. And even when you do spot them, it’s hard to identify the cause. And hard to fix without simply breaking something else. And the better your AI gets, the worse all of these forces become.
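For the A/B flip-flop in particular, one common mitigation is to commit to a decision for a while instead of re-scoring it every frame. This is a minimal sketch under that assumption; the names (MoveDecision, pickBestCoverPoint, the timing constants) are all hypothetical:

```cpp
// Hypothetical sketch: commit to a chosen destination until it's reached
// or a re-plan interval expires, instead of re-evaluating every frame.
#include <cmath>

struct Vec2 { float x, y; };

float dist(Vec2 a, Vec2 b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

struct MoveDecision {
    Vec2  target{};
    bool  committed   = false;
    float commitTimer = 0.0f;
};

// Called every tick (dt ~ 1/60s). pickBestCoverPoint is the per-frame
// scoring function whose answer can flip as the agent moves.
void updateMovement(MoveDecision& d, Vec2 myPos,
                    Vec2 (*pickBestCoverPoint)(Vec2), float dt) {
    const float kReplanInterval = 2.0f;  // seconds between re-evaluations
    const float kArriveRadius   = 0.5f;  // meters

    d.commitTimer -= dt;
    bool arrived = d.committed && dist(myPos, d.target) < kArriveRadius;

    if (!d.committed || arrived || d.commitTimer <= 0.0f) {
        d.target      = pickBestCoverPoint(myPos);  // scores may now favor A over B...
        d.committed   = true;
        d.commitTimer = kReplanInterval;            // ...but we won't ask again for a while
    }
    // moveToward(myPos, d.target) would go here.
}
```

Of course, commitment just trades one failure for another: now the AI will stubbornly run to a cover point that stopped being safe half a second ago. The failure moves around; it doesn’t go away.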
Despite my doom and gloom, I’m actually encouraged by how AI has been improving a bit over the last few years. Black & White got my hopes up a few years ago. I heard about its complex behaviors and activities before release, and I think I imagined that we’d hit some sort of breakthrough with AI, similar to what BSPs did for graphics in the early ’90s. The reality of the game didn’t come close to living up to my imagination, and I think it was a sobering moment for everyone trying to move AI forward. It was possible to spend huge amounts of time and energy on AI and wind up with something that often felt like an accident when it did something right, was infuriating when it went wrong, and very often wasn’t all that different from a system built on simple rules and randomness.
We haven’t gotten a huge leap forward, but we are seeing gradual progress. Still, I don’t think we need to worry about Skynet anytime soon.
