Emergent Gameplay

By Shamus Posted Thursday Feb 2, 2006

Filed under: Game Design

The word “emergent” was a big gaming industry buzzword a few years ago. Like all buzzwords, it was eventually abused until it had lost most of its meaning, and then discarded. The original intention was to denote systems where AI had actual emergent properties. That is, a game that displayed AI with behaviors and abilities not specifically written by the game’s creators. This was a hot item to have, and pretty soon everyone claimed to have it. The word seems to be used to mean “good AI” now, which is entirely incorrect.

In the old days (around ten years ago), behavior in games was scripted. If you had a game where the player was (for example) a secret agent trying to sneak into an enemy base, the designer might have a script that drives a specific non-combat character (let’s say a scientist). The script might work like this (sketched in code after the list):

  1. If you see the player, cry out in surprise
  2. Run from your current location (ROOM A) into the next room (ROOM B)
  3. Go into the hallway (HALLWAY) and cry for help
  4. The guard there in the hall will run back through ROOM B, enter ROOM A
  5. The guard begins attacking the player.
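
To make that concrete, here is a minimal sketch of such a hard-coded script, in throwaway Python. Every name in it (the Actor stub, the room strings, the dialogue lines) is invented for illustration; real engines of the era used their own scripting systems.

```python
# A hypothetical hard-coded script in the style the article describes.
# All names here are invented for illustration.

class Actor:
    def __init__(self, name, location):
        self.name = name
        self.location = location

    def can_see(self, other):
        # Toy visibility: same room means seen.
        return self.location == other.location

    def move_to(self, room):
        print(f"{self.name} moves to {room}")
        self.location = room

    def say(self, line):
        print(f"{self.name}: {line}")


def scientist_script(scientist, guard, player):
    # The script blindly assumes the guard is alive, the doorway is
    # clear, and the layout never changes.
    if scientist.can_see(player):
        scientist.say("Aah!")             # 1. cry out in surprise
        scientist.move_to("ROOM_B")       # 2. flee into the next room
        scientist.move_to("HALLWAY")      # 3. cry for help in the hallway
        scientist.say("Help! Intruder!")
        guard.move_to("ROOM_B")           # 4. guard runs back through ROOM B
        guard.move_to("ROOM_A")           #    and enters ROOM A
        guard.say("Engaging intruder!")   # 5. guard begins attacking


scientist_script(Actor("Scientist", "ROOM_A"),
                 Actor("Guard", "HALLWAY"),
                 Actor("Player", "ROOM_A"))
```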

This will usually work for someone going through the game for the first time. To the player, it looks like they startle the scientist, who yells and runs out of the room. A few seconds later a guard runs in. Looks reasonable enough. But on subsequent attempts, the player is going to experiment with this situation (actually, they are experimenting with the script, but they don’t know it) by trying different things. What will happen? The goal is to produce behavior that makes sense, so anytime the script causes the actors to behave in nonsensical ways, the script has failed.

Let’s see:

What happens if you sneak by, kill the guard, and then come back and startle the scientist? He will run out into the HALLWAY and start screaming at the dead guard for help. Stupid. Then, deprived of further actions, he will continue to stand there. Total failure.

What happens if you stand in front of the door between ROOM A and ROOM B? When you startle the scientist, he will run TOWARDS you. (Stupid.) Because he’s blindly following a predetermined script, he’s going to keep running into you until you move or kill him. Total and absurd failure.

What if you startle the scientist, follow him into ROOM B and stay there? He will go into the HALLWAY, tell the guard, and the guard (who is also blindly following this script) will run right past you back to ROOM A. Once he gets there, he will turn around and come back. Mild failure.

Pretty soon it becomes apparent that this script works in a passable manner when dealing with the most obvious and common first-time behavior, and fails miserably if the player does anything unexpected. (I caught a few instances like this in the old Thief games.) Even worse, if the layout of the place is changed (say, a ROOM C is placed between ROOM A and ROOM B) then you have to change the script.

But how do you fix this? For a while the solution was to make the scripts more complex. “If the first guard is dead, run to this other guard over here.” “If the way is blocked, try this other route.” Pretty soon designers are playing whack-a-mole with various failure modes: fix one failure, and you introduce some new absurd behavior or situation elsewhere. At the same time, these once-simple scripts are growing into larger, more complex little programs. Before long it becomes clear that you need a better solution.
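
Here is how the hypothetical script above might look after just two such patches, still using the invented Actor stub from the earlier sketch (the alive flag is another invented attribute):

```python
def scientist_script_v2(scientist, guards, player, doorway_clear):
    if not scientist.can_see(player):
        return
    scientist.say("Aah!")
    if not doorway_clear:                  # patch: player blocking the door
        scientist.say("(cowers in the corner)")
        return
    scientist.move_to("ROOM_B")
    scientist.move_to("HALLWAY")
    scientist.say("Help! Intruder!")
    for guard in guards:                   # patch: first guard may be dead
        if guard.alive:
            guard.move_to("ROOM_A")
            guard.say("Engaging intruder!")
            return
    scientist.say("...")                   # no guards left: still no sane fallback


# The player has killed the hallway guard, so the script falls
# through to the guard in the barracks.
g1, g2 = Actor("Guard 1", "HALLWAY"), Actor("Guard 2", "BARRACKS")
g1.alive, g2.alive = False, True
scientist_script_v2(Actor("Scientist", "ROOM_A"), [g1, g2],
                    Actor("Player", "ROOM_A"), doorway_clear=True)
```

Two patches in and the script has already tripled in size, and it still can’t cope with anything the designer didn’t foresee.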

Emergent behaviors

The emergent approach is a lot more expensive in terms of programming time. I would guess that the difference is an order of magnitude or more. Instead of having a level designer concoct a script to anticipate player behaviors, you write a lot of general-purpose code. Stuff like:

  • A general-purpose routine for finding the best route from any given point A to any given point B, taking into account routes that are blocked by enemies. (A sketch of one follows this list.)
  • Some code for general-purpose combat: seeking out things to use for cover, behaviors for dealing with situations where the enemy (the player) might hide, avoiding friendly-fire situations.
  • Code for sensing the player. How much of an enemy (again, usually the player) needs to be visible, and for how long, before the AI “knows” they are there? How many player-generated sounds, and at what volume (distance), are required before the jig is up?
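
For instance, the route-finding piece might be a plain breadth-first search over a graph of connected rooms, treating rooms held by enemies as blocked. This is a minimal sketch under that assumption; the graph, the room names, and the function signature are all invented for illustration:

```python
from collections import deque

def find_route(graph, start, goal, blocked):
    """Return the shortest room-to-room route that avoids blocked rooms."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        room = path[-1]
        if room == goal:
            return path
        for neighbor in graph[room]:
            if neighbor not in seen and neighbor not in blocked:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no safe route exists


# A throwaway layout; add or remove rooms and the routine still works.
base = {
    "ROOM_A":   ["ROOM_B"],
    "ROOM_B":   ["ROOM_A", "HALLWAY"],
    "HALLWAY":  ["ROOM_B", "BARRACKS", "LAB"],
    "BARRACKS": ["HALLWAY"],
    "LAB":      ["HALLWAY"],
}
print(find_route(base, "ROOM_A", "BARRACKS", blocked={"LAB"}))
# -> ['ROOM_A', 'ROOM_B', 'HALLWAY', 'BARRACKS']
```

Notice that nothing in the routine knows about ROOM A or the hallway; change the layout and the same code keeps working.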

With that complete, all you need is some rules for your AI-driven characters to follow:

  1. An AI behavior: If you see an enemy, you become alarmed.
  2. An AI behavior: If you see an ally who is alarmed, you become alarmed.
  3. An AI behavior: If you become alarmed and you are armed, head back towards the last known location of the enemy that started the alarm.
  4. An AI behavior: If you become alarmed and you are NOT armed, use that route-finding routine and run to your nearest living, non-incapacitated ally who is not already alarmed and cry for help. If they are not armed, find another one. If they ARE armed, stop where you are.
  5. An AI behavior: If you see an enemy and are armed, engage them in combat.

This is very, very simplified, and so is the sketch below. No nitpickery, please.
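
Here is a hedged, toy rendering of those five rules, continuing in the same throwaway Python as the earlier sketches. The NPC class, the tick loop, and the visibility test are all invented (and rule 5 is folded into rule 3 here); the point is that nothing in it mentions ROOM A, the hallway, or any particular layout:

```python
class NPC:
    def __init__(self, name, location, armed):
        self.name, self.location = name, location
        self.armed = armed
        self.alarmed = False
        self.alive = True


def tick(npcs, enemy_location, sees):
    """One pass of the rule set for every living NPC.

    `sees(npc, location)` is an assumed visibility test; `enemy_location`
    stands in for the enemy's last known position."""
    for npc in (n for n in npcs if n.alive):
        # Rules 1 & 2: seeing the enemy, or an alarmed ally, alarms you.
        if sees(npc, enemy_location) or any(
                other.alarmed and sees(npc, other.location)
                for other in npcs if other is not npc and other.alive):
            npc.alarmed = True
        if not npc.alarmed:
            continue
        if npc.armed:
            # Rules 3 & 5: armed and alarmed -> close on the enemy's last
            # known location (movement is instantaneous in this toy).
            npc.location = enemy_location
        else:
            # Rule 4 (simplified): unarmed and alarmed -> run to the first
            # living, un-alarmed ally you can find (distance is ignored
            # here); rule 2 will alarm them in turn on a later pass.
            for ally in npcs:
                if ally is not npc and ally.alive and not ally.alarmed:
                    npc.location = ally.location
                    break


npcs = [NPC("Scientist 1", "ROOM_A", armed=False),
        NPC("Scientist 2", "LAB", armed=False),
        NPC("Guard", "BARRACKS", armed=True)]
for _ in range(3):
    tick(npcs, enemy_location="ROOM_A",
         sees=lambda npc, loc: npc.location == loc)
for n in npcs:
    print(n.name, "->", n.location, "(alarmed)" if n.alarmed else "(calm)")
```

Note that nothing in the rules mentions scientists or guards specifically; the same tick loop drives both.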

That is a lot of code. Now let’s see how it does. Exact same setup as before:

The player kills the guard out in the HALLWAY, comes back, and startles the scientist. The scientist becomes “alarmed”. The scientist will then plan a route to their nearest still-functioning ally and run there. Let’s say that ally is another scientist. Once scientist 2 sees scientist 1, he also becomes alarmed. Now both of them plan a route to the nearest ally. Let’s say that ally is a guard. Scientist 1 gets there first, alarms the guard, and stops. Scientist 2 then picks another ally to run to, since his programming tells him he needs to find someone who isn’t already alarmed before he can stop. He finds another guard, alarms him, and stops.

Now there are two guards, in two different locations, closing in on ROOM A. The scientists have fled to safety behind the protection of the guards, who are surrounding the player.

This is emergent behavior. You didn’t write code to tell non-combatants to run around alerting other non-combatants. You didn’t write code to tell guards to try to attack the player from two different locations. You didn’t write code to tell the non-combatants to hide “behind” combatants. All of that just happened, as a result of the system you set up. This AI code has far fewer failure modes (although it is by no means foolproof!), it can be used by both the scientists and the guards, and it can be used and re-used without regard to the general layout of the “base”. If you add rooms, remove rooms, or change the connections between them, you don’t need to change the AI at all.

You could add another AI behavior: If you are alarmed, unarmed, and you find a weapon, pick it up!

At this point you have an awful lot going on. Killing a guard and leaving his weapon means that at some point a frightened scientist might come along, pick up the gun and begin behaving like a guard!

Add another one: If you are seriously injured and have a clear path to a non-alarmed, armed ally, go to them!

Now guards will retreat when injured if they can, and get reinforcements.
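
In a toy engine like the one sketched above, each of those additions is just one more rule in the pass over the NPCs. The weapon list and the injured flag here are invented attributes, assumed to be set by the rest of the game:

```python
def extra_rules(npc, npcs, weapon_locations):
    # New rule: alarmed, unarmed, and standing where a weapon was dropped?
    # Pick it up -- from here on this NPC behaves like a guard.
    if npc.alarmed and not npc.armed and npc.location in weapon_locations:
        weapon_locations.remove(npc.location)
        npc.armed = True
    # New rule: seriously injured? Fall back to a non-alarmed, armed ally.
    # (The "clear path" test is waved away here; more on that below.)
    if getattr(npc, "injured", False):
        for ally in npcs:
            if ally is not npc and ally.alive and ally.armed and not ally.alarmed:
                npc.location = ally.location
                break
```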

Make no mistake, the code I’ve described is by no means as simple as it sounds. It is far more work than writing many separate scripts. This is a major investment of coding time, and it just wasn’t possible under the budgets of ten years ago.

Doing stuff like “figuring out if you have a clear path to an ally” is non-trivial. Let’s go ahead and call it problematic. It means examining lines of fire and possible player movements and making a value judgment on the likelihood of escape. The AI will absolutely make the wrong choice sometimes, but for the most part we don’t mind, because humans frequently make bad choices in these situations too. As long as it can make the right choice more often than not, we will have a winning system. If the guard runs into the open heading for the door and gets cut down, it will just look like the guard “panicked”. The fact that this happens only sometimes will make the behavior seem even more plausible. There we have some more emergent behavior: unpredictability. We don’t have some randomizer in there: the unpredictability comes from the interaction between a bunch of different variables. This used to be called “fuzzy logic”, which is another buzzword that was drained of meaning and discarded.
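
Here is one hedged way that value judgment might be approximated, reusing find_route and the base room graph from the earlier sketch. The exposure metric, the health threshold, and the signature are all invented; a real game would weigh far more than this:

```python
def exposure(route, enemy_room, graph):
    """Crude 'line of fire' stand-in: count rooms on the route that are
    the enemy's room or adjacent to it."""
    risky = {enemy_room, *graph[enemy_room]}
    return sum(1 for room in route if room in risky)


def should_retreat(route, enemy_room, graph, health, max_exposure=1):
    # No randomness here: the decision flips as health, layout, and the
    # enemy's position interact -- which is where the unpredictability
    # (and the occasional "panicked" bad call) comes from.
    if route is None:
        return False
    return health < 30 and exposure(route, enemy_room, graph) <= max_exposure


route = find_route(base, "BARRACKS", "LAB", blocked=set())
print(should_retreat(route, enemy_room="ROOM_A", graph=base, health=20))  # True
```

Move the enemy to the HALLWAY and the same guard, at the same health, stays and fights: the variation falls out of the interacting variables, not a die roll.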

 

