By Shamus | Aug 30, 2013
As I play the game, I get this idea that a lot of AI problems are probably due to asymmetrical vision. Not all of them. It’s not that easy to make great AI. But there’s something inherently derpy about an enemy when you can see them and they can’t see you.
There’s a ’90s movie, Beverly Hills Ninja, where fat guy Chris Farley plays a ninja. There’s a bunch of shtick where he tries to hide like a ninja but fails because he’s huge. The humor (where applicable) comes from the idea that this 300-pound man is standing behind a floor lamp and thinks he’s hidden, when in reality he’s basically standing in the open. He’s so dumb! He thinks we can’t see him!
I’m noticing a lot of this in my game. Foes are parked behind a wall, waiting to ambush me. But instead of “Ooh, ambush!” I think, “Oh, idiot ninja that thinks I can’t see him.” These are some really dumb AI, but the thing that makes them look dumb isn’t their AI, it’s the fact that I can see them hiding.
So let’s experiment with the idea of restricting what the player can see to the things their character could see.
This doesn’t seem to be a common feature in 2D games. I know the original X-Com did it, along with most RTS games. There was a semi-obscure game back in 2000 called Nox that did this. I’m sure there have been others. But for the vast majority of 2D games, no attempt is made to reconcile player vision with character vision. In 3D this problem usually solves itself because player vision and character vision are the same thing (first person mode) or basically close enough (in a third person game) that we don’t need to worry about it.
I don’t know how those other games did it, but here’s what I’m thinking:
I’ll project a bunch of radial lines from the player, stopping when I hit some level geometry. This forms a perimeter of points that all have an open line between themselves and the player in the center.
You can use these points to draw a triangle “Fan” in OpenGL. You feed it the origin. (The player’s position.) Then you give it those radial points in order. When you get to the end, repeat the first radial point again to close the loop. This will create a filled region that covers everything the player should be able to see.
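In code, assembling that fan is just a matter of ordering the vertices. This is a sketch of the idea under my own names, not the game's actual code:

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Assemble the vertex list for a GL_TRIANGLE_FAN: origin first, then
// the perimeter points in angular order, then the first perimeter
// point repeated to close the loop.
std::vector<Vec2> BuildVisibilityFan(Vec2 origin, const std::vector<Vec2>& perimeter) {
    std::vector<Vec2> fan;
    fan.reserve(perimeter.size() + 2);
    fan.push_back(origin);
    for (const Vec2& p : perimeter)
        fan.push_back(p);
    if (!perimeter.empty())
        fan.push_back(perimeter.front()); // repeat the first point: closes the loop
    return fan;
}
```

Hand that list to OpenGL as a GL_TRIANGLE_FAN and it fills the whole visible region.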
But hold on a second there, code-monkey. Before you get out your big red crayon and start coloring in visible regions, we need to back up and figure out what we mean by “project radial lines”. It’s easy to say we’re going to do it, but how will we accomplish that, exactly?
We can take nice little baby-steps outward from the player, testing for wall collisions as we go. But if our visible range is, say, 4 units, and we’re taking steps of 0.1 or so, then it will take us 40 steps to get from the player to the edge of their vision, assuming we don’t meet any walls along the way. Assuming we’re projecting 360 radial lines (the image above is actually 2 degrees per line, for 180 individual lines) then that’s 14,400 little collision checks. This is not something to be done lightly, particularly not when you’re aiming for 60 frames a second.
We can make the steps larger, but then we run into this problem:
I’ve replaced the normal cave wall texture with a texture that shows the actual shape of the walls, just so we can see what we’re doing.
If our steps are too big, we run the risk of stepping over a narrow section of wall. One collision check happens on the near side of the wall, and the next one lands on the far side, and so we never run into it. This means the player will see through the wall somewhat unpredictably. As they zip around, those little collision dots will sometimes strike the wall and sometimes hop over it, and so that triangle will alternate between blocking sight and being transparent.
This won’t do. We’re doing too much work, and even at this herculean level of effort we’re still not accurate enough. Also, we have this problem:
That light boundary shows where the points are stopping when they do finally crash into the wall. Since they’re travelling outward from the player and since we’re taking big steps, those points kind of penetrate the walls in odd patterns. That light boundary will wiggle slightly as the player alters their distance from the wall.
What we need is to make our radial boundary much more accurate while also doing fewer checks.
A first easy step is to just look at the space we’re moving through. Remember that the world is built on a grid of squares. We can hop along, taking giant 1-unit steps. But if we look and see that the next hop will land us in a square with SOME form of wall geometry in it, then we instead take smaller steps.
That solves the really ugly problem where sitting in an open chamber would perform some horrific number of checks. Now we need to refine our collision and have it stop when it hits the edge of a wall, not somewhere below the surface.
What you do is you have it step forward until it hits a wall. Once it does, you go into collision mode, where you’re looking for the edge. You begin stepping backwards. Every time you pass through the wall (if you were hitting last hop and not this hop, or vice-versa) you reverse direction. Each step is half the distance of the previous one.
This lets you zero in on the edge with respectable accuracy. The more hops you’re willing to do, the more perfect the edge will be. We just need the edge to not wiggle around in distracting ways, so 2 or 3 collision hops is probably plenty.
We started out with the daunting task of doing 14,400 collision checks. With all of this in place we can do the same job in ~1,200.
We’re halfway there. Whew.
I get really nervous doing this kind of prototyping. I’m wary of things that have a large up-front cost and I won’t know if they’ll pan out until I’m nearly done. The eyes that I added last time were a trifle. If a fifteen minute change doesn’t work out, then it’s no big deal. But here we have a big complicated undertaking with multiple moving parts, performance concerns, and artistic worries.
I could get all the way to the end and find out this looks horrible. I might find out it looks okay, but there’s some side-effect I didn’t take into account that makes it impractical. Or maybe I’ll discover it looks great, works fine, but isn’t any dang fun.
Sometimes it’s hard to tell, even when you get to the end. If it doesn’t work I have to decide if it’s a good idea that needs more fussing or if it’s just a fundamentally flawed idea that should be scrapped. Or perhaps the idea is good, but my implementation is crap? Maybe a lines-of-sight mode would be groovy but my idea of projecting lines is terrible?
Well, we won’t know for sure until we finish. Back to work.
Now we have a polygon region that defines everything we should be able to see. All that’s left is to make sure that’s all that gets drawn. For this we use the OpenGL stencil buffer.
The stencil buffer is a strange beast. It lets you do stuff like:
- Okay OpenGL, I’m going to write to the stencil buffer now. (Draw a bunch of polygons.)
- Okay, I’m done writing to the buffer now. From now on, I want to only draw polygons in the region I just covered. (Draw other polygons.)
You can write to the stencil buffer. You can write to only certain bits in the stencil buffer. You can draw using the stencil as a mask. You can draw using only certain bits as a mask. You can write and draw at the same time, using different bit patterns, in order to both add to and subtract from the mask at the same time. You can draw to the stencil buffer but not the viewport. You can draw to the stencil buffer and also the viewport, and you can make all of this conditional on other, tangentially related systems.
There are a lot of moving parts. There are a lot of flags to set and values to define. A lot can go wrong, and if you mess up you’ll generally end up drawing everything or nothing, which isn’t very helpful in spotting the problem. The thing is so confusing that I can never remember how it works. I end up reading the NeHe stencil buffer tutorial every dang time.
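For reference, the two-pass setup looks roughly like this in old fixed-function OpenGL (the NeHe-tutorial style). This is a sketch, not the game's code: it assumes a GL context with a stencil buffer, and DrawVisibilityFan/DrawScene are stand-ins.

```cpp
glClear(GL_STENCIL_BUFFER_BIT);                      // fresh mask each frame

// Pass 1: write the visibility fan into the stencil buffer only.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // don't touch the viewport
glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // every fragment passes...
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           // ...and writes a 1
DrawVisibilityFan();                                 // the triangle fan from earlier

// Pass 2: draw the scene, but only where the stencil holds a 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);                    // pass only inside the mask
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);              // leave the stencil alone
DrawScene();

glDisable(GL_STENCIL_TEST);
```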
But I get it working. And when I draw the shape to the stencil buffer it ends up shaped like this:
You can see it’s still clipping through the walls a tiny bit. That’s fine. It doesn’t need to be pixel perfect. Once we get our normal wall texture in here you won’t be able to tell.
I draw in that region with a texture that gets darker on the edges, so it looks like I’m shining a light:
Now we get rid of the debug texture and put some robots in.
This accomplishes exactly what I was hoping. Robots can ambush you and actually take you by surprise, and when they duck behind a wall you can’t tell when and where they will pop up again. Gameplay is a little more paranoid and a little more surprising.
It’s kind of rare for an idea to pay off on the first try like this, but this is good enough that I’ve decided it’s a core part of the game. I’d be willing to cut other planned features if it means I get to keep this one.
Well, they don’t always work out this well, but it’s nice when they do.