As I play the game, I get this idea that a lot of AI problems are probably due to asymmetrical vision. Not all of them. It’s not that easy to make great AI. But there’s something inherently derpy about an enemy when you can see them and they can’t see you.
There’s a ’90s movie where fat guy Chris Farley plays a ninja. There’s a bunch of shtick where he tries to hide like a ninja but fails because he’s huge. The humor (where applicable) comes from the idea that this 300-pound man is standing behind a floor lamp and thinks he’s hidden, when in reality he’s basically standing in the open. He’s so dumb! He thinks we can’t see him!
I’m noticing a lot of this in my game. Foes are parked behind a wall, waiting to ambush me. But instead of “Ooh, ambush!” I think, “Oh, idiot ninja that thinks I can’t see him.” These are some really dumb AI, but the thing that makes them look dumb isn’t their AI, it’s the fact that I can see them hiding.
So let’s experiment with the idea of restricting what the player can see to the things their character could see.
This doesn’t seem to be a common feature in 2D games. I know the original X-Com did it, along with most RTS games. There was a semi-obscure game back in 2000 called Nox that did this. I’m sure there have been others. But for the vast majority of 2D games, no attempt is made to reconcile player vision with character vision. In 3D this problem usually solves itself because player vision and character vision are the same thing (first person mode) or basically close enough (in a third person game) that we don’t need to worry about it.
I don’t know how those other games did it, but here’s what I’m thinking:
I’ll project a bunch of radial lines from the player, stopping when I hit some level geometry. This forms a perimeter of points that all have an open line between themselves and the player in the center.
You can use these points to draw a triangle “Fan” in OpenGL. You feed it the origin. (The player’s position.) Then you give it those radial points in order. When you get to the end, repeat the first radial point again to close the loop. This will create a filled region that covers everything the player should be able to see.
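In rough sketch form, the fan is nothing more than a vertex list: the origin, the perimeter points in angular order, and the first perimeter point repeated at the end. (Python for clarity; the names here are illustrative, not actual code from the game.)

```python
import math

def build_fan(origin, perimeter_points):
    # GL_TRIANGLE_FAN vertex order: the center first, then the radial
    # hit points in angular order, then the first radial point again
    # to close the loop. N radial points yield N + 2 fan vertices.
    return [origin] + perimeter_points + [perimeter_points[0]]

# 360 radial points at 1-degree intervals, all unobstructed at range 1.
player = (0.0, 0.0)
points = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
          for a in range(360)]
fan = build_fan(player, points)
```

Feed that list to OpenGL as a triangle fan and the filled region covers everything the player can see.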
But hold on a second there, code-monkey. Before you get out your big red crayon and start coloring in visible regions, we need to back up and figure out what we mean by “project radial lines”. It’s easy to say we’re going to do it, but how will we accomplish that, exactly?
We can take nice little baby-steps outward from the player, testing for wall collisions as we go. But if our visible range is, say, 4 units, and we’re taking steps of 0.1 or so, then it will take us 40 steps to get from the player to the edge of their vision, assuming we don’t meet any walls along the way. If we’re projecting 360 radial lines (the image above is actually 2 degrees per line, for 180 individual lines), that’s 14,400 little collision checks. This is not something to be done lightly, particularly not when you’re aiming for 60 frames a second.
We can make the steps larger, but then we run into this problem:
I’ve replaced the normal cave wall texture with a texture that shows the actual shape of the walls, just so we can see what we’re doing.
If our steps are too big, we run the risk of stepping over a narrow section of wall. One collision check happens on the near side of the wall, and the next one lands on the far side, and so we never run into it. This means the player will see through the wall somewhat unpredictably. As they zip around, those little collision dots will sometimes strike the wall and sometimes hop over it, and so that triangle will alternate between blocking sight and being transparent.
This won’t do. We’re doing too much work, and even at this herculean level of effort we’re still not accurate enough. Also, we have this problem:
That light boundary shows where the points are stopping when they do finally crash into the wall. Since they’re travelling outward from the player and since we’re taking big steps, those points kind of penetrate the walls in odd patterns. That light boundary will wiggle slightly as the player alters their distance from the wall.
What we need is to make our radial boundary much more accurate while also doing fewer checks.
A first easy step is to just look at the space we’re moving through. Remember that the world is built on a grid of squares. We can hop along, taking giant 1-unit steps. But if we look and see that the next hop will land us in a square with SOME form of wall geometry in it, then we instead take smaller steps.
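That scheme might look something like this. (A sketch in Python; `cell_has_geometry` and `point_is_solid` are stand-ins for the real level-grid lookup and collision test, not actual functions from the game.)

```python
import math

def march_ray(origin, direction, max_dist,
              cell_has_geometry, point_is_solid,
              coarse=1.0, fine=0.1):
    # Step outward along the ray. If the next coarse hop would land in
    # a grid square containing wall geometry, drop to fine steps;
    # otherwise take the full 1-unit hop. Returns the distance of the
    # first solid hit, or max_dist if the ray stays clear.
    dist = 0.0
    while dist < max_dist:
        nx = origin[0] + direction[0] * (dist + coarse)
        ny = origin[1] + direction[1] * (dist + coarse)
        if cell_has_geometry(math.floor(nx), math.floor(ny)):
            dist = min(dist + fine, max_dist)
        else:
            dist = min(dist + coarse, max_dist)
        x = origin[0] + direction[0] * dist
        y = origin[1] + direction[1] * dist
        if point_is_solid(x, y):
            return dist
    return max_dist
```

In open space every hop is a full unit, so a clear 4-unit ray costs 4 checks instead of 40.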
That solves the really ugly problem where sitting in an open chamber would perform some horrific number of checks. Now we need to refine our collision and have it stop when it hits the edge of a wall, not somewhere below the surface.
What you do is you have it step forward until it hits a wall. Once it does, you go into collision mode, where you’re looking for the edge. You begin stepping backwards. Every time you pass through the wall (if you were hitting last hop and not this hop, or vice-versa) you reverse direction. Each step is half the distance of the previous one.
This lets you zero in on the edge with respectable accuracy. The more hops you’re willing to do, the more perfect the edge will be. We just need the edge to not wiggle around in distracting ways, so 2 or 3 collision hops is probably plenty.
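The back-and-forth halving looks something like this. (Again a Python sketch; `point_is_solid` is a stand-in for the real collision test.)

```python
def refine_edge(origin, direction, hit_dist, point_is_solid,
                step=0.1, hops=3):
    # We just landed inside a wall at hit_dist. Halve the step, back
    # up, and keep reversing direction every time we cross the
    # surface. Each hop halves the remaining error.
    dist = hit_dist
    inside = True
    for _ in range(hops):
        step *= 0.5
        dist += -step if inside else step
        x = origin[0] + direction[0] * dist
        y = origin[1] + direction[1] * dist
        inside = point_is_solid(x, y)
    return dist
```

With a 0.1 starting step, three hops puts the point within about 0.01 units of the true edge, which is plenty to stop the boundary from wiggling.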
We started out with the daunting task of doing 14,400 collision checks. With all of this in place we can do the same job in ~1,200.
We’re halfway there. Whew.
I get really nervous doing this kind of prototyping. I’m wary of things that have a large up-front cost and I won’t know if they’ll pan out until I’m nearly done. The eyes that I added last time were a trifle. If a fifteen minute change doesn’t work out, then it’s no big deal. But here we have a big complicated undertaking with multiple moving parts, performance concerns, and artistic worries.
I could get all the way to the end and find out this looks horrible. I might find out it looks okay, but there’s some side-effect I didn’t take into account that makes it impractical. Or maybe I’ll discover it looks great, works fine, but isn’t any dang fun.
Sometimes it’s hard to tell, even when you get to the end. If it doesn’t work I have to decide if it’s a good idea that needs more fussing or if it’s just a fundamentally flawed idea that should be scrapped. Or perhaps the idea is good, but my implementation is crap? Maybe a lines-of-sight mode would be groovy but my idea of projecting lines is terrible?
Well, we won’t know for sure until we finish. Back to work.
Now we have a polygon region that defines everything we should be able to see. All that’s left is to make sure that’s all that gets drawn. For this we use the OpenGL stencil buffer.
The stencil buffer is a strange beast. It lets you do stuff like:
- Okay OpenGL, I’m going to write to the stencil buffer now. (Draw a bunch of polygons.)
- Okay, I’m done writing to the buffer now. From now on, I want to only draw polygons in the region I just covered. (Draw other polygons.)
You can write to the stencil buffer. You can write to only certain bits in the stencil buffer. You can draw using the stencil as a mask. You can draw only using certain bits as a mask. You can write and draw at the same time, using different bit patterns, in order to both add to and subtract from the mask at the same time. You can draw to the stencil buffer but not the viewport. You can draw to the stencil buffer, and also the viewport, and you can make all of this conditional on other, tangentially related systems.
There are a lot of moving parts. There are a lot of flags to set and values to define. A lot can go wrong, and if you mess up you’ll generally end up drawing everything or nothing, which isn’t very helpful in spotting the problem. The thing is so confusing that I can never remember how it works. I end up reading the NeHe stencil buffer tutorial every dang time.
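Stripped of all the OpenGL flags, though, the two passes boil down to something simple. Here’s the masking logic simulated with plain 2-D arrays, just to show the idea (the real thing goes through glStencilFunc and glStencilOp; `visible` and `scene` are stand-ins for the fan geometry and normal rendering):

```python
def stencil_demo(width, height, visible, scene):
    stencil = [[0] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    # Pass 1: "draw" the fan with stencil writes on, color writes off.
    for y in range(height):
        for x in range(width):
            if visible(x, y):
                stencil[y][x] = 1
    # Pass 2: draw the scene normally, but the stencil test rejects
    # every pixel the fan didn't cover.
    for y in range(height):
        for x in range(width):
            if stencil[y][x] == 1:
                frame[y][x] = scene(x, y)
    return frame
```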
But I get it working. And when I draw the shape to the stencil buffer it ends up shaped like this:
You can see it’s still clipping through the walls a tiny bit. That’s fine. It doesn’t need to be pixel perfect. Once we get our normal wall texture in here you won’t be able to tell.
I draw in that region with a texture that gets darker on the edges, so it looks like I’m shining a light:
Let’s get rid of this debug texture and put some robots in.
This accomplishes exactly what I was hoping. Robots can ambush you and actually take you by surprise, and when they duck behind a wall you can’t tell when and where they will pop up again. Gameplay is a little more paranoid and a little more surprising.
It’s kind of rare for an idea to pay off on the first try like this, but this is good enough that I’ve decided it’s a core part of the game. I’d be willing to cut other planned features if it means I get to keep this one.
Well, they don’t always work out this well, but it’s nice when they do.
84 thoughts on “Project Good Robot 7: Lines of Sight”
The little teal-eyed robots look positively filled with glee at the prospect of getting to ambush you.
I’ve been thinking the same thing. They look so happy to do their jobs. “Teeheehee, here comes a robot! I’m gonna jump out from behind this wall at him!” “Well, I’m going to shoot him with lasers!” “Well, I’m going to compose a mocking song about him and sing it in front of that girl robot he has a secret crush on!”
I feel like I would get on well with the teal robots, aside from the whole murder thing.
Like a mixture between Portal 2 turrets and some other Portal 2 turrets, only with less swearing?
The suicide robot would really like to explode, explosions being the most fun thing ever, but his friends want to play a game of Worms and already got some juicy batteries and hot new apps for later. And suddenly their sensors pick up an enemy and suicide robot has to decide whether he does what he promised to his friends or follows his instincts. Will he blow up with the enemy because he can always play with his friends later? Oh the drama!
The suicide robots are Andy the Bomb from RvB?
That final screen shot instantly makes me feel a little tense and excited. It also gives a fantastic sense of movement and suspense.
One thing I’m curious about is the background texture of distant tunnels. I think it’s a great design decision. Very nice technique for giving a quick illusion of depth.
Yes it looks fantastic. I really hope Shamus goes through with the game so we can throw our money at him!
I have no experience with this kind of thing, and you don’t have time to prototype my guesses, but perhaps you can take an educated guess at:
how would this have gone differently if you assumed the player could see a circle, then for each square in the circle, if it contains walls, delete the “shadow” of the walls from the player vision?
I “think” you could find the shadow by drawing lines from the player to where the wall intersects the edge of the square, to where that line intersects the circle, but I could be way off (and am way off if you can have peninsulas in the squares, but it looks like all your squares have one dividing line)
I suspect that would tend to mean fewer calculations but more of them on the CPU, which is always a fussy trade-off. It also would be more expensive in very cluttered regions than open regions by quite a bit, since it would require calculating what is hidden instead of what is visible.
Shamus, I have to say if your actual coding is as far ahead of this series as you say, you must be getting close to having a ‘shippable game’ for a given value of ‘shippable’. Maybe not the best, or most varied, or most polished, but a lot of those things can be mitigated by price point – I would be annoyed to pay 5 dollars (a ballpark amount I pulled out of my ass on a comment a little while ago) for a game this unpolished, but I’d be fine with 2 dollars if it had procedural generation (a strong point of yours anyway) for virtually unlimited replay value of simple, strong game mechanics.
The first 90% of development is easy. It’s the second 90% that gets you.
And the third 90% of actually shipping the dang thing (storefronts, currency, support, marketing) is also hard work!
(that said, I would like a web log of that process — as Shamus did with e-book publishing)
I can’t help but notice that you were solving a problem of uneven edges on a map that is already made of straight lines connecting dots in known positions. If you’re committed to using marching squares (and having walls be the only thing that obstructs visibility), then I think you could achieve the same effect much more cheaply by casting those visibility lines only to the corners/nodes of the map. That would be a double-digit number of checks for the example in this post. (Disclaimer: I’ve never done anything like this.)
This is pretty much how I learned to do visibility testing.
This article is a really good explanation of the method.
That’s a really good link.
This looks awesome. I might prefer the obstructed areas darker for artistic reasons, but I think this is the perfect solution to the original conundrum. Now the question is whether to do the same for the AI, and have them guess where the player goes when line of sight is obstructed.
Isn’t this tendency to “overstep” while looking for boundaries (in space) exactly the problem with physics engine collision handling (but in time)?
Collision handling has been a mess every time I’ve tried it. The one time it was crucial was in a real physics simulation, so I could just let the thing run overnight. It feels considerably more daunting to try and do this in real time while hitting your 60fps mark.
The line of sight system automatically adds tension to any top-down/side-scrolling video game. Teleglitch uses it to great effect, and Mark Of The Ninja’s NG+ mode is far better for it as well.
This is looking fantastic. If it becomes a full game, I will be very interested to see how you frame the context, Shamus. I don’t mean a story or a campaign–just a smidgeon of worldbuilding, and justification for droid spelunking.
Exactly right. Physics engines in games work OK, until the point where precision issues become obvious and then it’s straight to hell.
An alternate idea might be to imagine that the robot isn’t perceiving things in our normal, visual spectrum, and that seeing it on the screen is merely a translation into the visual of what it is actually perceiving.
The robot could have a sonar-like “ping” radiating out at regular intervals, revealing locations, or a Doppler-radar sweep effect. One of the interesting effects of that is that it may be able to locate objects just barely out of visual perception (a small distance around a corner) because of radar/sonar bounce, but it would be hazy and not tell you any significant detail about what you’re seeing, just that something’s there.
Even better, perhaps one or both of those could be an *alternate* mode, such as a power-up that extends visual (and perhaps has longer range) or a back-up that kicks in when a certain kind of enemy weapon knocks out your lights for a while. It could also be used as the mode of vision for certain *kinds* of enemies (perhaps distinguishable by color of the eyes) which would cause more variation in enemy AI behavior along the edges of detectability.
Maybe you can even rob the ability from an enemy you destroy, but the effect only lasts as long as the batteries that came with the part, so you have to keep harvesting more to keep the power on. Maybe carrying the extra part slows you down as a trade-off, and you need to jettison it to pass certain areas where speed is crucial.
I was thinking something similar. The difference between a programmer and a writer (though our host is both). He sees the technical problem and says “how can I fix it?”
I see the technical problem and say, “oh, just say it’s radar. The wonky edge detection is the radar beam getting lost in the ground clutter and bending around corners.”
That last screenshot looks frickin’ awesome.
Like many others, I’m looking forward to giving you my money :)
A recent game with this 2D vision cone thing is Teleglitch (http://diemore.teleglitch.com/), which I haven’t played but I’ve heard pretty good things about. But they don’t even show the walls outside your vision, which makes it feel very claustrophobic (in a good way, I guess :) )
There’s also Mark of the Ninja, which is very good and available on Steam. (No demo, unfortunately.)
There is a demo on the 360 (as is required for all XBLA games)
An alternate way to do this would be to do reverse collision detection from every enemy back towards the player. Raycast a single line to the player’s position, check for collisions, slowly fade to invisible if found. 1,200 collisions becomes about ten.
You lose the nice lighting effect, but that can be accomplished by extruding terrain polygons away from the player’s position (to infinity). Think, shadow volume.
Clever, and probably better in most cases especially when it comes to the fade in/fade out effect–but Shamus’ method is a little more robust in terms of working with more features as they are added (I could see particle effects being difficult with this method, for example).
It’s fascinating to me to watch what was a programming decision become a design decision, it creates such interesting arguments for and against one or the other.
I was thinking the same thing at first, but I think you lose something important when you drop that lighting effect–namely, feedback for the player letting them know what they can see and what they can’t.
Further, I don’t know if the extrusion idea really works in practice–in order to do that, he would first have to determine exactly which polygons to extrude (unless he brute-force extrudes everything), and now we’re back to doing collision detection.
However if it does work, he could just do the extrusion and use the extrusion with the depth buffer to mask off his stencil buffer to get the exact same effect he has already created…
It would go back to collision detection, but it’s ~40 collision checks instead of ~1200. It also makes the shadow volume consistent without requiring any iterative refinement.
Why so much reduction? I can see the savings from not having to refine your collision detection to find the surface, but that’s only 3 or so checks per ray-trace that hits a surface. I would still think it would require checks on the order of hundreds unless there’s something to this I’m not seeing.
I do like the extrusion idea though, precisely because it makes the system more consistent.
You’d just draw lines directly between the player and the enemies, so in theory you’d need a lot fewer of them. However, Shamus apparently has enemies be partially visible instead of popping in, so more than one per enemy would be needed. Probably it’d save calculations until you hit some number of enemies, then would start costing them. The number would depend on how fine you want it.
But you still need to extrude the scenery to make the shadow effect, which I really think you need for player feedback. For that you either need to calculate what to extrude (back to collision detection, though it only needs to flag polygons to extrude, not refine the edges of them) or brute-force extrude everything (which would probably get expensive).
Only running collision detection to the enemies is definitely cheaper, but I think you need the lighting effect and implementing it puts you back to iterating through the scenery somehow (though I don’t think you need to ray-trace to do it, as I laid out below).
What about bullets or missiles? It seems like this system might end up “seeing” them go shooting down offshoot tunnels even though you couldn’t “see” the enemy lurking in the exact same place. That would be weird.
Either that or you’d have to check every bullet, and there goes your savings.
You’re essentially calculating a depth buffer from the point of view of your avatar… did you consider doing it in a similar fashion to OpenGL/the graphics card? As this is 2D you have line segments instead of triangles. What I’m thinking is: walls are basically line segments; get all line segments currently in the scene; then, project them onto your depth buffer. The depth buffer would be an array which maps angles (= directions) onto “nearest wall in that direction”.
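Something like this, just to sketch it (Python; the crude point-sampling along each segment is a placeholder for a proper segment-to-angle-range projection):

```python
import math

def angular_depth_buffer(player, segments, bins=360, max_dist=4.0):
    # One "depth" per direction bin: distance to the nearest wall in
    # that direction, initialized to the edge of vision. Each wall
    # segment ((x1, y1), (x2, y2)) is point-sampled, and each sample
    # is dropped into its angle bin, keeping the smallest distance.
    depth = [max_dist] * bins
    samples = 64
    for (x1, y1), (x2, y2) in segments:
        for i in range(samples + 1):
            t = i / samples
            x = x1 + (x2 - x1) * t - player[0]
            y = y1 + (y2 - y1) * t - player[1]
            d = math.hypot(x, y)
            ang = math.atan2(y, x) % (2.0 * math.pi)
            b = min(int(ang / (2.0 * math.pi) * bins), bins - 1)
            depth[b] = min(depth[b], d)
    return depth
```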
Curse you for posting this while I was typing :p
That, I think, is very similar to the technique I thought of when I saw the initial screenshot.
I would have projected out a series of lines instead of points. I gave my game engine tools to detect collisions between circles, rectangles, and lines. (No triangles yet; they will not be needed for this project.) Run collision detection on a set of lines, find all intersection points (intersections between two lines), grab only the ones which are closest to the player. Total number of collisions tested is LxW, where L is the number of sight lines, and W is the number of Wall objects in the player’s vision range.
A bit less, actually. My collision engine has a culling grid similar to the one Shamus demonstrated.
This strikes me as much more intensive and complicated than it needs to be. To me, it seems like you would be much better off using the vertices in the scenery than running a brute-force collision detection algorithm.
I would have to sit down for an hour or so to work out the math, but can’t you just gather all the vertices within a certain radius of the player and use simple (compared to 3D) geometry to do what you want with less fudging and fewer calculations?
EDIT: Just had an epiphany for the math part, so I gotta share it.
First, the scenery would have to have more math baked into it. Every edge needs a normal and a center point. That’s not hard to add with the way you generate terrain.
Draw a vector between the player and every edge center point within a certain radius. Dot product all these vectors with all the normals from the same edges, and throw out any with a positive (or negative, depending on how you define your vectors) product. The edges with the wrong sign are all the edges facing away from the player, while the edges with the right sign are the ones facing toward the player.
That will give you a list of edges that block sight. Now, draw a fan, using all those edges to mask it off where visibility is blocked. If there isn’t some sexy technique for using the depth buffer to do this, you might need to calculate some intersection points and do some sorting, but that’s not too hard with relatively few points. When you’re finished you should have your stencil buffer, with fewer calculations and better accuracy.
For the “Sexy depth buffer technique” draw some huge quadrilaterals with your edge endpoints projected way out, and use the depth buffer to mask the fan. Easy peasy, visibility is done.
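Roughly like this (a Python sketch; the sign convention for the normals, pointing out of the wall into open space, is an assumption):

```python
def facing_edges(player, edges):
    # Back-face cull: keep only edges whose outward normal points back
    # toward the player. edges is a list of (center, normal) pairs.
    kept = []
    for center, normal in edges:
        to_edge = (center[0] - player[0], center[1] - player[1])
        if to_edge[0] * normal[0] + to_edge[1] * normal[1] < 0:
            kept.append((center, normal))
    return kept

def shadow_quad(player, p1, p2, scale=100.0):
    # Extrude an edge away from the player: the near side of the quad
    # is the wall edge itself, the far side is each endpoint pushed
    # out `scale` times along its direction from the player.
    far1 = (player[0] + (p1[0] - player[0]) * scale,
            player[1] + (p1[1] - player[1]) * scale)
    far2 = (player[0] + (p2[0] - player[0]) * scale,
            player[1] + (p2[1] - player[1]) * scale)
    return (p1, p2, far2, far1)
```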
Yeah, this is what I was just thinking too. You don’t even need to bother with the center points, just do this for all the edges, it’s not really a problem if the back edges cast shadows.
And by “this”, I mean, start with the stencil buffer all visible, then draw shadows on it. The shadows can be QUAD_STRIP, where on one edge of the quad strip are the walls, on the other edge are the walls scaled out 100x from the center of the screen. (And then make sure you are always at least 1/100th screen size from the closest wall :) )
TA-DA! Stencil buffer done by just the GPU.
Interesting. I couldn’t follow some of the other proposals, but this one I get.
Unlike what I have now, this would do nothing when you’re in the open, instead of doing MORE checks. And the worst-case scenario would still be less intensive than what I have now. And it would indeed be pixel-perfect. And I could extend the visible range without worrying about performance cost.
I’ll put this on the list of things to try.
Yeah, I guess a lot of that math for finding the facing surfaces is only optimization when you use the depth buffer (originally it was necessary because I needed those edges to make the fan), but even then it’s easy optimization that only requires vector math, which the GPU is built for and Shamus is already familiar with.
Also, pure scaling makes me nervous. Do some math (either trig or normalize the vector) and make it a fixed length :P
EDIT: Odd, when I modified that first parenthetical statement, it moved me into the mod queue…
Well, but it is optimization that I’m not sure optimizes anything.
With my solution for every wall I need to draw a quad.
With your solution, for every wall, you need more data (or extra calculation on the fly) for the normal and the center. Then you need to do the check. Then if it’s a “good” wall, you draw it, which means more conditionals, and you probably lose the quad strips, which means more OpenGL calls.
I’m not saying it’s slower, but it’s not obviously faster, and the code is definitely more complicated.
As for the scaling, you’re probably right. I went with the scaling because I’m not sure about OpenGL, maybe you can change transforms during drawing a shape, or do it in a shader. But if done on the CPU, normalizing the vector probably makes more sense.
Heh, you replied to a post I edited that got moved to the moderation queue for some reason. Hopefully this will be straightened out when it’s released.
What I propose is a fairly simple culling algorithm which would reduce the number of polygons drawn. On the sample image from Shamus about two thirds of the quads would be dropped, so I’m pretty confident in saying it would improve optimization. However, given the relative simplicity of 2D rendering, the GPU will probably chew through these graphics like a beast even if you use the brute-force solution, so you’re probably right that the extra complication isn’t worth it.
At the same time, I think you are making it more complicated than what it is–which is fair, because it looks complicated when described in words. In practice, there’s only one matrix multiplication and a comparison and you’re done.
The level is built using tilesets, so the center-points, normals, and endpoints (handy for drawing quads) would be kept in an array. Compared to the space the actual tile textures take up, it’s not a lot of data. This data would need to be transformed to the player’s coordinate system (a coordinate system with the player at the origin) when the tile renders. Not difficult if 3D programming is your thing.
With that info, the algorithm is:
1) DotProduct = centerX * normalX + centerY * normalY
((Note: with matrix math, you can matrix multiply a matrix made up of center-points by a matrix made up of normals to get an array of dot products. The GPU is good at this))
2) if DotProduct > 0, DrawQuad(Endpoint1, Endpoint2, Project(Endpoint1), Project(Endpoint2))
That’s it. Only one conditional and very little crunch that needs done. Again, probably not worthwhile unless you’re positively desperate for more rendering time, but not all that hard to implement either.
EDIT: And…an edit got me put in the mod queue again. Am I doing something wrong?
I have no idea what’s up with the spam filter. It stuck you in moderation, I pulled it out, then you edited it, and it put it back in.
I actually wouldn’t mind this, but it let three pieces of VERY OBVIOUS SPAM through yesterday. I understand the filter being tripped up if it’s too sensitive, and I understand it letting things through if it’s not sensitive enough, but I should not be having BOTH problems.
if (rand() % 2) blockMessage();
Dear Ms. Shamis YOUNG,
I am writing to you on behalf of my dear President of Nigeria, who has been forced into hiding due to his technoligycal progress. He has developed an SPAM FILTER which will reduce the worlds spam traffic by 92.45% However this is not without coust and is very expensive as well. I have been authorized to offer you this SPAM FILTER code for a reasonalbe price of $3000,00 dollars US, or $20,000 Great Britain Pounds. Your money will free the President from hiding and allow his research to continue. I know I can count on you for help. Please, think of the President!
Fr. John Rider-Waite, Esq.
There is sensitivity, and there is specificity. It is possible to have neither.
You beat me to it :)
Another possibility that might be more or less complicated to implement (But I wouldn’t know):
Since we’re nominally in a 3D environment, would it be possible to extrude the visible (flat) geometry normal to the screen (so now you’ve got vertical faces normal to the screen along all edges), then create a z-buffer/visibility check using OpenGL’s own routines, with the viewpoint from the robot itself, 1 pixel high and with a 360-degree angle of view, and use that to know how far you can see in each direction? Same goes for determining what an enemy robot can see, if necessary.
This is probably more computing but it could be done on the graphics chip, and most of it is already coded.
… or it might be unnecessarily complex because I don’t know how you would efficiently convert the depth information from the robot’s perspective into a stencil. Should be possible doing a coordinate transformation (to polar coordinates: map z-buffer coordinate to angle, depth to distance, centered on the avatar), but this might after all require some smart use of shaders or using the CPU, in which case just using the vertex information as described by Abnaxis would be waaay easier.
… actually yes, it would be. But still, I had an idea so it had to be written down :)
Ah, I didn’t have the optimization for only worrying about edges facing you, nice.
Yeah, I was about to go in this direction. Since the world geometry has very few vertices, and those vertices will always be the ones blocking sight, using them should be much simpler.
While I enjoyed the article, I feel that it’s the typical case of premature optimisation. As Herb Sutter put it: “Don’t optimize before you have proof that it is required.” Especially to find out whether the game is fun with the new visibility system: Just implement the simplest approach you can think of, and test it. Chances are, it will be fast enough to be testable. At that point, you can still optimize. You (Shamus) have spent probably considerable time to write a clever algorithm for a problem you might not even need to solve.
Last, but not least, you’re doing ~1,500 search steps instead of ten times that. That’s obviously better. Or is it? Can you show that the 15,000 steps don’t perform better? Did you actually try the brute-force approach and see that it was way too slow? This is not a silly question; let me give an example from work:
I had to find a bunch (a few hundred) of intersections of rays and a 3D geometry of about 30,000 vertices. So I got myself a nice kd-tree, and started shooting rays at it, and it worked splendidly, except it wasn’t really very fast. When I profiled it, I realized that building the kd-tree took literally 95% of my runtime for the whole thing. As an experiment, I tried doing the intersecting the brute-force way: loop through every triangle in the geometry, check whether it intersects the ray. And lo and behold, it was faster to loop through the same 30k triangles a hundred times than it was to build a single kd-tree. I don’t know exactly why, but I suspect it’s mostly a matter of caching, because the loop could go through a linear array of stuff, while the kd-tree would deal with thousands of cache misses. After adding a similar normal-dot-product calculation that ignored back-faces, all the intersecting went from multiple seconds down into the 100ms range.
So I got a 20x speed boost by not optimizing.
In similar news, I spent some time with functional programming, where everything is kept in “slow” linear lists. Except it’s insanely fast in most cases, because these lists are actually cache-friendly arrays, and as long as you have little data (a few tens of thousands of entries), that’s really hard to beat, especially when you get multi-core for basically free, because you’re not bothered by the threads-and-locks model.
And I forgot to mention: I gained a further speed boost when I started to run more than one intersection at a time on the multicore CPU. It would have been nearly impossible to multi-thread the creation of a kd-tree.
The point is: sometimes the hardware doesn’t behave quite logically, and 10,000 operations can be much faster than 100, because the 10,000 can be better cached, branch-predicted, and processed eight at a time.
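A minimal 2D sketch of the brute-force loop this commenter is describing. The `Segment` type and the function names here are hypothetical, made up for illustration; the point is just that the whole wall array streams through the cache with no tree and no allocation:

```c
#include <stddef.h>

/* Hypothetical wall segment: two endpoints in 2D. */
typedef struct { float x1, y1, x2, y2; } Segment;

/* 2D cross product of vectors (ax,ay) and (bx,by). */
static float cross2(float ax, float ay, float bx, float by) {
    return ax * by - ay * bx;
}

/* Nonzero if segments a and b intersect, using the standard
   parametric test: solve a1 + t*r = b1 + u*s for t and u. */
int segments_intersect(Segment a, Segment b) {
    float rx = a.x2 - a.x1, ry = a.y2 - a.y1;
    float sx = b.x2 - b.x1, sy = b.y2 - b.y1;
    float denom = cross2(rx, ry, sx, sy);
    if (denom == 0.0f) return 0;          /* parallel: treat as no hit */
    float qpx = b.x1 - a.x1, qpy = b.y1 - a.y1;
    float t = cross2(qpx, qpy, sx, sy) / denom;
    float u = cross2(qpx, qpy, rx, ry) / denom;
    return t >= 0.0f && t <= 1.0f && u >= 0.0f && u <= 1.0f;
}

/* Brute force: walk the flat array once per sight ray. */
int line_of_sight_blocked(Segment ray, const Segment *walls, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (segments_intersect(ray, walls[i]))
            return 1;
    return 0;
}
```

For a few thousand wall segments this is exactly the kind of linear scan that tends to beat a pointer-chasing tree in practice.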
Ok, so I like the idea, and it makes sense that it would have a very positive impact on the gameplay, but I have to ask: if you’re not supposed to be able to see through walls, then why can you see the terrain behind the wall? It just seems incongruous, and if you’re limiting line of sight, it might make a lot more sense to the player if they can’t physically see around the corner at all, rather than being able to see the terrain around the corner but not know if anything is there.
Of course I haven’t played around with it, and the light sphere might actually register intuitively in the mind, but for me it’s odd that you see “empty” corridors on screen when something might be there. Seems like it would make more sense to see nothing at all, but I suppose then the game might feel too dark.
The other great thing about imposing limits like this is that it makes upgrade options (which I recall you saying you wanted to add) much easier later, because all you need to do is start removing restrictions. “Vision Upgrade 1” lets you see farther, upgrade 2 lets you see map shapes, vision 3 lets you know if there’s something behind a wall, etc. From a player’s perspective, you’re augmenting them, but from a programming perspective, you’re actually removing downgrades.
Are you allowing enemies to act as LOS-blockers as well, or would that screw up your approach? Because the idea of a “shield” enemy who blocked visibility to what was behind it could have interesting implications. Or static barricades blocking narrow passages which block LOS, but can be destroyed, that sort of thing.
Yes, this is definitely a good feature. The simple fact that it invites so much thought means it’s got value.
That is how e.g. Starcraft works, too. You already know the map, so it’s always displayed, but you cannot see what’s on it unless it’s in visible range of a unit.
I think it would be cool if out-of-sight terrain would only be shown if the player has seen it before at some point. But that would be another feature. And require a way to store what’s been seen before and what not, inside the whole level.
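Storing what’s been seen is cheap: one bit per map cell is enough. Here is a minimal sketch under assumed level dimensions (the names and sizes are hypothetical, not anything from Shamus’s code):

```c
#include <string.h>

/* Hypothetical level dimensions, in grid cells. */
#define LEVEL_W 256
#define LEVEL_H 256

/* One bit per cell: has the player ever seen this cell? */
static unsigned char seen[LEVEL_H][LEVEL_W / 8];

void mark_seen(int x, int y) {
    seen[y][x / 8] |= (unsigned char)(1u << (x % 8));
}

int was_seen(int x, int y) {
    return (seen[y][x / 8] >> (x % 8)) & 1;
}

void reset_map(void) { memset(seen, 0, sizeof seen); }
```

Each frame you’d call `mark_seen` for every cell inside the visibility fan, and draw terrain only where `was_seen` returns true. At 256×256 that’s only 8 KB per level.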
Depends on the “in world” assumptions.
Maybe this is a cave the robot has traveled before, that’s been invaded by killer robots recently. Maybe many Bothans died to get us a map. Maybe the robot has some terrain-sensing radar with a longer range than visible light (which allows visibility of terrain to somewhere beyond the boundary of the screen, but which can’t detect robots).
Or maybe it’s a “no, YOU shut up!” item that doesn’t make sense at all. :)
Agree with the meta-point that it’s interesting conversation.
I’ve played 2D games where you couldn’t see any of the landscape at all outside your field of view, and I didn’t like it. It was very hard to remember where you were relative to the environment. I was always getting lost and reexploring previous areas. If you have to hide the landscape, hide only what hasn’t been seen at all and make it come into view permanently after you’ve viewed it once. This is doubly true if there is an exploration component to the game.
I think the reason for this is the fact that there is always going to be a lot less variety and detail in a video game, especially a 2D game, than there is in real life. And all that detail helps you remember things and orient yourself. So being able to see the landscape everywhere is unrealistic, but it compensates for another unrealism, namely the lack of detail. Take away the landscape sight and the player feels crippled.
I think you see this a lot in games. Mario can change the arc of his jump in the middle of it (unrealistic), but this compensates for not having fine control over how hard he jumps. In the early Castlevania games you couldn’t change a jump at all once you started, but then you had no way of adjusting how far you went, and it felt crippling. In most games that require food, it keeps forever. Nethack added rotting food, but didn’t add anything to do with smelling it, looking at it, or tasting it to see if it was rotten; you just had to take a great big bite. “I wonder if this is rotten?” *CHOMP* *CHOMP* “Afraid so…” I found that very annoying. I’m sure there are other examples. Outside of explicit simulation games, gratuitous realism tends to suck.
“If you're not supposed to be able to see through walls, then why can you see the terrain behind the wall?”
I was about to post the same question. It just doesn’t feel quite right, when terrain is crisp all-round, and the enemies have such a hard cutoff. I’d experiment a lot with blurring the invisible terrain, or turning it into a radar-echo visual style.
Can we get wallpaper-sized versions of that second picture? It’s actually really beautiful.
Once you’ve found out which wall a ray collides with, how about, rather than stepping back and forth trying to figure out where the point of collision is, you just find the point of intersection between the ray and the wall? Since the wall and ray are just a line segment and a line, this shouldn’t be too hard.
Shamus isn’t drawing a line from the avatar to the wall. He has the system checking for a wall at increments along where that line would be. “Drawing a line” would equate to doing collision checks at every pixel along that line. The way he’s chosen to do it, he cuts down the number of collision checks considerably. If there is no wall at pixel 80 but there is one at 120, he has AT MOST 40 checks to run to find the edge (and there are “tricks” to use to reduce that, too).
Um. No, no collision checks needed to find the intersection of two lines. Just solve a system of two linear equations in two variables. It’s elementary algebra with no iteration required.
This is slightly less simple for lines with 2 dependent variables instead of 1 like you’re used to in algebra, but yes, the equations are still pretty trivial. Once I figured them out (for line-circle collision handling), I started using them everywhere because they’re trivial to copy and paste around your code.
In my case, the code block (from collisions) for finding the intersection between two lines is:
float a1 = line1point1.y - line1point2.y;
float b1 = line1point2.x - line1point1.x;
float c1 = -(a1 * line1point2.x) - (b1 * line1point2.y);
float a2 = line2point1.y - line2point2.y;
float b2 = line2point2.x - line2point1.x;
float c2 = -(a2 * line2point2.x) - (b2 * line2point2.y);
// Each line is now in the form ax + by + c = 0, so where they
// intersect: a1x + b1y + c1 = a2x + b2y + c2
// Scale the equations by a2 and a1 to eliminate x and solve for y:
//   a1a2x + b1a2y + c1a2 = a1a2x + a1b2y + a1c2
// Scale them by b2 and b1 to eliminate y and solve for x:
//   a1b2x + b1b2y + c1b2 = a2b1x + b1b2y + b1c2
//   x(a1b2 - a2b1) = b1c2 - c1b2
intersection1.x = ((b1 * c2) - (c1 * b2)) / ((a1 * b2) - (b1 * a2));
intersection1.y = ((a1 * c2) - (c1 * a2)) / ((a2 * b1) - (b2 * a1));
Note: I actually look for 2 intersection points. Just because a pair of lines intersect does NOT mean the segments I care about do, so I find the two closest points and then do circle/circle checks between them.
Also, this completely skips a section of code which detects whether the lines are parallel and uses a different check to see if they overlap.
Minor aesthetic quibble:
If you haven’t already done so in the last few weeks, you might want to try putting an alpha blur of a dozen pixels or so on the edge of your masking region, so that the critters fade a little bit at the edge of vision, rather than cutting off quite so abruptly.
Here’s a quick photoshop-job on the screenshot from above, to illustrate.
Of course, this is assuming that doing so is practical within OpenGL; my experience there is negligible, but I would have guessed you could make some region alpha-faded in much the same way that you change the texture colour…
/me goes back and rereads about “people trying to design your game through the art of passive-aggressive suggestions”, but can’t quite bring himself to delete the post
Wow, that does look pretty neat. It really makes the scene more tense, for such a small change.
Stencil buffering provided by fixed function OpenGL is an on/off proposition (either a pixel is drawn or it isn’t). If you fed the data from the stencil buffer to a pixel shader you could then alpha blend, but you’d need to pass enough information along to be able to determine how to blend it (ie: which direction does it fade out in?).
I don’t think that looks better, though. Corners block your vision sharply; there’s no fuzzy area where you kind of see things behind corners.
I agree with kaypy. In the last screenshot they register instinctively as things popping out behind a foreground object, not as something coming into view. Since you are looking at it from the top, the field of vision registers subconsciously as the beam of a flashlight, and those have soft edges because they diffract around corners. Adding the gradient adds a bit of a “diffraction” effect without having to do any raytracing. I suspect the end result would feel more like a circle of light that the enemies come into.
Another suggestion (since clearly you don’t have enough of those!) is to make the enemies “darker” as they approach the outside of your field of vision. By this I mean they get more transparent, and colors that aren’t black get closer to black. This will make them seem to “fade in” from a distance instead of appearing all at once. Their eyes should be visible from a longer distance and should fade out more slowly, with any luck enhancing the illusion that the eyes are glowing. It should also be really creepy having your first view of the monster be a dim glowing eye just on the edge of your field of view. This also lets you add a “fog” effect to some areas that makes the monsters fade out more quickly.
Now I have no idea how hard this is to implement so I would understand if this doesn’t get implemented. And I also know that it might already be implemented, or some other feature had been added that renders this irrelevant. It’s kind of like watching a game show on TV: You know they can’t hear you, but it’s fun to try to say the answer anyway.
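The distance-based fade being suggested is just a linear ramp at the edge of the vision radius. A hedged sketch (function name and parameters are hypothetical; the real radii would come from the game):

```c
/* Fade factor for an enemy at distance d from the player:
   1.0 when well inside the vision radius, falling linearly to
   0.0 across a fade band at the edge of vision. */
float vision_fade(float d, float vision_radius, float fade_band) {
    if (d <= vision_radius - fade_band) return 1.0f;  /* fully visible */
    if (d >= vision_radius) return 0.0f;              /* fully hidden  */
    return (vision_radius - d) / fade_band;
}
```

You’d multiply the sprite’s alpha by this factor, and for the “darker” effect, scale the non-black color channels toward zero by the same factor; a separate, wider band for the eyes would make them linger longer.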
I know you’re not soliciting advice for gameplay or anything, but I was just thinking through some of the gameplay possibilities this system opens up.
It might be fun to try a “radar pulse” item. A one-shot gizmo that you’d have to pick up or purchase or allow to recharge or something that reveals all enemies on screen for a second or so. Then maybe they disappear again, or maybe they leave a “last known location” indicator on the screen. Maybe it doesn’t show you the enemy, just a blob so you can’t tell what type of foe awaits.
You could also experiment with a “weapon” that lets you see through obstacles but gives you complete tunnel vision: you can see where everything is inside the narrow cone you’re scanning, but nothing outside it, and your turning speed is limited. Maybe this weapon can only be used for a limited number of seconds before being totally drained.
Experiment with e.g. allowing only the eyes (or only part of the eye) be visible in some cases, like in dark areas where you don’t have a light.
If you want to determine if two lines intersect, there’s a pretty straightforward bit of math you can use. See “Line-line intersection” on Wikipedia.
Basically, if you know two different points on one line, (X1, Y1) and (X2, Y2), and two different points on another line, (X3, Y3) and (X4, Y4), and want to see if they intersect, first solve this:
P = ((X1-X2)*(Y3-Y4)) - ((Y1-Y2)*(X3-X4))
If P equals zero, then the lines are parallel and will never intersect.
If P is not zero, then the point where the lines intersect, (X5, Y5), will be:
X5 = ( ( ((X1*Y2)-(Y1*X2)) * (X3-X4) ) - ( (X1-X2) * ((X3*Y4)-(Y3*X4)) ) ) / P
Y5 = ( ( ((X1*Y2)-(Y1*X2)) * (Y3-Y4) ) - ( (Y1-Y2) * ((X3*Y4)-(Y3*X4)) ) ) / P
Note that the first and last parts of these equations are repeated in both equations, so you could write the code so that they’re only solved once, instead of twice. In fact, since the X & Y for each square of geometry remains consistent, you could solve those equations for each type of geometry once at the beginning of the game, and then you’d just have to shift the line coming out from your robot to use the same coordinate system.
So, if (X5, Y5) is inside the geometry’s grid square, then you know exactly where the lines collide in that box.
I suppose I should mention that this is a matrix equation, so you might be able to get the GPU to solve it for you faster.
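Those formulas translate directly into a short function. A sketch in C (the function name and out-parameters are made up; the math is the standard two-point line-line intersection quoted above, including the parallel check):

```c
/* Intersection of two infinite lines, each given by two points.
   Returns 0 when the lines are parallel (P == 0); otherwise
   returns 1 and writes the intersection point to (*x5, *y5). */
int line_intersect(float x1, float y1, float x2, float y2,
                   float x3, float y3, float x4, float y4,
                   float *x5, float *y5) {
    float p = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
    if (p == 0.0f) return 0;          /* parallel lines never meet */
    float d1 = x1 * y2 - y1 * x2;     /* shared sub-terms, solved  */
    float d2 = x3 * y4 - y3 * x4;     /* once as the comment notes */
    *x5 = (d1 * (x3 - x4) - (x1 - x2) * d2) / p;
    *y5 = (d1 * (y3 - y4) - (y1 - y2) * d2) / p;
    return 1;
}
```

Note this intersects the infinite lines; you’d still clamp the result to the grid square (or segment) you care about, as the comment describes.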
This is one of those areas that drives me crazy when coding. I’ll spend a load of time optimising a feature I end up scrapping. So for the next feature, I just make a “good enough” version to try out; if it works, I can go back and optimise it later. But then I don’t know if the feature will work out until I implement another feature, which then hinges on another feature. Eventually my code becomes a horrible tangled mess. When I try to untangle and optimise it, I end up cutting slight corners (I don’t need to update line of sight to static objects so frequently, or at all), but the other code is relying on those checks.
Hey, Shamus, if you’re going to have the PC be surrounded by a field of light, you really need to change the player avatar to not be black. Black things don’t glow. It looks really odd to me now. If you made the player avatar white/yellow (or some other bright color) that would work great, particularly if you added a slightly yellow cast to the “lit” area.
Maybe it’s not a big deal, but to me there’s just currently a big disconnect between what the avatar looks like and this light it’s purportedly carrying around with it.
Heck, maybe you could even add some “story” (in the form of a text box at the start of a given “mission”) to the effect that you have to carry this Golden Sunshine Generator to X location for some reason. Maybe the light is more like the clouds of sparkles in Final Fantasy: Spirits Within that revealed the ghosts. This would make sense as to why it lights up the enemies but you can still see the terrain just fine.
The way enemy robots get cut off in straight lines looks awkward. Maybe you should have a transparency near the edge or some kind of blurring to make it less of a straight line.
It’s a bit weird that the level geometry is visible outside line of sight, but the enemies aren’t. I could see that being potentially disorienting.
I had this same problem. It is kind of disorienting. My suggestion (and even if the level geometry is hidden later on, I’d still like to see this added): any robots that are partially visible should show a silhouette for the unseen portions. The line just cutting them off looks off…
Making the avatar a yellow or a golden colour, as someone suggested, would I believe add a great psychological effect, because it would make Them very different from You.
You are Bright, surrounded by things that are Dark, Different and Hiding.
It would be pretty easy to see them as the Evil Enemy at that point :)
What about keeping the avatar dark while making the enemies bright?
Every time I think I’m a good coder who knows what he is doing, I read one of these posts and realize just how much more I have to learn before I can claim to be anything approaching an expert.
I have to wonder Shamus, how much time do you typically spend doing research while you are coding? Do you pull most of this stuff out of your knowledge banks, or do you spend a significant amount of time browsing forum threads and such, still?
I find I can start projects and I do really well for a week or two, then I find myself spending so much time trying to learn the techniques of what I’m doing that the project just grinds to a halt. I’m not sure how to break through this knowledge barrier gracefully, other than just beating my head against it until I learn it.
Why scan the entire area around the player? Presumably the game already knows where the player is, and where all the enemies are… I would imagine? So just test lines between the player and the enemies to see if they go through world geometry. If they do, enemy not visible. If they don’t, enemy is visible?
I imagine this has to do, at least in part, with the last picture in the post: you can see that some enemies are only partly visible; therefore, something more complicated than just “Is the enemy behind level geometry?” is necessary, e.g. “Is some part of the enemy behind level geometry?” On the other hand, it might be possible to check two points on the enemy’s polygon and then do a more detailed check if either of the two gets a hit. (Or maybe three or four points, so the positions of the points don’t have to migrate across the polys as the enemies’ positions relative to the player change.) On the other other hand, I don’t know programming well or graphics programming at all, so perhaps this is actually a performance issue.
It’s also possible that this is a result of Shamus’ desire to lock in at 60fps. In general, the method Shamus has described will have a similar number of calculations at most locations in the levels he’s going for, while the method you suggest, as well as my alternatives, have a number of calculations dependent on the number of enemies… not onscreen… existing within the area of the screen? The point of this is kind of that some of those won’t be onscreen, which is making terminology a little difficult.
Anyway, while the overhead per enemy is probably low, I imagine that some parts of the game may flood you with enemies. Also, even for low-overhead operations, it’s good to have a fairly constant overhead so as to know how much you have left to spare when you add something else that might take more resources, which is, again, important when you’re locking in to 60fps.
On the other hand, Shamus may just not have thought of it. While he’s a good programmer (and probably better than I’ll ever be), he can’t think of everything. Or maybe that’s just not what he wants to do. While he keeps referring to this project as a “game” and talks about potential “players,” all of his programming projects are mostly for his own enjoyment in seeing how his coding ideas turn out. An actual playable game, if one ever materializes, would be just a side benefit, hence his comment in an earlier article about being herded by passive-aggressive suggestions in the comments… which I hope Shamus doesn’t take my suggestions here as. Like I said, I know next to nothing about graphics programming! (insert passive-aggressive musings about programming a 2d game in a 3d engine like OpenGL as opposed to SDL or SFML)
At this rate Shamus, you might be moving to a cute little shareware Indie game.
I certainly like the way it’s going.
I got one suggestion though (I think somebody else in the comments hinted at this too).
The sight scan/area should not reveal walls, or anything behind walls…
“UNLESS” the player has been there before.
As the player moves around, the underground gets mapped.
And maybe add an outline to anything that has moved, to show where they were last seen (but no longer are).
It’s looking pretty sweet. However, it still looks a little strange, as if the robots are materializing into existence. It might look less jarring if you could see the whole robot as soon as part of it reached your line of sight…
You absolutely want to take advantage of your knowledge of the terrain when doing these calculations.
All you need to check is the line from the player to the terrain corners. (You KNOW there’s a straight line between the corners, so there’s your triangles. Corner->Player->Corner)
There’s a good article on the algorithm here –> http://www.redblobgames.com/articles/visibility/
Looking at that last image I have two questions/suggestions:
1) Can the edges of the visible sections be made fuzzy? i.e. penumbra blurry edges, not perfectly smooth. It may actually look better style-wise with the sharp edges that you have in that image… but thought I’d mention it for you to at least try.
2) You mentioned laser in one hand, missiles in the other. Do you have a control to turn your robot 180 degrees? Not sure what your gameplay ideas are, but I think that might be fun. Have missiles have a slower rate of fire, or be a limited resource, or similar, which makes “switching hands” necessary sometimes.
Anyway, two ideas I wanted to at least mention.
Have you ever heard of the game Monaco? It uses this sort of topdown line of sight idea.
I would also agree that the cutoff between visible and invisible should be blurred a little.
Anyway, I’d love to play this game so I hope you make it available!
This is looking fantastic! I do hope that you release this game ( even if only in source format — actually, especially in source format, I’d love to learn a few things from your code )
+1 to recommend both Teleglitch and Mark of the Ninja, which both do hidden enemies very well.