A running theme of this project is that 2D game development is like programming in Easy Mode. Everything takes less code, requires fewer steps, uses less CPU / memory, and has a larger margin for error. It’s amazing to be able to just make something without constantly getting snagged on annoying tech issues, performance trade-offs, and gameplay compromises.
Take collision detection, for example.
Early in the project I used rectangle-based collision. If I shot at a bad robot, the game would check for a collision between the bullet and the square region where it was drawing the enemy.
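That rectangle check is about as simple as collision gets. Here's a minimal sketch in Python (the function and parameter names are mine, not from the project):

```python
def bullet_hits_rect(bx, by, rect_x, rect_y, rect_w, rect_h):
    """Treat the bullet as a point and the enemy as an axis-aligned box.
    A hit is just four comparisons."""
    return (rect_x <= bx <= rect_x + rect_w and
            rect_y <= by <= rect_y + rect_h)
```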
|Okay, pixels don’t suffer zooming nearly as gracefully as polygons. Then again, you can fix this by just not letting the player zoom in too far. Good luck keeping the player from looking at the walls too closely in your FPS.|
But this is a crappy solution. Basically every enemy is shaped like a box for the purposes of collision. That’s good enough when shots hit center-mass, but it’s really unsatisfying to have bullets score a hit when they pass through the (usually empty) corners of the rectangle. Unless I’m going to make all the foes square, this isn’t a viable technique.
So I change it to check using distance calculations:
|Now I only score a hit if my shot enters the purple circle.|
This turns the robot hit zone into a circle. Now, this still isn’t perfect. Sometimes bits of the robot poke out from this circle, and bullets go through those points. But this is hard to notice, and you can’t really see it happening unless you catch it in a screenshot. This is basically good enough.
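The distance check is barely more code than the rectangle version. A sketch, again with my own names; comparing squared distances avoids a square root per test:

```python
def bullet_hits_circle(bx, by, cx, cy, radius):
    """Hit if the bullet is within `radius` of the robot's center.
    Squared distances are compared so no sqrt is needed."""
    dx = bx - cx
    dy = by - cy
    return dx * dx + dy * dy <= radius * radius
```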
Well, until we get here:
Neither of these collision systems is good enough in this case. This boss is very large and its shape is complex. Having bullets pass through parts of it is frustrating for the player. Having bullets strike empty air is unsatisfying. Neither case is acceptable. There’s no way around it. We need pixel-accurate hit detection.
This NeHe tutorial is the usual go-to approach for hobbyists. It’s pretty brute-force, but it gets the job done.
In the abstract, it works like this:
You zoom in with the rendering camera so that the bullet (or whatever point you’re checking for collision) would fill the entire screen. Then you tell OpenGL to go into “select” mode. Now start drawing stuff that you think this bullet might be crashing into. Give each one some sort of identifying number.
Okay, OpenGL, here is object #1. (Draw the first thing you think the bullet might crash into. It might be one polygon or ten thousand. OpenGL will treat all of them as “Object #1”.)
Then you give it object #2, object #3, and as many other objects as you’d like to check.
OpenGL won’t actually draw this stuff. Instead, it will keep a list of every object that lands inside the screen. (And remember the screen is super-zoomed in on the bullet, so anything that is within the screen is touching the bullet.) When you’re done, take OpenGL out of select mode and it will give you the list of stuff that would have been drawn. You then have a list of all the stuff the bullet is touching, and can respond accordingly.
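Setting the actual OpenGL calls aside, the bookkeeping amounts to something like this toy stand-in, where the “screen” is the zoomed-in viewport around the bullet and each candidate object gets an ID (this is an illustration of the idea in plain Python, not real select-mode code):

```python
def select_mode_hits(bullet_x, bullet_y, objects):
    """Toy version of OpenGL select mode: 'draw' each candidate and record
    the IDs of those that land inside the zoomed-in viewport (here reduced
    to the bullet's point).  Each object is (id, x, y, w, h)."""
    hits = []
    for obj_id, x, y, w, h in objects:
        if x <= bullet_x <= x + w and y <= bullet_y <= y + h:
            hits.append(obj_id)
    return hits
```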
Now, the most obvious objection to this approach is that it’s slow. You’re basically doing a little rendering loop for every single bullet in play. That sort of thing can get out of control fast. I also dislike it because it’s really cumbersome. It takes a lot of lines of code to make this happen. You need to do all the math to zoom the camera, you need to set up select mode, you need to gather up likely candidates for collision, you need to perform some sort of abbreviated render loop to draw those candidates, you need to extract the list from OpenGL and sort through it, then you need to clean everything up and put the camera back where it belongs.
In a 3D game, things are even worse. We’d have to take some other approach entirely. It would be time to bust out the trig textbooks and start intersecting lines (bullet paths) with planes (the polygons of all the stuff you’re shooting) to find out which dudes the player has shot. (Or whatever stuff is running into other stuff in this game.) This is also slow, complicated, and really annoying to debug.
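For the curious, the line-versus-plane math being alluded to looks roughly like this (a textbook sketch, not code from any particular engine):

```python
def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Return the point where a ray (bullet path) meets a plane (polygon),
    or None if the ray is parallel to the plane or the plane is behind it.
    All vectors are (x, y, z) tuples."""
    dot = lambda a, b: sum(i * j for i, j in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # bullet path runs parallel to the polygon's plane
    to_plane = [p - o for p, o in zip(plane_point, origin)]
    t = dot(to_plane, plane_normal) / denom
    if t < 0:
        return None  # plane is behind the bullet
    return tuple(o + t * d for o, d in zip(origin, direction))
```

And that's just the plane; you still have to check whether the intersection point falls inside the triangle, for every triangle of every candidate object.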
But we’re working in 2D, and we’re drawing all of our robots from a single texture. Right now, that texture looks like this:
|For the purposes of illustration, I’ve made the transparent areas magenta.|
This gives me an idea. Normally I load this texture into memory, hand it off to OpenGL, and then throw it away after OpenGL makes a copy for itself. But instead of throwing it away, I’m going to hang onto it. And I’m going to switch back to using the old rectangle-based collision I started with. So let’s say the player shoots at a robot:
For the purposes of what we’re doing here, our bullets are a single pixel in size, even if they’re visually massive and giving off a little cloud of glowing particle effects to make them seem even larger. If we detect our bullet has landed inside of the rectangle of a robot, then we ask the robot where it’s drawing from on the sprite sheet:
How far is the bullet from the upper-left corner of the hit box? 25% of the way across and 74% of the way down? That would mean the bullet is touching this single pixel of the robot’s texture:
I look at that pixel in the texture data I saved earlier and check to see if it’s opaque. It is? Then the player just shot this robot.
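The whole technique boils down to mapping the bullet's position in the hit box to a texel in the saved image and testing its alpha. A sketch under my own assumptions (rectangles are `(x, y, w, h)` tuples, and `pixels[y][x]` is the alpha value of the saved texture):

```python
def pixel_perfect_hit(bullet_x, bullet_y, robot_rect, sprite_rect, pixels):
    """Map the bullet's position inside the robot's on-screen hit box to a
    texel on the saved sprite sheet, and hit only if that texel is opaque."""
    rx, ry, rw, rh = robot_rect
    u = (bullet_x - rx) / rw   # fraction of the way across the hit box
    v = (bullet_y - ry) / rh   # fraction of the way down the hit box
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return False           # bullet missed the rectangle entirely
    sx, sy, sw, sh = sprite_rect
    tx = sx + min(int(u * sw), sw - 1)
    ty = sy + min(int(v * sh), sh - 1)
    return pixels[ty][tx] > 0  # opaque pixel means the robot was really hit
```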
Now we have pixel-perfect collision for basically free, in just a couple of lines of code.
The only problem is this:
This is some sort of spinning… sawblade… cutty-thing. I’m going to re-do the art for this one. Someone pointed out it looks kind of swastika-ish, and not like “shuriken made of scythes”, which is what I was kind of going for. Of course, I might re-do the art for everything. The robots are basically just the point where my prototypes looked acceptable enough that I could ignore them and go back to coding.
At any rate, this reveals a slight problem with collision. The blades are spinning very fast. The bullets are moving very fast. The robot is moving fast. And outside of that circle in the center, the robot’s texture is about 80% empty space. Which means if bullets pass through the area where the blades are, they only have a 20% chance of landing on a solid pixel each frame. If the bullet spends 5 frames travelling through the robot’s space, that means it still has a ~33% chance of passing all the way through without hitting it at all.
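The arithmetic behind that figure: if a frame in the blade area has a 20% chance of landing on a solid pixel, then slipping through every one of 5 frames happens 0.8⁵ of the time.

```python
# Chance of a bullet sailing clean through the blade area over 5 frames,
# given an 80% chance of hitting empty space on each individual frame.
miss_chance = 0.8 ** 5
print(round(miss_chance, 3))
```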
You could excuse this by saying it’s like shooting through helicopter blades: Some rounds are bound to slip by. But it doesn’t feel right. It feels like collision detection is broken.
There are a bunch of ways I could fix this. The most obvious would be to have it trace all the points along the bullet’s path if it finds itself in the hit box of a robot. That would help, but I think there would still be a chance for the bullet to pass through when the blades are moving fast. A lazier way would be to make some of that empty dead space between the blades semi-opaque. An alpha of 1 (out of 255) would be invisible to the player but would cause bullets to collide the way your eye expects.
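The path-tracing idea would look something like this sketch: sample points along the bullet's travel for the frame instead of testing only where it ends up. (`hit_test` here is a hypothetical stand-in for whatever per-pixel check the game already does.)

```python
import math

def swept_hit(x0, y0, x1, y1, hit_test, step=1.0):
    """Sample points along the bullet's travel this frame, roughly one per
    `step` units of distance, and report a hit if any sample connects.
    `hit_test(x, y)` is the game's existing per-pixel collision check."""
    dist = math.hypot(x1 - x0, y1 - y0)
    steps = max(1, int(dist / step))
    for i in range(steps + 1):
        t = i / steps
        if hit_test(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t):
            return True
    return False
```

Even this only samples discrete points in time, which is why a fast-spinning blade could still sneak between samples.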
Ah well. When in doubt, put it off. I can make this call later after I’m sure I’m done messing with the art.