Normal mapping is the next step. It’s been a mainstay of AAA graphics since 2004. (A banner year for technology: both Half-Life 2 and Doom 3 shipped that year. While they weren’t the first games to use the technique, both were really great showcases for normal mapping.) It’s one of the rare effects that I think justifies the horsepower that goes into it. I’ve never been stunned by depth of field effects, fullscreen anti-aliasing, or a lot of the other fancy-pants effects that required a new graphics card generation just for a bit of “Hm. That’s cool I guess.” visual flair. But normal mapping? Normal mapping is an honestly clever technique that solves all kinds of problems.
In the past I’ve sloppily used the terms “bump map” and “normal map” interchangeably. I’ve always disliked talking about “normal maps” in these non-technical writeups because I didn’t want to have to stop every time and explain what a “surface normal” is. Without clarification, the reader is likely to assume a “normal map” has something to do with making things appear normal. Perhaps there are abnormal maps? The term “bump map” is just easier for the reader to grasp. (Also because I used to get them mixed up all the time. Nothing is better at helping you nail down concepts than having to explain them to someone else.)
But now we’re working directly with the concept, so after committing years of sloppy terminology abuse we’re going to make an effort to get things right.
I’ve explained normals recently, so go read that if you want the long explanation.
So the problem is that we want worlds with lots of detail. If Gordon Freeman walks up to a wall, we expect the bricks on that wall to look 3D. If a light is shining down the wall, it should strike the tops of the bricks and not the underside. But we don’t want to have our artists build thousands and thousands of bricks just to create a simple room. Even if the graphics hardware can handle drawing them, that’s still not a great use of artist time. And even if we had unlimited artists, it would be incredibly difficult to have each and every room in the game contain Pixar-levels of extreme detail for every element in the scene. (Actually, a full-poly scene in a videogame would be WORSE than the same scene in a Pixar-type movie. In a movie, the author controls the camera and you can cut corners on the stuff that isn’t viewed up close. In a game, the audience controls the camera, so EVERYTHING has to be high detail.) Even if development costs and rendering power are infinite, you still have to worry about distributing the game, load times, physics systems, memory usage, and a dozen other things that prevent us from solving every problem with MOAR POLYGONZ!
|Here Half-Life 2 shows us what the normal map looks like instead of using it to light the scene. I was surprised at how many objects in the scene aren’t normal mapped. (Everything that’s not ghostly blue.) Curiously, it seems to be the objects with the most detail that lack normal maps. I suppose this was a limitation of the day.|
So what we do is use a special texture called a normal map. It lines up with the texture (the picture of the bricks we want to draw) and describes the shape of our fake bricks. Instead of using the shape of the (perfectly flat) wall to light the texture, we use the normal map.
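As a rough sketch of that idea (in Python rather than actual shader code, with made-up vectors, and using simple Lambertian diffuse lighting), here's what "lighting the texture from the normal map instead of the flat wall" boils down to:

```python
# Sketch: per-pixel diffuse lighting where the normal comes from a normal
# map instead of the flat wall's geometric normal. Vectors are made up.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def diffuse(normal, light_dir):
    """Lambertian term: how directly this pixel faces the light."""
    return max(0.0, dot(normalize(normal), normalize(light_dir)))

flat_normal = (0.0, 0.0, 1.0)      # the real geometry: perfectly flat
bumped_normal = (0.5, 0.0, 0.866)  # from the normal map: tilted toward the light

light = (1.0, 0.0, 1.0)            # light coming in at an angle

# The flat wall gets one uniform brightness everywhere. The mapped normal
# brightens pixels tilted toward the light and darkens the ones tilted away,
# which is what makes the fake bricks read as 3D.
print(diffuse(flat_normal, light))
print(diffuse(bumped_normal, light))
```

Every pixel on the flat wall produces the same value, but the normal map gives each pixel its own facing, and therefore its own brightness.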
This should give you an idea of just how big a difference normal maps can make. Check out the keyboard, face, and hands:
|On the right is the normal-mapped view like we get in the completed game. Left shows us the “real” polygon structure. Click for Enlargified version.|
This is from Doom 3. You can see that in terms of polygon density, we’re not that far ahead of games like Deus Ex. Mitten hands. Triangle noses. Boxy scenery. But because of the normal maps this 2004 game looks closer to 2014 than it does to its quasi-contemporaries of the preceding years.
So we’re using color values to store spatial data. Because of this, normal maps look kind of odd. Each color channel represents an axis. (A color channel can’t hold a negative number, so the -1 to 1 range of each axis gets squashed into the 0 to 255 range of the channel.) If we think of the normal map as a tile on the floor, then red is the west-to-east axis. If a pixel is facing west, there’s no red, and if it’s facing east it has maximum red. The green channel does the same for north and south. The blue axis points out of the texture map, going “up”. This is why normal maps tend to look so blue. A perfectly flat tile would be solid blue, since all the normals would point up.
Another example, just to beat this point to death. On the left we see the real 3D geometry we’re trying to represent. Our “wall of bricks”, as it were. The middle is the resulting normal map. And on the right is what you get in the game: A flat surface that’s shaded as if it was 3D. I’ll add that I love this particular normal map. Its extreme surfaces, large shapes, and lack of symmetry make it ideal for testing.
One last note before I get started is that Michael Abrash has said that “normal maps don’t look good” in VR. The whole thrust of the effect is to make a surface look bumpy without having to make the actual bumps, but apparently the illusion is shattered if you’re wearing a headset that gives you stereoscopic vision. Your depth perception kicks in and notices that the surface is actually flat. It goes back to looking like wallpaper, like in the days before 2004. It’s entirely possible that when VR comes we may have to brute-force render those bricks after all. We’ll see what happens as VR matures.
(I’ve ordered an Oculus Dev kit 2. They start shipping in August. I have no idea when I should expect mine.)
But that’s a worry for another day. For now we’re just getting bump mapping working.
The first problem we run into is that we don’t just want to use normal maps on the floor. Above I mentioned that the normal map behaves like a tile on the floor. Unfortunately, this is true even when it’s applied to the walls and ceiling.
|Obviously the lighting on the walls makes no sense, but also note how our outward-facing “bumps” have been inverted to depressions on the ceiling. The normal map malfunctions everywhere that isn’t a perfectly flat floor.|
The bumps continue pointing up, even when they should be pointing sideways. Now, the obvious thing to do would be to rotate the normals. If we’re doing a wall, then before we do our lighting calculations we can just rotate the normal value 90 degrees and the values will then behave like a wall. This works just fine, but is slow. It means we have to do complex re-orientation of the normal values for every single pixel we draw.
You might remember this diagram from a few weeks ago:
For any given rectangle (it’s actually a pair of triangles; the GPU can only think in triangles and has no idea what a rectangle is, poor thing) we have 4 vertices and usually several hundred pixels to draw. If we re-orient the normals, it means doing the same rotation on all those hundreds of pixels. The faster (although somewhat more convoluted) system is to rotate the light itself in the vertex shader. If we’re dealing with a wall, then instead of turning the normals 90 degrees we turn the light 90 degrees in the opposite direction. Then the pixel shader is set up to just treat every single polygon as if it was the floor.
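Here's a sketch of that trick in Python. (The matrix math is my illustration of the standard tangent-space approach, not code from this project; the vectors are made up.)

```python
# Sketch: instead of rotating every pixel's normal to match the wall,
# rotate the light once per vertex into the surface's own "tangent space",
# where the pixel shader can pretend every surface is the floor.

def mat_vec(m, v):
    """Multiply a 3x3 matrix (tuple of rows) by a vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Tangent, bitangent, and normal for a wall that faces east (+X).
# Using these as the rows of a matrix transforms a world-space vector
# into the wall's tangent space.
tangent   = (0.0, 1.0, 0.0)   # the texture's "east" direction along the wall
bitangent = (0.0, 0.0, 1.0)   # the texture's "north" direction up the wall
normal    = (1.0, 0.0, 0.0)   # the direction the wall faces
tbn = (tangent, bitangent, normal)

world_light = (1.0, 0.0, 0.0)  # light shining straight at the wall

# Done once per vertex, not once per pixel:
tangent_light = mat_vec(tbn, world_light)
print(tangent_light)           # → (0.0, 0.0, 1.0)
```

In tangent space the light now points straight "down" at the virtual floor tile, so the hundreds of pixels inside the rectangle can compare it directly against the raw normal-map values with no per-pixel rotation.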
It actually wasn’t quite this easy. Since I’m used to working with low-tech rendering systems, I’m usually lazy about my texture mapping. In an old game you might have a regular old brick texture. If you mirrored it, it was no big deal. You could even flip the image vertically. It didn’t matter. It’s just bricks. The only time you had to really care about orientation was when you had textures with words on them.
But orientation becomes very important when you’re using normal maps. If the normal map ends up facing the wrong way, then it can end up lit the wrong way. Details that should poke out (like bricks) become indentations.
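A toy illustration of why this happens (hypothetical one-row "normal map" data, not from any real game):

```python
# Mirroring a texture reverses the ORDER of the texels, but each texel's
# red channel still encodes the same east/west facing it was painted with.

def decode_x(red):
    """Red channel (0-255) back to the normal's west-east component."""
    return red / 255.0 * 2.0 - 1.0

# One row of a normal map across a single bump:
# west slope (faces west), top (faces up), east slope (faces east).
row = [0, 128, 255]

mirrored = list(reversed(row))  # the same row on a mirror-mapped wall

# The texel that now sits on the west slope still claims to face east:
print(decode_x(row[0]))         # → -1.0 (west, as authored)
print(decode_x(mirrored[0]))    # → 1.0 (east, on the west slope!)
```

The light strikes the wrong edge of every bump, and protrusions read as indentations.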
I had an embarrassing comedy of errors because I forgot about this. I thought I was done, turned around, and saw one wall was inverted horizontally so that the protruding bubbles you see in these screenshots were catching light from the wrong edge. So I messed around with the math and fixed it. Then I turned around again and saw the walls which were previously fine were now inverted. I went around in circles like this for much longer than I care to admit. It finally dawned on me that the normal mapping had been working just fine the first time. I just had the texture mapped the wrong way around on one of my walls.
I felt like an Olympic athlete that won the gold in jumping hurdles and then tripped six times trying to ascend the podium for my medal. At any rate, let’s celebrate by lighting the scene like we’re game developers in 1997!
So it works. We’ve got a nice system that can handle arbitrary scenery and arbitrary lights and it will light, normal-map, and shade everything correctly. It doesn’t need to be a block world. This ought to work on any kind of scenery I throw at it.
The downside is that it’s really shockingly slow. Considering Doom 3 ran just fine on 2004 machines, the stuff I’m doing here is ridiculously slow, dropping all the way down to 30FPS when we should be hitting twice that with lots of time to spare. We’ll take a look at that next time.