This first part isn’t important to the project. But we’re talking about it anyway basically because I want to.
Obviously in 3D space, the concept of which way is up or forward is completely arbitrary. We’ve got three axes, one for each spatial dimension, universally named X, Y, and Z. You can arrange these any way you like. If you want, X can be down, Y can be forward, and Z left. If we’re looking to assign an axis to each of the directions left-right, back-forward, and up-down, then we can do it six different ways:
XYZ, XZY, YZX, YXZ, ZXY, or ZYX.
Furthermore, we can change the orientation of any of these axes, so if we chose XYZ, we could have positive X values go east and negative values west, or we can flip that around and have the axis point the other way. So there are six ways to organize our axes, and within each of those there are eight different combinations of which way they point.
In a pure mathematical sense, none of this matters. It’s all arbitrary. Instead of XYZ you can name your axes HUMPERDINK, SNAGGLETOOTH, and CARROTJUICE. They can be in any order and point any way you like. The math will all work out. But from a practical standpoint, we’ve basically settled on some conventions, and you shouldn’t break from those unless your plan is to drive people crazy.
In the end, any of these coordinate systems will fall into one of two groups: right-handed or left-handed systems:
|NERD GANG SIGNS. From Wikipedia. Relevant article here: Cartesian coordinate system.|
Take a right-handed coordinate system and flip one axis, and you have a left-handed system. Flip another, and you’re back to a right-handed one.
Fine so far? No? Sorry. I tried.
By far the most popular coordinate system (or at least, the one I’ve encountered the most) is one where X points east, Y points up, and Z points south. This is a right-handed system. This is used by Oculus, all id Software games, and my former employer, and I’m sure a lot of other games out there. We’ll call this system Doom-space. Both Unity and Unreal Engine use left-handed systems, although I can’t remember how they arrange their axes off the top of my head. Personally, I’ve favored a system where X points east, Y points north, and Z points up. This works out to be a left-handed system. We’ll call this system Shamus-space, since in Euclidean geometry my ego is unbounded.
For the past few years I’ve favored Shamus-space because to move from 3D to 2D (let’s say we want to depict where the player is on an overhead world map) all you need to do is throw away the Z value. If you’re using Doom-space, then to make the transition you throw away Y, invert Z, and re-assign it to Y. (This is assuming you use the default OpenGL 2D mapping where X runs left-to-right and Y runs bottom-to-top, which is the ACTUAL source of all this chaos.) And that’s a lot more cumbersome and prone to mistakes. On the other hand (literally!), OpenGL defaults to a right-handed system, so to use a left-handed system you’ve gotta flip an axis. So it always feels like you’re at odds with the underlying system.
Sticking to my unconventional system has its costs. Whenever I check out code snippets or example programs from other coders, I always have to juggle everything around to make it work in Shamus-space, since Doom-space is so much more common. (If not more common in practice, then at least more common among the types of hobby-coders who share their work online.) In the long run, I think this cost is probably more severe than the occasional annoyance of not having it feel intuitive to me. So for this project we’re going to use Doom-space.
Also, I am sort of planning ahead. I don’t have an Oculus yet, but I plan to get one someday and I want to be able to re-use this code when that happens. Oculus provides you with all of the headset position and rotation info in Doom-space, and I do NOT want to have to constantly convert between the two. Eugh.
EDIT: And it looks like I inverted the north/south axis in my above description. I’m not going to fix it, because this is a great example of the kind of confusion I keep running into.
Anyway. Let’s get this started. I don’t want to belabor the first steps of setting up heightmap terrain. I’ve already done three projects that involved heightmaps. Let’s skip the heightmap stuff and get to the shader work.
In the old days, we would begin with a flat grid:
|Taken from this ancient post.|
And then we would lift the points up to create hills:
That hill-building would be done by the CPU. We’d build all these polygons and then send them off to be rendered. But here we can skip that step and just dump all that work onto the GPU. (Graphics Processing Unit. Literally: your graphics card. Don’t let the name mislead you. It’s not a single processor, but many. Many many.) To do that we use a shader, which is a program that runs on your graphics card instead of on your CPU like all your other software. We create a shader, compile it, and then send it over to the graphics card to be used. That program will control how polygons are rendered, and can do all sorts of nifty things without troubling our poor overworked CPU.
When we’re using shaders we just render the original flat plane, and provide the shader with an extra bit of info: A texture image like this one:
We use the color values as elevation. So, basically we’re looking at a map of the world, and lighter = higher. The white spots will be mountain tops and black spots the low points. This is just to get us going. Eventually we’ll generate our terrain procedurally, but for now this is a quick way to get some polygons to work with. So as the flat plane is being rendered, the shader is looking at this texture, pulling out a color value, and quickly lifting up the vertex before proceeding.
Just so I can see what I’m doing, I have it color the terrain based on height. This forms some arbitrary strata. Again, this is just to get us going so we’re not looking at a field of solid color.
You’ll note that the world is flat-shaded. There’s no shading, no shadowing, nothing to give us a sense of contour. If not for my half-assed coloring, the terrain would be a single flat color. This is because, for the purposes of lighting, it’s still rendering a perfectly flat plane. We have the information to deform the plane to make hills, but we don’t have the information to know the angle of any particular point on the surface when we’re drawing it. If we don’t know the angle, then we don’t know how light will interact with it, which means we can’t shade it. For that we need a normal map.
We’ll do that next time.