It’s been a while since I talked about this series, but Goodfellow has been putting out a steady supply of really interesting work while my attention has been elsewhere. Part 16 of the series is now up, and it’s full of interesting ideas. Let me give a cliff notes version of what he’s doing:
Sending data to your graphics card is slow. (Relatively speaking.) Your graphics card is sort of like another computer. It has its own memory and its own processors. Your PC sends it a fat wad of data describing the position of the polygons in the world, and the GPU (your graphics card) has a jolly good think. When it’s done, it sends back the finished image. (Basically.) The problem is: There’s a limit to how fast data can be moved between the two. It’s like two bustling cities with vast ten-lane highway systems, but between the two is just a dirt lane.
The traffic between these places is measured in bytes. One byte can hold an integer from 0 to 255. That’s it. In C++, you can make a variable that holds exactly this. If you’ve got two bytes, you can store values from 0 to 65,535. Most of the time in graphics programming, we’re using variables called floats. A float is 4 bytes, and can store non-integer numbers like 3.14 or 0.00001. When you send a vertex off to be rendered, it needs 3 float values, one each for the x, y, and z coordinates that say where the vertex is located. At four bytes each, that works out to 12 bytes. We also need three more floats to describe the texture. And we need three more for the surface normal, which describes which way this vertex is facing, for the purposes of lighting.
That’s nine float values. At 4 bytes each, that’s 36 total bytes. If you try to render an object with 1,000 vertex points (chump change) you need 36,000 bytes, which is just over 35 kilobytes. Again, not a big deal. But once you start pumping millions of the dang things through the system you end up with a horrible bottleneck. You can send that data every frame and clog up your dirt road, or you can try to store it all on the graphics card and eat up all your GPU memory, but either way, you’re dealing with a glut of data.
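If you want to see that in C++ terms, here’s a rough sketch of what that 36-byte vertex might look like. (The struct and the field names are my own invention for illustration, not something pulled from Goodfellow’s code.)

```cpp
// A "full fat" vertex, just to make the arithmetic concrete.
struct Vertex
{
    float x, y, z;    // position:       12 bytes
    float u, v, w;    // texture coords: 12 bytes
    float nx, ny, nz; // surface normal: 12 bytes
};

static_assert(sizeof(Vertex) == 36, "nine floats at four bytes each");

// 1,000 vertices * 36 bytes = 36,000 bytes. Millions of them add up fast.
```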
But Goodfellow has implemented a really clever idea. Unlike more traditional games, a Minecraft-style world is made from cubes that are (assuming the programmer is not an idiot) exactly 1 unit in size. So even though you’re using float values that can store stuff like “12,552.08423”, the values are all 1.0, 2.0, 3.0, and so on. They’re simple whole numbers. They would fit in a single byte. In fact, you don’t even need the whole byte. Likewise, surface normals are usually able to describe vertices facing in any direction – that’s how you can make a sphere that is smoothly shaded. However, we’re rendering cubes, and the sides of a cube face in one of just six different directions. Instead of three floats at four bytes each, we only need part of a byte.
So what he’s doing is reducing all of these values to integers, and “packing” them together. That is, several different pieces of data are sharing a single byte.
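I don’t know the exact bit layout Goodfellow settled on, but the general flavor of the trick looks something like this. (A minimal sketch of my own, assuming block-local coordinates that fit in a byte and a normal that’s just an index from 0 to 5.)

```cpp
#include <cstdint>

// Squeeze a whole vertex into one 32-bit integer instead of 36 bytes of floats:
// x, y, z each get a byte, and the normal gets a small index in the top byte.
inline uint32_t PackVertex(uint8_t x, uint8_t y, uint8_t z, uint8_t normal)
{
    return  static_cast<uint32_t>(x)
         | (static_cast<uint32_t>(y)      << 8)
         | (static_cast<uint32_t>(z)      << 16)
         | (static_cast<uint32_t>(normal) << 24); // 0-5: +X, -X, +Y, -Y, +Z, -Z
}
```

That’s 4 bytes per vertex instead of 36, which is the whole point: the dirt road between the CPU and GPU has to carry a ninth of the traffic.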
Imagine a guy doing the books for his company. In most cases, he’d fill in each bit of paperwork with the employee’s full name. But because of some freak of luck or extreme nepotism, everyone at the company is named either Adams, Smith, or Zoidberg. He can save himself some hassle by filling out the paperwork with A, S, or Z as the last name, as long as he translates it back into the full last name when he goes to fill out the paychecks.
Now, this takes some extra thinking on the part of both the CPU and GPU. Goodfellow has to write his game to condense everything into this shorthand. Then he has to write another program for the GPU (called the “shader”) that will take the shorthand and turn it back into the full 36 bytes of data for rendering.
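The real unpacking lives in the shader, which is written in a shader language rather than C++, but the logic works out to something like this C++ sketch. (Again, the layout and the normal numbering here are my assumptions, not his.)

```cpp
#include <cstdint>

struct UnpackedVertex
{
    float x, y, z;    // position
    float nx, ny, nz; // surface normal
};

inline UnpackedVertex UnpackVertex(uint32_t packed)
{
    // The six possible cube-face normals. The order matches my PackVertex sketch.
    static const float normals[6][3] = {
        { 1, 0, 0}, {-1, 0, 0},
        { 0, 1, 0}, { 0,-1, 0},
        { 0, 0, 1}, { 0, 0,-1},
    };

    UnpackedVertex v;
    // Pull each field back out with shifts and masks, then promote to floats.
    v.x = static_cast<float>( packed        & 0xFF);
    v.y = static_cast<float>((packed >> 8)  & 0xFF);
    v.z = static_cast<float>((packed >> 16) & 0xFF);

    const uint32_t n = packed >> 24; // assumed to be 0 through 5
    v.nx = normals[n][0];
    v.ny = normals[n][1];
    v.nz = normals[n][2];
    return v;
}
```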
I’ve never heard of anyone doing something like this. I would normally be worried that a process like this would slow things down, but it turns out that Minecraft-style rendering isn’t really taxing the GPU. It has plenty of time for this sort of business. (Remember, the GPUs of today are made to draw bump-mapped polygons with several textures and all kinds of exotic lighting effects on them. Simply drawing flat cube faces leaves the GPU feeling bored and under-appreciated.)
So he’s getting all this for “free”.
There’s a lot more going on. Be sure to check out the full article.