By Shamus | Aug 4, 2013 | Programming
As in years past, I thought I’d step through John Carmack’s keynote and translate some of it into non-programmer talk, because I think there’s a lot here that’s worth talking about. Note that I am not an expert on all these things. I’m just drawing from my increasingly dusty experience as a programmer who long ago specialized in graphics.
The above embed is from IGN. It was the best, most reliable version I could find on YouTube as of this writing. The times given below are approximate.
02:00 “It’s amazing how close [the Xbox One and the PS4] are in capabilities.”
This quote was picked up by all the gaming sites so there’s not too much to say about it. I suppose this is a really juicy quote for people looking to watch a flamewar between the two major groups of console adherents, but it’s not really a controversial statement. The two machines are very similar, and that similarity was probably the result of both companies hunting for the sweet spot on the power / cost tradeoff when they went shopping for processors for their next-gen consoles.
John is talking about the Larrabee architecture Intel was working on back in 2010 or so.
Right now, your GPU (your graphics card) and your CPU (the processor on your computer) are very different beasts. The GPU is specialized for the types of things you do in graphics processing. As a primitive example: You spend a LOT of time processing groups of 3 numbers. They’re what puts the “3” in “3D graphics”. If you want to calculate the position of something, you need to do some kind of math on those X, Y, Z values. Your GPU is designed to have multiple math pipelines beside each other so it can do the calculations concurrently. If you want to know X*2, Y*2, and Z*2, it can do all three math operations in the time it would take a standard CPU to do one. This sort of situation happens constantly in graphics processing, but not terribly often in regular processing.
For example, your regular CPU might be doing X + (Y * (Z + 1)). Or whatever. GPU architecture couldn’t help you with this, because the operations can’t be done in parallel. You can’t add X to anything until you have Y * (Z + 1), and you can’t do that multiplication until you’re done computing Z + 1, so the three operations have to be done in order.
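If you want to see that difference in code form, here’s a rough sketch using SSE instructions on a regular CPU. This isn’t GPU code and it isn’t from the talk; it’s just my own illustration of independent math versus a dependent chain.

```cpp
#include <xmmintrin.h>

// X*2, Y*2, Z*2: the three multiplies are independent, so one packed
// instruction handles all of them at once. A GPU does the same trick on a
// much wider scale.
__m128 scale_position(__m128 xyz)
{
    return _mm_mul_ps(xyz, _mm_set1_ps(2.0f));
}

// X + (Y * (Z + 1)): every step needs the result of the previous one, so
// the hardware has to do them one after another, no matter how many
// parallel lanes it has.
float dependent_chain(float x, float y, float z)
{
    float a = z + 1.0f;  // must finish first
    float b = y * a;     // needs a
    return x + b;        // needs b
}
```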
A GPU is amazing at doing 3D rendering, but you would have a devil of a time putting all of its power to use if you tried to use it for general computing. The reverse is true for a CPU. A CPU is a railroad track that lets a single train pass very quickly. A GPU is a three (actually way more than three) lane highway where cars can travel side-by-side, so more cars can flow through at any given moment. You wouldn’t make the train go any faster by giving it more lanes, because the cars are all linked together in a chain.
Larrabee was an attempt to have the best of both worlds. I guess? I’ve only read a little about Larrabee and I don’t really understand what the end product would have looked like.
03:45 “Things like Voxel or splatting architecture or one of these things that’s not triangle rasterization.”
“Rasterization” is fancy programmer talk for “put the pixels on the screen”. Triangle rasterization is how all rendering is done right now. Everything that winds up on the screen in a videogame is made of triangles. Even text is just a picture of text, slapped onto a pair of triangles arranged in a rectangle. Even indie retro artsy 2D sidescrolling games that use hand-drawn characters are created by taking that hand-drawn art, turning the art into texture maps, and projecting the texture maps onto triangles.
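If you’re curious what “a pair of triangles arranged in a rectangle” looks like to the programmer, here’s a rough sketch of the vertex data you’d hand to the graphics card. The struct and the numbers are my own made-up illustration, not anything from the keynote.

```cpp
// One textured rectangle (a sprite, a UI element, a rendered line of text)
// expressed as the two triangles the graphics card actually draws.
struct Vertex {
    float x, y;  // position on screen
    float u, v;  // where to sample from the texture
};

const Vertex quad[6] = {
    // Triangle 1: lower-left, lower-right, upper-right
    {0.0f, 0.0f,  0.0f, 0.0f},
    {1.0f, 0.0f,  1.0f, 0.0f},
    {1.0f, 1.0f,  1.0f, 1.0f},
    // Triangle 2: lower-left, upper-right, upper-left
    {0.0f, 0.0f,  0.0f, 0.0f},
    {1.0f, 1.0f,  1.0f, 1.0f},
    {0.0f, 1.0f,  0.0f, 1.0f},
};
```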
Triangle rasterization was the dominant style of rendering back in the ’90s, when graphics cards were rising to prominence, so cards were designed around it. From that point on we were basically locked into using triangles. If for some reason you didn’t want to use triangles for rendering, then your game wouldn’t be able to use the graphics card in the computer, which means your game would run very slowly. You’d also have to write your rendering engine from start to finish, and you wouldn’t be able to use any of the libraries, toolkits, code snippets, or anything else out there. You’d have to build everything yourself. This is a lot of work just to make your game incredibly slow.
The word “voxel” here confuses some people because it’s gotten tangled up over the years. There’s voxel data (storing a scene as a 3D grid of little cubes) and there’s voxel rendering. Minecraft uses voxel data, but it’s rendered using good old-fashioned triangles. If you want to see voxel rendering, then the most recent example that I know about is the 1999 game Outcast. And since that game uses voxels instead of triangles, it’s not 3D accelerated. It doesn’t use your graphics card.
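Here’s a toy sketch of that distinction. The chunk size and layout are made up for illustration; the point is that voxel data is just a grid of values, and turning it into triangles (or not) is a separate step.

```cpp
#include <cstdint>

constexpr int N = 16;           // one 16x16x16 chunk, size chosen arbitrarily
std::uint8_t blocks[N][N][N];   // 0 = air, anything else = a solid block

bool solid(int x, int y, int z)
{
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
        return false;  // treat everything outside the chunk as air
    return blocks[x][y][z] != 0;
}

// A triangle-based engine walks the grid and only builds geometry for faces
// that touch air; each exposed face becomes two triangles for the GPU.
// A voxel renderer would skip this step and march through the grid directly.
int count_exposed_faces()
{
    int faces = 0;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z) {
                if (!solid(x, y, z))
                    continue;
                faces += !solid(x + 1, y, z) + !solid(x - 1, y, z)
                       + !solid(x, y + 1, z) + !solid(x, y - 1, z)
                       + !solid(x, y, z + 1) + !solid(x, y, z - 1);
            }
    return faces;
}
```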
07:00 “Tons and tons of memory is the thing that makes game development a lot easier.”
This is really important on a console, where you’re likely reading from optical media. Maybe the player is on some gigantic level and suddenly it’s time for an elaborate cutscene. The cutscene can’t fit in memory at the same time as the level. Which means you need to drop the level data, even though you’re going to need it again the instant the cutscene ends. Since the level is stored on a DVD, it’s going to be very slow to load. Maybe you can put the level on the hard drive – assuming this console has one – but there are limits to how much you’re allowed to stick on the hard drive at one time. So you have to construct this multi-caching system where you pull stuff off the DVD and put SOME of it on the hard drive, and then pull it off the hard drive into memory. And no matter what you do, the player ends up looking at the loading screen twice.
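Here’s a toy sketch of that multi-tiered lookup. The names and containers are made up; a real engine worries about size limits, eviction, and streaming things in the background, but the shape of the problem is the same.

```cpp
#include <map>
#include <string>
#include <vector>

using Blob = std::vector<char>;

// Toy stand-ins for the three tiers. In a real engine the hard drive cache
// has a hard size limit and the disc read costs seconds, not microseconds.
std::map<std::string, Blob> ram_cache;
std::map<std::string, Blob> hdd_cache;

Blob read_from_disc(const std::string& asset)
{
    // Stand-in for the slow optical read; real code streams it off the DVD.
    return Blob(asset.begin(), asset.end());
}

Blob load_asset(const std::string& asset)
{
    auto ram = ram_cache.find(asset);
    if (ram != ram_cache.end())
        return ram->second;                     // best case: already in memory
    auto hdd = hdd_cache.find(asset);
    if (hdd != hdd_cache.end())
        return ram_cache[asset] = hdd->second;  // promote the hard drive copy to RAM
    Blob data = read_from_disc(asset);          // worst case: the loading screen
    hdd_cache[asset] = data;                    // keep a copy for next time
    return ram_cache[asset] = data;
}
```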
It’s like the old river crossing puzzle. You know, it takes seven steps to get a wolf, a goat, and a cabbage across the river because the boat only holds two? Imagine how much easier the puzzle is if the boat can hold all four of you. It stops being a puzzle and you can get on with your day.