So last time I described how OpenGL used to work in the pre-graphics card stone age. Those were simpler, clearer days. Yes, they were also slow as hell and unable to do much in the way of fancy graphics. But, you know, simple.
But the simplicity couldn’t last. The evolution of OpenGL is basically a long series of refinements where more and more work was gradually moved to the graphics card. Let’s go over them.
1. Texture Mapping

As far as I can tell, this was the “killer app” of graphics cards. Your game shoves the texture maps over to the graphics card. Then it tells OpenGL how big the canvas (the screen) is. Then it sends a bunch of 2D triangles along the lines of, “This triangle occupies this part of the screen and uses such-and-such part of this texture.” The graphics card would do all the work of coloring those triangles in with pixels.
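To make that concrete, here’s a minimal sketch of what old-school, immediate-mode OpenGL looked like. It assumes a window and GL context already exist, and my_texture_id is a placeholder name for a texture that’s already been uploaded.

```c
/* Sketch of 1990s-style immediate-mode OpenGL: one textured 2D triangle.
   Assumes a GL context exists and my_texture_id names a texture that was
   already uploaded with glTexImage2D. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, my_texture_id);

glBegin(GL_TRIANGLES);
  /* For each vertex: which spot on the texture to use, then where the
     vertex lands on screen. The card fills in all the pixels in between. */
  glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
  glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
  glTexCoord2f(0.5f, 1.0f); glVertex2f( 0.0f,  0.5f);
glEnd();
```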
Let’s talk about a CPU. The CPU inside your computer is a complex beast. Even ignoring the fact that it’s actually many cores strapped together, it needs to be able to perform all sorts of operations. Code needs to be able to branch. (If X then do thing A or else do thing B.) It needs to be able to constantly switch between running one program and another, completely different one. It’s got this complex system where it brings in data from main memory and moves it into progressively faster but smaller banks of memory. Basically, it needs to be able to run anything. I’ve heard people refer to this as being a machine that is Turing Complete.
Not so with graphics cards. They just needed to fill in triangles with pixels. They didn’t need to do three completely different things at once, or handle complex branching code. Since their output was in colored pixels and not hard data, a certain degree of slop was allowed. The math could make certain approximations and shortcuts because if the output was 0.01% off, nobody would be able to tell. Even if you had superhuman eyes that could spot the subtle color differences, you’re using a monitor that can’t display differences that slight.
All of this meant that graphics processors could be much simpler. It’s a bit like comparing a classic fast-food place to the work of a single highly trained chef. The chef can make you almost anything you can ask for, while the fast-food place can only make hamburgers. But the fast-food place is optimized for it and can crank out 24 hamburgers a minute. (Something like that. You young people don’t realize this, but fast food used to be FAST. Back before McNuggets, McChicken, McRib, McFish, McSoup, McPizza, Fajitas, Grilled Chicken, Chicken Tenders, six different salads, and ten different types of hamburger.) Simplicity is speed.
Simpler cores weren’t just faster, they were also smaller. So instead of one giant core, a graphics card could have a whole bunch of them. And those cores would do nothing but take in 2D triangles and spit out pixels.
2. Transform and Lighting
So it’s sometime in the late 90’s and we’ve got these graphics cards that take 2D triangles (that is, triangles expressed in terms of where they will appear on the screen; the graphics card has NO IDEA where those vertices might be in terms of your 3D game) and fill them in with pixels. That’s cool, I guess. It was certainly a massive boost in terms of our rendering capabilities. But it was clear they could be doing a lot more.
There’s a step I’ve been sort of glossing over here. That’s the bit of math you have to do to figure out if a particular triangle will end up on-screen, and if so, where. “Hm. Given that the camera is in position C and looking in direction D, and this particular triangle is so many units away, then the vertices of this triangle will end up on this part of the screen.” Like coloring in triangles with pixels, this is yet another brute-force, bulk, doing-math-on-three-numbers kind of job. (Yes, nitpickers, it’s actually FOUR numbers. But I don’t want to burn through a couple of paragraphs explaining why. We’re already in a two-levels-deep digression. Let It Go.) Which means it’s a good thing to offload onto the graphics card.
This process of translating vertices from game-space to screen-space is called “transform and lighting”. (I’m glossing over the whole “lighting” thing here because I don’t think we need to go over it in detail. And we’ve got enough ground to cover as it is.)
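If you want a picture of what the “transform” half actually does, here’s a rough sketch in plain C. The matrix layout and function names are mine, purely for illustration, but the multiply-then-divide is the same bulk work the T&L hardware took over (and this is where that fourth number earns its keep).

```c
/* Illustrative sketch: push one vertex through a 4x4 view/projection
   matrix (stored column-major, OpenGL style), then map it to pixels. */
typedef struct { float x, y, z, w; } Vec4;

Vec4 transform(const float m[16], Vec4 v) {
    Vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

/* Clip space -> screen space: divide by w, then scale to fit the canvas. */
void to_screen(Vec4 clip, int screen_w, int screen_h, float *sx, float *sy) {
    float ndc_x = clip.x / clip.w;
    float ndc_y = clip.y / clip.w;
    *sx = (ndc_x * 0.5f + 0.5f) * (float)screen_w;
    *sy = (ndc_y * 0.5f + 0.5f) * (float)screen_h;
}
```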
Because I spent so much of my career riding the tail end of the technology curve, the timeline is always a bit muddled for me. Apparently all of this happened in 1999, but I didn’t really think about it for at least another four years or so. (Which is a long time in computer graphics, and a really long time at this particular point in graphics evolution.)
According to the NVIDIA page, the new T&L-capable cards offered “an order of magnitude increase in visual complexity”. An “order of magnitude” is just engineer talk for “ten times more / less”, but it sounds so much more impressive and technical than just saying “ten times”. On one hand, I’d caution against believing breathless claims made by marketing. On the other hand, that sounds pretty accurate. We really did get a huge jump up in model complexity.
Here is a screenshot from the pre-T&L game Thief:
The canister on that table is a cylinder with 5 sides. (Not counting the top and bottom.) It looks ridiculous. At the time, every polygon was a precious thing, and we couldn’t afford to waste them. So our games were filled with coarse geometric shapes. When we were able to offload a bunch of polygon processing to the GPU, we suddenly had enough polygons to make round things look round.
The problem was, this jump was almost too big. It was so big that we kind of didn’t need to make another. If we double the number of sides on the canister above, we end up with something that looks completely round to the user. Maybe if they mash their face into the object they’ll be able to see the angled edges, but it’s no longer a glaring problem. The jump from 5 polygons to 10 is a lot more visually important than the jump from (say) 10 to 100, or even 1,000. We suddenly had as many polygons as we could hope to use on those old machines.
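You can put rough numbers on that diminishing-returns claim. For a circle approximated by a regular polygon with n sides, the worst-case gap between a flat edge and the true curve is r * (1 - cos(pi / n)). A throwaway snippet (the 10 cm radius is just something I made up for the canister) shows how quickly that error shrinks:

```c
/* Worst-case distance between an n-sided polygon and the circle it
   approximates: r * (1 - cos(pi / n)). The radius is a made-up example. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double r = 10.0;                      /* a 10 cm radius canister, say */
    int sides[] = { 5, 10, 100, 1000 };
    for (int i = 0; i < 4; i++) {
        double err = r * (1.0 - cos(M_PI / sides[i]));
        printf("%4d sides: worst-case error %.5f cm\n", sides[i], err);
    }
    /* Prints roughly 1.9 cm, 0.49 cm, 0.005 cm, 0.00005 cm: the first
       doubling buys far more than the later ten-fold jumps. */
    return 0;
}
```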
3. Vertex Buffers
Remember that a graphics card is, in a lot of ways, a separate computer. It’s got its own memory and its own processors. So when you want to tell the graphics card to render a triangle, you need to send it all of the information about that triangle. There is a bit of a choke point between the devices, meaning it takes much longer to send a triangle to the graphics card than it does to (say) move the triangle from one part of memory to another.
Once you have T&L, you’ll quickly notice that you spend a lot of time sending the exact same data to the GPU over and over again. Every single frame, you tell the graphics card about the same exact polygons that are still in the same positions. (The walls of your level, and other non-dynamic stuff.) The camera is moving, not the walls. So why do I need to keep sending the same huge load of data every frame?
Vertex buffers give us a way to shove all that data over to the GPU once and store it there. So instead of sending 1,000 polygons every frame, I just need to tell the card, “Remember those 1,000 triangles I gave you? Draw them again, but with the camera in this new location.”
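In code, that looks roughly like the sketch below, in the old fixed-function style of the era. level_verts and level_vert_count are placeholder names for your level geometry, and the exact setup varies by OpenGL version.

```c
/* One-time upload: hand the level geometry to the GPU and leave it there. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             level_vert_count * 3 * sizeof(float),   /* x, y, z per vertex */
             level_verts,
             GL_STATIC_DRAW);                         /* "this won't change" hint */

/* Every frame after that: just point at the buffer and say "draw". */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, level_vert_count);
```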
4. Shaders

Like I said a few paragraphs ago: If we wanted games to continue improving visually, it was pretty clear that mindlessly cranking up the polygon counts wasn’t the way to go. To take the next step, we needed to change how we drew those polygons. And for that we needed shaders.
From this point, it was no longer possible to offer a generalized solution for all games. You could make a graphics card that was good at cel shading, but that would only help with games that were cel shaded. You could make a card good at bump mapping, but that wouldn’t do anything for games without bump maps. Instead of adding more “features” to the graphics card, we just needed to give developers a way to control all that raw rendering power directly. We needed to give them a way to write programs that ran on the graphics hardware.
So now developers need to write two shaders: a vertex shader to do the Transform & Lighting, and a fragment shader to decide the color of each pixel the rasterizer fills in.
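Here’s a minimal sketch of what wiring up those two stages looks like with old-style GLSL, just to make the division of labor concrete. The shaders themselves are the dumbest possible pair (pass the vertex through, paint everything orange), nothing a real game would ship, and I’m skipping the error checking you’d want in practice.

```c
/* The two programmable stages, expressed as GLSL source strings. */
const char *vs_src =
    "#version 110\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n" /* the old T&L step, now yours */
    "}\n";

const char *fs_src =
    "#version 110\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"                /* one orange pixel at a time */
    "}\n";

/* Compile both stages, link them into a program, and switch it on. */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vs_src, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fs_src, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);   /* everything drawn from here on runs through your shaders */
```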
Shaders made a lot of things possible or practical: Light bloom, anti-aliasing, various lighting tricks, normal maps. This was a massive turning point in game development. In one leap:
- Games took a massive step forward in visual quality. I think this was actually really important from a cultural perspective. Serious games now looked good enough that they could show up in a TV commercial without looking ridiculous. Before this point, only cartoon-y games (Mario, et al) could get away with this.
- Games took a huge jump in expense to produce.
- Games took a big jump in complexity. In the late 90’s, it was still possible for a small team to get together and make a AAA game without publisher backing. (BioWare is the oft-cited example of this.) This was the beginning of the end of that era. (You could argue that with the release of so many free AAA game engines, those days are returning. But that’s another article.)
Like a lot of technical advancements, you can argue about where to “officially” draw this particular line. I draw it at 2004. Half-Life 2. Doom 3. Thief: Deadly Shadows. Compare each of those games with its predecessor (to be clear, I mean you should compare Doom 3 with Quake III Arena, not with Doom 2) to see just how extreme this move was. Visually, I think Doom 3 has more in common with the games of 2015 than it does with the games of 1999.
|Left is without normal maps. Right is standard rendering. Note the keyboard and face.|
5. More Shaders
Since 2004, the graphics race has mostly been about what we can do with shaders. I can’t think of any big changes since then. Every once in a while shader programs get some new ability. Sometimes the ability is explicit. We’ve added some ability to manipulate pixel data in ways that weren’t possible before. Sometimes the feature is implicit. Graphics hardware is now fast enough to do some heavy-duty lighting effect that would have been too slow on the old cards. But either way, the steps have been more evolutionary than revolutionary.
And to be honest, for me all the changes kind of blur together here. The documentation on the OpenGL shading languages isn’t that great to begin with, so it’s pretty hard to piece together the various iterations of the language if you weren’t already following them when they happened.
The point of all this is: Right now, in 2015, you can use any or none of these features of OpenGL. You can render with all of the advancements of the last 20 years, or you can render raw immediate-mode triangles like it’s 1993. This has made OpenGL sort of cluttered, confusing, obtuse, and ugly. It’s poisoned the well of documentation by making sure there are five conflicting answers to every question, and it’s difficult for the student to know if the answer they’re reading is actually the most recent.
So now the Khronos Group – the folks behind OpenGL – are wiping the slate clean and trying to come up with something specifically designed for the world of rendering as it exists today. Vulkan is the new way of doing things, and it is not an extension of OpenGL. It’s a new beast.
I know nothing about it, aside from the overview-style documents I’ve read and the tech demos I’ve seen. Given my habit of lagging behind technology until the documentation has a chance to catch up, I probably won’t mess with Vulkan for a few years.
In the meantime: OpenGL is strange and difficult, and there’s nothing we can do about it. This is a rotten time to be learning low-level rendering stuff. The old way is a mess and the new way isn’t ready yet.