By Shamus | Aug 8, 2011
On Friday, August 5th, John Carmack gave the keynote address for Quakecon 2011. He does this every year, and always has interesting and important things to say about the industry.
This year, I thought I’d give my own commentary on what he said. His talk lasted for about an hour and a half, so there’s a lot of ground to cover here. I apologize for the awkwardness of this. Ideally there would be a way to have my comments seamlessly available during his talk, or a link to my comments, but instead we have to do this the other way around. This might be a bit of a pain to watch. I’d suggest simply watching the whole thing rather than trying to pause-and-play your way through it, but whatever works.
First is the talk in full, followed by my comments, with links to the individual timestamps in question.
Before we begin: Man, that thumbnail view of Carmack is really unfortunate. The guy spent an hour and a half looking calm, measured, and thoughtful, and the thumbnail managed to capture this expression of wacky, comedic bafflement.
It’s interesting that he mentioned Skyrim. Remember that ZeniMax now owns id Software. ZeniMax also owns Bethesda, so he’s talking about a product from a sister development studio.
In the past I’ve belabored the systemic development problems I see inside Bethesda with regard to technology, but the basic list is:
- Bugs, or dysfunctional QA process. I don’t know what it is, but their games are always wonky and unstable.
- Inefficient coding. The system specs for Oblivion were about a graphics card generation ahead of where they should have been, as Oldblivion demonstrated.
- Haphazard art assets. Oblivion had 3,000-polygon boulders and 28-polygon tufts of grass. This is a waste even by 2011 standards, and was criminal in 2006. In the same game, they were using 128×128 textures for character faces, which was shockingly low for the times. This could be a management problem. It could be a tools problem. It could be a problem in the company culture. I don’t know, but the behavior has persisted through several games.
- Horrible development tools. The Elder Scrolls Construction Set (the editor for Oblivion) is just a terrible tool. Yes, it can DO everything it needs to do, but the interface is horrible, it eats a ton of resources, and it’s agonizingly slow and unresponsive. Compare this with Doom 3, where the editing process consists of opening up the console, typing “ed”, and immediately being able to edit the world seamlessly in a responsive, intuitive environment.
The point is, the things that Bethesda does poorly are things that id Software does exceptionally well. (Probably better than anyone else.) id Software nails the system specs (sometimes they’re higher than what might be ideal in a market sense, but they’re never higher than they need to be from an engineering standpoint), they have incredible tools, and their software is usually quite stable.
I don’t know how much cooperation or technology sharing we’ll see between these two companies, but I dream of a day where I can get Bethesda gameplay and id technology. Hopefully the next game (after Skyrim) will have some id Tech under the hood.
When John Carmack says something is “dismayingly complicated”, it’s like Lance Armstrong saying something is “exhaustingly strenuous”, or Bill Gates calling something “bankruptingly expensive”.
The megatexture tech does strike me as being that complicated, although it would have been a lot easier to pull off if this were a world where they could still ignore the consoles and go PC-exclusive. A lot of the complexity and effort seems to have centered on the challenge of getting the technology to work in the slow-reading, low-memory environment of the consoles.
I’ll explain megatextures in more detail below.
“Like any good engineer” – I think this problem extends beyond engineers. I call this “artistic myopia”, and I suffer from it now and again. It’s really easy to examine a piece of work in such detail that all you can see are the flaws, and you end up burning a ton of time polishing things that only a tiny fraction of players (most of whom will be fellow industry artists, scrutinizing your work) will ever notice. It’s kind of like how photographers and models look at photos and all they see are pores, wrinkles, and skin blemishes, and end up photoshopping them out, when most viewers will be too caught up looking at the beautiful face to notice. It takes a good bit of mental discipline to force yourself to see the work as the consumer would see it. You need to be able to switch from “professional view” to “consumer view”, and know when it’s time to stop fussing with it.
If you manage to do this, please tell me how.
Here he begins his talk on the megatexturing system, which is the technological “hook” of this game. Here is what megatexturing is all about, as far as I’ve been able to work out:
In all other games, there are separate and distinct texture maps for all the various bits of scenery. I mentioned Oblivion before, where they would have a 128×128 texture for each face in the scene. Here is a 128×128 texture, just to give you an idea of how little data that is:
(Note that this is a random face texture I found through Google. It’s not a texture from Oblivion.)
In another part of the scene you’ll have a piece of fruit with another texture. And another for the bowl holding the fruit, the table holding the bowl, the silverware on the table, the furniture in the room, the clothes people are wearing, the walls, the windows, and so on. You can have hundreds of textures in play at any given time. If done well, objects will be textured according to how large they are, and how closely the player will examine them. Walls are huge, so they need lots of texture data. Silverware is tiny and you only see it at a distance, so it needs very little data. Faces are small, but the camera zooms in on them, so they need lots of data.
The problem is that in wide open spaces, you can end up with a lot of texture maps in memory. You have ten buildings in the distance. They’re too large to simply not render them, and their textures are immense. (Because when you get close, you need a ton of pixels. This is a building after all, and the player will expect to be able to walk right up to the front door without the building turning into a blurry mess.) Some people mistakenly think that mip-mapping solves this, but mips are used to make those huge textures look good when reduced to a small space, as on the buildings in the distance. Mipmaps can’t actually help with the problem that the base texture is gigantic and takes a ton of memory, whether you need it all or not.
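To put rough numbers on this, here’s a back-of-the-envelope sketch. (I’m assuming uncompressed RGBA and an invented 2048×2048 building texture; real games use texture compression, but the ratios don’t change.)

```cpp
#include <cstddef>
#include <cstdio>

// Memory cost of an uncompressed RGBA texture plus its full mip chain.
// Each mip level halves the width and height of the one above it, so
// the whole chain only adds about a third on top of the base level.
size_t TextureBytesWithMips(size_t w, size_t h)
{
    size_t total = 0;
    for (;;) {
        total += w * h * 4;          // 4 bytes per RGBA texel
        if (w == 1 && h == 1) break;
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
    }
    return total;
}

int main()
{
    // The 128x128 face texture from above: tiny.
    printf("128x128 face:       %7zu KB\n", TextureBytesWithMips(128, 128) / 1024);
    // A 2048x2048 building facade (size invented for illustration):
    // resident in full even when the building is a speck on the horizon.
    printf("2048x2048 building: %7zu KB\n", TextureBytesWithMips(2048, 2048) / 1024);
}
```

The mip chain only adds about a third on top of the base level. The base level is the elephant: that one distant building is sitting on roughly 21 MB whether you’re at the front door or a kilometer away.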
See, you can use 99.9% of the graphics memory with no penalty. You can fill the graphics card up with texture data as much as you like, but the moment your memory usage exceeds 100% – even by a little bit – your framerate is going to drop like a rock. You should not be surprised to find yourself in single digits when this happens.
It’s very hard for artists to manage this texture data properly, because they generally don’t have the technical knowledge to understand the nuances of the problem. There’s a lot going on as the player swings their camera around and brings new elements into view and pushes other elements out of view. It’s hard to understand why one building will occlude the others behind it (thus removing them and their memory-sucking textures from the rendering) while a hill of the same size will not. It’s hard to look at the game and see where video memory might be going to waste. In the end you have your artists sweating over esoteric technical details, which is not what you want. Artists make art. Programmers make technology. Try to avoid requiring too much cross-discipline work, because programmers are usually rotten artists and artists have no patience for technological fussing that requires they understand the details of GPU memory management.
The other disadvantage of traditional texturing systems is that they involve texture tiling. The artist takes a picture of a brick wall, and repeats it over the face of a building. If there’s a great big crack in the bricks, or some graffiti, then that same bit of detail will be repeated every (say) 16 meters. You can make textures tile more often, which will make them more detailed at the expense of making them more repetitive. Or you can go the other way, and have less repetition at the expense of everything looking a bit blurrier. Adding more cracks, graffiti, water stains, scorch marks, or discolorations will make the wall much more interesting to look at, but it will also hurt you because the player will see that same bit of interesting detail being repeated again and again.
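To make the tradeoff concrete, here’s a toy version of what the GPU’s texture-wrap mode is doing when it tiles that brick texture across a wall. (Illustrative only; in reality the hardware sampler does this for you.)

```cpp
#include <cmath>

struct UV { float u, v; };

// Map a point on a wall to a spot in a brick texture that repeats every
// 'tileMeters'. Shrinking tileMeters packs more texels per meter of wall
// (sharper bricks), but repeats the pattern -- and every crack and
// graffiti tag painted into it -- that much more often.
UV TiledWallUV(float wallX, float wallY, float tileMeters)
{
    float u = wallX / tileMeters;
    float v = wallY / tileMeters;
    // Keep only the fractional part so the texture wraps around.
    return { u - std::floor(u), v - std::floor(v) };
}

// With tileMeters = 16, a distinctive crack painted at u = 0.2 shows up
// at x = 3.2m, 19.2m, 35.2m, ... down the entire length of the wall.
```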
Megatexturing attempts to solve all of this by letting the artists paint directly onto the world, without worrying about texture memory or tiling at all. They paint everything at a fixed resolution. (So you don’t have the Oblivion problem where some nutter slaps a huge 1024×1024 texture onto a teaspoon, or some other nutter puts a dinky little 64×64 texture onto a wall. If you download enough user mods, you can probably see both of these problems in action.) The editor takes all of this high-resolution data and saves it to disk. They can put all the cracks and scorch marks and other detail anywhere they like, with no performance penalty or need to repeat the same detail elsewhere.
Carmack seems to describe the engine as having a single texture for the whole world, stored at varying resolutions depending on how close you are to any given bit. Josh likened it to Google Earth, which is a pretty good analogy. You can view Fresno in sharp detail without needing to have the data for Helsinki in memory. Like Google Earth, you can scroll from one location to another, and it will add detail where you’re headed and remove detail where you’ve been. Unlike Google Earth, Rage needs a lot more (concurrent) texture data, and it needs it in a tremendous hurry.
At run time, the game engine has to load in all of this data for the area around you. You end up with most of the scenery being drawn from a single texture, which is cut up into little bite-sized chunks. It sort of looks like the Minecraft texture:
Except in Rage, the megatexture is 4096×4096. (That’s 16 times wider than the example image you see above.) Unlike Minecraft, the individual chunks are constantly changing. Higher-detail bits are pulled in as you get close to surfaces. The biggest hurdle isn’t making all of this work, but getting the data off the disk (especially if we’re dealing with a DVD or Blu-ray) and into memory in a timely manner. A lot of Carmack’s talk centers on this challenge.
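Here’s my guess at the shape of the streaming machinery, boiled way down. Every name and number in this sketch is mine, not id’s; it’s just to show the moving parts (fixed-size pages, a fixed-size pool, and an eviction policy):

```cpp
#include <cstdint>
#include <map>

// My cartoon of the streaming side -- all names and numbers invented.
// The world's one giant texture is cut into 128x128 pages. Only the
// pages the camera can currently see, at the detail level it currently
// needs, get a slot in the 4096x4096 physical texture in video memory.
// Everything else stays on the disk.

const int kPageSize     = 128;
const int kPhysicalSize = 4096;
const int kPoolSlots    = (kPhysicalSize / kPageSize) * (kPhysicalSize / kPageSize); // 1,024 slots

struct PageId {
    int mip, x, y;  // detail level, and page coordinates within it
    bool operator<(const PageId& o) const {
        if (mip != o.mip) return mip < o.mip;
        if (x != o.x) return x < o.x;
        return y < o.y;
    }
};

// Coarser data is fine far away: every doubling of the distance to the
// surface drops us one detail level, much like ordinary mip selection.
int MipForDistance(float meters)
{
    int mip = 0;
    for (float d = 4.0f; d < meters; d *= 2.0f) mip++;
    return mip;
}

struct Slot { PageId page; uint64_t lastUsed; };

struct PagePool {
    std::map<PageId, int> resident;   // virtual page -> physical slot
    Slot slots[kPoolSlots] = {};
    uint64_t frame = 0;

    void BeginFrame() { frame++; }

    // Called for every page the renderer touched this frame.
    void Request(const PageId& page)
    {
        auto it = resident.find(page);
        if (it != resident.end()) {       // already in memory
            slots[it->second].lastUsed = frame;
            return;
        }
        int victim = OldestSlot();        // evict least-recently-used
        resident.erase(slots[victim].page);
        // A real engine starts an asynchronous disk read here and keeps
        // drawing from a blurrier, already-resident level until the
        // sharp data actually arrives.
        slots[victim] = { page, frame };
        resident[page] = victim;
    }

    int OldestSlot() const
    {
        int best = 0;
        for (int i = 1; i < kPoolSlots; i++)
            if (slots[i].lastUsed < slots[best].lastUsed) best = i;
        return best;
    }
};
```

The engineering pain Carmack describes lives almost entirely in that Request() miss: the disk read can take longer than a frame, so the renderer has to degrade gracefully instead of stalling.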
When complete, we have a system where the artists won’t have to worry about the technology at all. Well, except for…
The interesting thing is that now the artist is not limited by texture count or resolution as before (since those are now managed by the engine) but by surface area. Assuming I’m following his talk, the amount of texture data in use depends on how much surface area is being drawn. More importantly, each little bit of scenery pays a minimum cover charge to get into the scene. Maybe a one-meter crate uses a 128×128 patch of texture. However, the little lunchbox next to it also eats a 128×128 chunk, even though most of it is going to go to waste. Some of these tiny “lunchbox” items can be merged to share a 128×128 patch. But this will introduce a bunch of expenses that are not obvious to the busy artist. Making one of those “lunchboxes” more than half a meter wide will cause it to nudge out its texture neighbors and suddenly hog a patch all to itself.
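Here’s the arithmetic on that cover charge, with every number invented for illustration:

```cpp
#include <cstdio>

// The "cover charge" in action. At a fixed texel density, an object's
// texture cost is driven by its surface area, but the allocator hands
// out whole 128x128 patches, so everything pays a one-patch minimum.

const int kTexelsPerMeter = 128;  // the fixed world-wide resolution
const int kPatchSize      = 128;  // allocation granularity, in texels

// Patches needed by an object face of the given size, rounded up.
int PatchesFor(float meters)
{
    int texels = (int)(meters * kTexelsPerMeter);
    int edge   = (texels + kPatchSize - 1) / kPatchSize;
    return edge * edge;
}

// Two small items can split one patch only while both fit side by side.
bool CanShare(float aMeters, float bMeters)
{
    return (int)(aMeters * kTexelsPerMeter)
         + (int)(bMeters * kTexelsPerMeter) <= kPatchSize;
}

int main()
{
    printf("1m crate:       %d patch(es)\n", PatchesFor(1.0f)); // 1
    printf("0.3m lunchbox:  %d patch(es)\n", PatchesFor(0.3f)); // 1 -- same cover charge
    printf("two 0.4m boxes share? %s\n", CanShare(0.4f, 0.4f) ? "yes" : "no"); // yes
    printf("two 0.6m boxes share? %s\n", CanShare(0.6f, 0.6f) ? "yes" : "no"); // no -- past half a meter
}
```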
I’d love to know how he handles the UV addressing.
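My best guess, for what it’s worth, is an indirection table, which is how other virtual-texturing demos have handled it; I don’t know that Rage does exactly this. A fragment’s virtual UV picks an entry in a small table, which says where that page currently lives in the physical texture:

```cpp
// A sketch of indirection-based UV addressing, names and sizes mine.
// Mip levels are ignored for clarity, and virtual UVs are assumed to
// be in [0, 1).

struct UV { float u, v; };

const int kTableSize    = 1024;   // virtual pages per side (hypothetical)
const int kPageSize     = 128;    // texels per page edge
const int kPhysicalSize = 4096;   // the physical page pool

struct PageEntry { int physX, physY; };       // page origin in the physical texture
PageEntry gPageTable[kTableSize][kTableSize]; // updated as pages stream in and out

UV PhysicalUV(float virtU, float virtV)
{
    // Which virtual page does this texel fall in, and where inside it?
    float px = virtU * kTableSize;
    float py = virtV * kTableSize;
    int   ix = (int)px, iy = (int)py;
    float fx = px - ix, fy = py - iy;

    // Physical UV = that page's current origin, plus the fractional
    // position within the page, normalized to the physical texture.
    PageEntry e = gPageTable[iy][ix];
    return { (e.physX + fx * kPageSize) / kPhysicalSize,
             (e.physY + fy * kPageSize) / kPhysicalSize };
}
// On the GPU the indirection table would itself be a texture, so this
// becomes one extra fetch per fragment rather than an array lookup.
```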
We’ll continue to work our way through this presentation tomorrow.