John Carmack 2011 Keynote Annotated:
Part 1 of 3

By Shamus Posted Monday Aug 8, 2011

Filed under: Programming 63 comments

On Friday, August 5th, John Carmack gave the keynote address for Quakecon 2011. He does this every year, and always has interesting and important things to say about the industry.

This year, I thought I’d give my own commentary on what he said. His talk lasted for about an hour and a half, so there’s a lot of ground to cover here. I apologize for the awkwardness of this. Ideally there would be a way to seamlessly have my comments available during his talk, or a link to my comments, but instead we have to do this the other way. This might be a bit of a pain to watch. I’d suggest simply watching the whole thing, rather than trying to pause-and-play your way through it, but whatever works.

First is the talk in full, followed by my comments, with links to the individual timestamps in question.


Link (YouTube)

Before we begin: Man, that thumbnail view of Carmack is really unfortunate. The guy spent an hour and a half looking calm, measured, and thoughtful, and the thumbnail managed to capture this expression of wacky, comedic bafflement.

3:45

It’s interesting that he mentioned Skyrim. Remember that ZeniMax now owns id Software. ZeniMax also owns Bethesda, so he’s talking about a product from a sister development studio.

I’ve belabored in the past the systemic development problems I see inside of Bethesda with regard to technology, but the basic list is:

  1. Bugs, or dysfunctional QA process. I don’t know what it is, but their games are always wonky and unstable.
  2. Inefficient coding. The system specs for Oblivion were about a graphics card generation ahead of where they should have been, as Oldblivion demonstrated.
  3. Haphazard art assets. Oblivion had 3,000 polygon boulders and 28 polygon tufts of grass. This is a waste even by 2011 standards, and was criminal in 2006. In the same game, they were using 128×128 textures for character faces, which was shockingly low for the times. This could be a management problem. It could be a tools problem. It could be a problem in the company culture. I don’t know, but the behavior has persisted through several games.
  4. Horrible development tools. The Elder Scrolls Construction Set (the editor for Oblivion) is just a terrible tool. Yes, it can DO everything it needs to do, but the interface is horrible, it eats a ton of resources, and it’s agonizingly slow and unresponsive. Compare this with Doom 3, where the editing process consists of opening up the console, typing “ed”, and immediately being able to edit the world seamlessly in a responsive, intuitive environment.

The point is, the things that Bethesda does poorly are things that id Software does exceptionally well. (Probably better than anyone else.) id Software nails the system specs (sometimes they’re higher than what might be ideal in a market sense, but they’re never higher than they need to be from an engineering standpoint), they have incredible tools, and their software is usually quite stable.

I don’t know how much cooperation or technology sharing we’ll see between these two companies, but I dream of a day where I can get Bethesda gameplay and id technology. Hopefully the next game (after Skyrim) will have some id Tech under the hood.

7:20

When John Carmack says something is “dismayingly complicated”, it’s like Lance Armstrong saying something is “exhaustingly strenuous”, or Bill Gates calling something “bankruptingly expensive”.

The megatexture tech does strike me as being that complicated, although it would have been a lot easier to pull off if this were a world where they could still ignore the consoles and go PC-exclusive. A lot of the complexity and effort seems to have centered on the challenge of getting the technology to work in the slow-reading, low-memory environment of the consoles.

I’ll explain megatextures in more detail below.

9:40

“Like any good engineer” – I think this problem extends beyond engineers. I call this “artistic myopia”, and I suffer from it now and again. It’s really easy to examine a piece of work in such detail that all you can see are the flaws, and you end up burning a ton of time polishing things that only a tiny fraction of players (most of whom will be fellow industry artists, scrutinizing your work) will ever notice. It’s kind of like how photographers and models look at photos and all they see are pores, wrinkles, and skin blemishes, and end up photoshopping them out, when most viewers will be too caught up looking at the beautiful face to notice. It takes a good bit of mental discipline to force yourself to see the work as the consumer would see it. You need to be able to switch from “professional view” to “consumer view”, and know when it’s time to stop fussing with it.

If you manage to do this, please tell me how.

10:35

Here he begins his talk on the megatexturing system, which is the technological “hook” of this game. Here is what megatexturing is all about, as far as I’ve been able to work out:

In all other games, there are separate and distinct texture maps for all the various bits of scenery. I mentioned Oblivion before, where they would have a 128×128 texture for each face in the scene. Here is a 128×128 texture, just to give you an idea of how little data that is:

Note that this is a random face texture I found through Google. It’s not a texture from Oblivion.

In another part of the scene you’ll have a piece of fruit with another texture. And another for the bowl holding the fruit, the table holding the bowl, the silverware on the table, the furniture in the room, the clothes people are wearing, the walls, the windows, and so on. You can have hundreds of textures in play at any given time. If done well, objects will be textured according to how large they are, and how closely the player will examine them. Walls are huge, so they need lots of texture data. Silverware is tiny and you only see it at a distance, so it needs very little data. Faces are small, but the camera zooms in on them, so they need lots of data.

The problem is that in wide open spaces, you can end up with a lot of texture maps in memory. You have ten buildings in the distance. They’re too large to simply not render them, and their textures are immense. (Because when you get close, you need a ton of pixels. This is a building after all, and the player will expect to be able to walk right up to the front door without the building turning into a blurry mess.) Some people mistakenly think that mip-mapping solves this, but mips are used to make those huge textures look good when reduced to a small space, as on the buildings in the distance. Mipmaps can’t actually help with the problem that the base texture is gigantic and takes a ton of memory, whether you need it all or not.
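To make the memory point concrete, here is a quick back-of-envelope sketch of my own (not anything from the talk): a full mipmap chain stacks progressively smaller levels under the base texture, but the base level dominates the cost, so mips only add roughly a third on top. The gigantic base level still has to be paid for.

```python
# Rough illustration: memory cost of a square texture with a full mip
# chain. Each mip level is half the side length of the one above it.

def texture_bytes(size, bytes_per_texel=4, with_mips=True):
    """Total bytes for a square texture of side `size` (a power of two)."""
    total = 0
    while size >= 1:
        total += size * size * bytes_per_texel
        if not with_mips:
            break  # count only the base level
        size //= 2
    return total

base = texture_bytes(1024, with_mips=False)  # just the base level
full = texture_bytes(1024, with_mips=True)   # base + every mip level

print(base)  # 4194304 bytes (~4 MB)
print(full)  # 5592404 bytes (~33% more than the base alone)
```

The point of the numbers: mips make distant rendering look (and sample) better, but they sit on top of the huge base texture rather than replacing it.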

See, you can use 99.9% of the graphics memory with no penalty. You can fill the graphics card up with texture data as much as you like, but the moment your memory usage exceeds 100% – even by a little bit – your framerate is going to drop like a rock. You should not be surprised to find yourself in single digits when this happens.
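The “cliff” behaves something like this toy check (my own illustration; the 512 MB budget is an assumed number, not anything from the talk). Under budget, everything stays resident and rendering is cheap; even a tiny overshoot means textures get shuffled across the bus every frame.

```python
# Toy model of the video-memory cliff: residency is all-or-nothing per
# frame, so the sum of texture sizes either fits or it doesn't.

VRAM_BUDGET = 512 * 1024 * 1024  # assume a 512 MB card


def fits(texture_sizes_bytes):
    """True if every texture in the scene can stay resident at once."""
    return sum(texture_sizes_bytes) <= VRAM_BUDGET


scene = [4 * 1024 * 1024] * 128   # 128 four-MB textures = exactly 512 MB
print(fits(scene))                # True: right at the limit, no penalty
print(fits(scene + [1024]))       # False: one extra KB tips it over
```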

It’s very hard for artists to manage this texture data properly, because they generally don’t have the technical knowledge to understand the nuances of the problem. There’s a lot going on as the player swings their camera around and brings new elements into view and pushes other elements out of view. It’s hard to understand why one building will occlude the others behind it (thus removing them and their memory-sucking textures from the rendering) while a hill of the same size will not. It’s hard to look at the game and see where video memory might be going to waste. In the end you end up with your artists sweating over esoteric technical details, which is not what you want. Artists make art. Programmers make technology. Try to avoid requiring too much cross-discipline work, because programmers are usually rotten artists and artists have no patience for technological fussing that requires they understand the details of GPU memory management.

The other disadvantage of traditional texturing systems is that they involve texture tiling. The artist takes a picture of a brick wall, and repeats it over the face of a building. If there’s a great big crack in the bricks, or some graffiti, then that same bit of detail will be repeated every (say) 16 meters. You can make textures tile more often, which will make them more detailed at the expense of making them more repetitive. Or you can go the other way, and have less repetition at the expense of everything looking a bit blurrier. Adding more cracks, graffiti, water stains, scorch marks, or discolorations will make the wall much more interesting to look at, but it will also hurt you because the player will see that same bit of interesting detail being repeated again and again.
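The repetition falls straight out of the math. Here is a toy example of my own (the 16-meter tile and 256-texel texture are made-up numbers): a tiled texture maps world position to a texel by wrapping, so any distinctive detail reappears every tile.

```python
# Classic tiled texturing: world position wraps into a repeating UV,
# so the same texel comes back every `tile_meters` along the wall.

def tiled_texel(world_x, tile_meters=16.0, tex_size=256):
    """Map a position along a wall to a texel column in a repeating texture."""
    u = (world_x / tile_meters) % 1.0   # wrapped UV coordinate in [0, 1)
    return int(u * tex_size) % tex_size

# The same crack in the bricks shows up one tile (16 meters) later:
print(tiled_texel(3.0))    # 48
print(tiled_texel(19.0))   # 48 again: identical detail, repeated
```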

Megatexturing attempts to solve all of this by letting the artists paint directly onto the world, without worrying about texture memory or tiling at all. They paint everything at a fixed resolution. (So you don’t have the Oblivion problem where some nutter slaps a huge 1024×1024 texture onto a teaspoon, or some other nutter puts a dinky little 64×64 texture onto a wall. If you download enough user mods, you can probably see both of these problems in action.) The editor takes all of this high-resolution data and saves it to disk. They can put all the cracks and scorch marks and other detail anywhere they like, with no performance penalty or need to repeat the same detail elsewhere.

Carmack seems to describe the engine as having a single texture for the whole world, stored at varying resolutions depending on how close you are to any given bit. Josh likened it to Google Earth, which is a pretty good analogy. You can view Fresno in sharp detail without needing to have the data for Helsinki in memory. Like Google Earth, you can scroll from one location to another, and it will add detail where you’re headed and remove detail where you’ve been. Unlike Google Earth, Rage needs a lot more (concurrent) texture data, and it needs it in a tremendous hurry.

At run time, the game engine has to load in all of this data for the area around you. You end up with most of the scenery being drawn from a single texture, which is cut up into little bite-sized chunks. It sort of looks like the Minecraft texture:

[Image: quakecon2.jpg, the Minecraft texture atlas (256×256)]

Except in Rage, the megatexture is 4096×4096. (That’s 16 times wider than the example image you see above.) Unlike Minecraft, the individual chunks are constantly changing. Higher detail bits are pulled in as you get close to surfaces. The biggest hurdle isn’t making all of this work, but in getting the data off of the disk (especially if we’re dealing with a DVD or Blu-Ray) and into memory in a timely manner. A lot of Carmack’s talk centers around this challenge.
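Here is a heavily simplified sketch of the idea as I understand it (my own guess, not id’s implementation; the distance-to-mip rule and page size are invented for illustration): the huge texture is cut into 128×128 pages, and surfaces far from the camera can be satisfied from a coarser mip level, so far fewer unique pages need to be resident.

```python
# Crude virtual-texturing residency estimate: how many 128x128 pages
# must be loaded to texture a square region, given its distance?

PAGE = 128  # texels per page side

def pages_needed(region_texels, distance):
    """Pages covering a square region of the megatexture.

    `distance` is an integer number of meters; each doubling of distance
    drops one mip level (a made-up rule, just to show the shape of it).
    """
    mip = max(0, distance.bit_length() - 1)      # roughly log2(distance)
    effective = max(PAGE, region_texels >> mip)  # resolution actually sampled
    per_side = effective // PAGE
    return per_side * per_side

print(pages_needed(4096, 1))     # up close: 32*32 = 1024 pages resident
print(pages_needed(4096, 16))    # mid-distance: only 4 pages
print(pages_needed(4096, 1024))  # far away: a single coarse page
```

The streaming problem is then: as the player moves, the set of needed pages changes every frame, and the missing ones have to come off the disk before the blur becomes obvious.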

When complete, we have a system where the artists won’t have to worry about the technology at all. Well, except for…

13:20

…this.

The interesting thing is that now the artist is not limited by texture count or resolution as before (since those are now managed by the engine) but by surface area. Assuming I’m following his talk, the amount of texture data in use is dependent on how much surface area is being drawn. More importantly, each little bit of scenery pays a minimum cover charge to get into the scene. Maybe a one-meter crate uses a 128×128 patch of texture. However, the little lunchbox next to it also eats a 128×128 chunk, even though most of it is going to go to waste. Some of these tiny “lunchbox” items can be merged and share a 128×128 patch. But this will introduce a bunch of expenses that are not obvious to the busy artist. Making one of those “lunchboxes” more than a half meter wide will cause this object to nudge out its texture neighbors, suddenly hogging a patch to itself.
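The “cover charge” works out like this back-of-envelope calculation (my own, with made-up numbers; the texel density is an assumption): at a fixed world-space resolution, every object pays for whole 128×128 pages, so tiny props round up to a full page each.

```python
import math

# Per-object page cost at a fixed texel density. TEXELS_PER_METER is an
# assumed number, not anything quoted in the talk.
PAGE_TEXELS = 128 * 128
TEXELS_PER_METER = 256

def pages_for_area(area_m2):
    """Pages consumed by an object with the given surface area, rounded up."""
    texels = area_m2 * TEXELS_PER_METER ** 2
    return max(1, math.ceil(texels / PAGE_TEXELS))

print(pages_for_area(0.01))  # a lunchbox: still pays for 1 full page
print(pages_for_area(6.0))   # a one-meter crate (6 faces of 1 m^2): 24 pages
```

So the budget the artist has to watch is no longer texture count or resolution, it’s total surface area, with a rounding penalty on every small object that doesn’t share a page.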

I’d love to know how he handles the uv addressing.

We’ll continue to work our way through this presentation tomorrow.

 


 

63 thoughts on “John Carmack 2011 Keynote Annotated: Part 1 of 3”

  1. MrGamer says:

    Sadly, as a product of my age, I had not heard of John Carmack until recently. I feel a bit inspired myself by his words.

    How common is it for a game to use a Minecraft-style texture set?
    The light games I make use a similar system, but I never really saw it in mainstream games. Is it just inefficient on a larger scale?

    1. Kdansky says:

      It’s a matter of scale. Modern 3D AAA games install what on your drive? 5 to 10 GB of data, or even way more? Now guess how much of that is textures. I’d expect at least 50%, or way more for a big and diverse world such as WoW. Textures are very hard to reuse without the player immediately noticing, and very space-hungry to begin with, compared to lists of 3D vectors for models. Video isn’t used much any more (and it’s tiny: you can encode a solid hour of 1024p at 400 MB), and audio isn’t huge either compared to the sheer amount of textures you need.

      As for code: Executable code is ridiculously small. You can write naked engines within a few megabytes, easily.

      1. Kerin says:

        WoW’s actually not the best example because it reuses textures so gosh-darn often. It’s bordering on infuriating at times.

    2. mewse says:

      Many games use a minecraft style texture set (it’s typically called a “texture atlas”). Benefits include being able to draw more geometry in a single batch (since their textures are all shared in a single image). Disadvantages include bleeding occurring between the individual texture images in mipmaps, and lack of support for tiling.

      1. Alexander The 1st says:

        I wonder if Retro Studios does it differently then for Metroid Prime (Yes, this again – we can use Mass Effect 1 as well if you want, but that has loadevators, so while it uses the same concept, it’s not as well… :p): by deciding to use multiple texture atlases, they can swap them in and out when they require to, without dumping the *entire* game out to a loading screen.

  2. DanMan says:

    Reading through your world-building posts and listening to this keynote has really taught me a ton about how graphics work. Again, all my experience in programming has been web pages and desktop applications involving text boxes and buttons.

    I was browsing around the internet the other day and found this: Unlimited Detail. I thought from a technology standpoint it was pretty cool. It would be interesting to see what you (and others who actually have experience in coding games) think of the technology.

    1. Von Krieger says:

      The minecraft guy has two articles on that very thing. It’s not something new and there isn’t a whole lot you can actually do with that technology.

      For example you can’t really animate it.

      1. Pete says:

        I never really understood why that is such a big problem. I mean, why can’t you just use the existing tech for the characters and such and use voxels for the world?

        Also: http://www.youtube.com/watch?v=tkn6ubbp1SE

        Don’t get me wrong, I get that voxels are not the perfect cure-all. It just seems like a good thing to me that someone is experimenting with this.

        1. Kdansky says:

          There are a lot of better tries around, but UD throws up something very shady every few years. They are looking to be bought; it’s a scam.

          Last year’s version:
          http://www.youtube.com/watch?v=Q-ATtrImCx4

          Note that both videos are identical in features, despite 20 months between them. Lots of repeating, non-angled, non-rotated models, no animation or (dynamic) light effects. None of these issues are completely impossible, but UD shows no effort whatsoever in tackling them, but instead they are all about the outrageous claims.

          1. Exactly, I mean, they claim 100,000x more detail in the world than most games, but they never mention a tradeoff, or a downside, or really any reason why we should believe that statement. And that’s not a claim you can just throw out there without explanation.

        2. Ian says:

          I mean, why can’t you just use the existing tech for the characters and such and use voxels for the world?

          That’s already been done. A few examples include Novalogic’s Delta Force and Comanche series and the 1999 PC game, Outcast. Other games, such as Crysis, use voxels to some degree (though I don’t believe Crysis uses voxels in its renderer).

          The main issue with the “unlimited detail” video is that it’s extremely deceptive. They try to imply that every single object in the world can be done with voxels with little to no penalty, which is false.

      2. Dragomok says:

        The posts Von Krieger is talking about are (in chronological order) here and here.

  3. Chr0n1x says:

    Sander van Rossen has a series of posts on virtual texturing/megatextures on his blog: http://sandervanrossen.blogspot.com/

    That has some more details on UV addressing and how everything works together. An interesting (albeit long) read on the topic, with references to RAGE literature and previous Carmack works.

  4. Infinitron says:

    John Carmack seems to be the only programmer in the industry who is creative at an academic level – not just an expert coder, but also somebody who could contribute at any Computer Graphics academic conference.
    Am I wrong? Has anybody ever heard of another game programmer who develops revolutionary algorithms and publishes papers?

    Of course, the tradeoff here is that iD Software games are generally uninteresting tech demos.

    1. Alexander The 1st says:

      Of course, the tradeoff here is that iD Software games are generally uninteresting tech demos.

      These are my thoughts about Rage. Of all I’ve heard about it, I’ve heard no narrative elements, nor any gameplay innovations, just graphical details. And it looks remarkably grey.

      That said, the guys at Retro Studios are probably up there as well – consider that their loading technology doesn’t seem to be replicated by anyone else in the industry…like seriously. Delayed games? Sure, but at least I wasn’t sitting between levels waiting on a loading screen.

      Now if only Carmack and the developers at Retro combined forces to make a new Descent game…I’d buy it on launch day.

    2. Kyte says:

      Well, Carmack & iD are known for believing story & plot are just icing on the gameplay cake, so it’s not particularly surprising. They basically specialize in tech demo games. Even the gameplay isn’t particularly innovative; Carmack’s most iconic games (Commander Keen, Wolfenstein 3D, Doom & Quake) were basically taking a game that already existed on consoles and making it viable on cheap PCs.

      1. Jeff #3 says:

        I’m pretty sure there was nothing like Wolf3D, Doom, and Quake on consoles.

        Commander Keen was most definitely an attempt to get good console-style platforming on a PC.

      2. Not to mention that Doom and Quake were the quintessential first person shooters in terms of competitive content. I do like my story and my characterization and so on, but solid gameplay is always job #1, and in particular competitive or cooperative multiplayer games will always have far more depth of play and therefore value. It’d just be nice to have a combination. Blizzard has that pretty much nailed down, even if I hate Diablo and WoW as games: Deep gameplay combined with industry-leading story. See Starcraft, Warcraft, WoW, Diablo, etc.

  5. Hi, I’m the guy from the blog mentioned in a previous post.

    In all the time that I’ve experimented with virtual texturing there have been 2 giant problems with this approach: latency and storage.

    Latency is the time it takes before your 128×128 texture piece is loaded into memory, and it’s already hard to do this quickly enough when it comes from the hard drive. It’s near impossible when it comes from a DVD or, worse, Blu-ray.

    Storage is a much harder problem. Carmack said that uncompressed, the texture data in Rage was 100 gigabytes. Yes. 100. And they’ve gone through a lot of hoops to get it that low. Hell, they’ve made their levels relatively small to get it down to 100.
    Even worse, to get that 100 GB on DVD they’ve had to seriously compress it, to the point where a lot of the detail in that extremely detailed world got … blurry.

    And I haven’t even talked about the CPU overhead ..

    In my opinion virtual texturing is a great technology, but the world isn’t ready for it yet. The moment terabytes are thought of the way we think about hundreds of megabytes now, virtual texturing will be used all over the place. Until then, it’ll have its place, but it’ll have more downsides than upsides IMHO :(
    (most importantly, it’s very *expensive* technology)

    Obviously Id software did a great job working around those downsides.

    Unless someone figures out a way around the storage limitations at least, like using procedural textures.

    1. I’m pretty sure that ‘hd photo compression’ is some sort of lossless or near lossless compression, and didn’t make anything any more blurry

      1. Carmack himself said in his keynote that Rage looks more blurry close up compared to other titles that are out there.

  6. 4096×4096 actually doesn’t seem like a lot, I wonder if characters and particles are included in this. It’d seem rather wasteful if so.

    Unrelated: I made a pictorial commentary on Mass Effect 2. It’s heavily metaphorical – obviously – so critique is requested, but not required.

    1. Dragomok says:

      Regarding unrelated: that, good sir, is simply the best caption on deviantART I’ve seen so far.

      I would like to say same for that wonderful and accurate allegorical image, but this is the first of that kind I’ve ever seen on that site.

  7. Kdansky says:

    I rarely watch youtube videos of speeches, especially not if they are 90 minutes long. But I was sad when this one ended. Carmack talks well enough, and what he says is fascinating, at least for a software engineer like me.

    When I first heard of these megatextures, I thought that they would not really improve picture quality all that much. But have you seen the new Rage trailer in HD? It just looks unbelievably crisp.

  8. Piflik says:

    AFAIK, the thing about the Megatextures is actually to only load part of it at any given time (texture streaming). With standard texturing you can (and people do) create one single Texture for multiple objects (called Texture-Atlas), where you have different materials as tiling textures on separate sections of the image, so you have to only load that one image for the whole scene (just like your Minecraft example). This is mainly used for buildings (or walls/floors/ceilings in the current hose-level-design…); you have different models using the same image as the texture, thus reducing the needed draw-calls.

    Now, these textures can get quite big and they still have to be loaded into memory as a whole, and when you approach texture sizes like the Megatextures, that is simply not feasible (MTs used for Virtual Texturing in Rage and other idTech5 games can get up to 128000px²; ETQW had 32k MTs and that was considered low-res). So you only load a little section of it (for the close parts of the level) and render a low-res version of the MT on geometry far away.

    1. Wtrmute says:

      I just want to mention that it’s funny that you wrote “px²”; technically, pixels are already an area measure (formally, as a fraction of the area of your screen), so seeing it squared was unexpected. Of course, I totally commend you for remembering to put in the squared sign on a “linear” measure being composed into an area, though…

      1. Piflik says:

        I was simply too lazy to write 128000×128000, so I just ²-ed it :D

  9. Newbie says:

    What’s Rutskarn doing in public?

    1. Ayegill says:

      My thoughts exactly

    2. Halfling says:

      I think that is really Rutskarn’s long lost father.

      1. Newbie says:

        That suggests Rutskarn was born via normal means…

  10. Robert Conley says:

    Just to blow everybody mind further, Carmack is a rocket scientist as well. (http://www.armadilloaerospace.com/n.x/Armadillo/Home) One of the world’s leading experts on designing and building rockets that can take off and land vertically. (i.e. a tail sitter like the rockets in the old serials and pulps)

    1. Dragomok says:

      I’ve also heard that he was the first person to make a car engine achieve 100 units of horsepower. Sadly, I can’t find any really reliable sources which could confirm that.

      1. Lindsay says:

        He did what?

  11. ccesarano says:

    I can’t watch the video from work, so I’m getting pure Shamus commentary here. Nonetheless, John Carmack is one programmer that I admire, despite the guy constantly talking over my head. He’s a real Technomancer of the modern age, or so it often enough seems.

    When you were first beginning to discuss the texture issues, my mind actually jumped to CSS Sprites. While John Carmack’s ideas went a different direction, it was interesting to see the Minecraft textures look to run on a similar idea.

    I’m only familiar with handling graphics technology as a web developer, but I imagine some of the restrictions are still valid. CSS Sprites are better because you limit the site to a single HTTP request, and even though you’re loading one large file, you are loading a single file that is typically still smaller in data size than several smaller images. Reading data off of a disk, I imagine trying to stream a single image file is better than multiple. Hell, you can see this when loading several images in a program like Paint.NET or Adobe Photoshop. One giant image may take a while, but loading 10 smaller images will take even more time to load and take up more space on the hard drive.

    I had never thought of the tech being applied to textures before, so I find it interesting to see Minecraft using such a system (or one similar to it). However, in web development we don’t always have to worry about what the user is looking at or how far they are, draw distances, etc. So I’m not sure if the technology is completely appropriate.

  12. Ander the Halfling Rogue says:

    Key to “consumer view”: Play-test it up like Valve. Listen to enough consumers, and A. You’ll get to see their perspective, or B. It won’t matter, ’cause you can take their word for it.

    1. Kerin says:

      I don’t think it’s actually that simple – there’s a lot of methodology that goes into designing your test groups properly, because if you just grab a bunch of people and ask their opinions they’ll more often than not be wrong. There’s a pretty sizeable gap between what the people want and what they will enjoy.

      1. Nick Bell says:

        For example, Valve doesn’t so much ask people what they want, but simply watch them AS they play. It is the actual gameplay that shows flaws and exposes problems, even if the player is happy at the end.

      2. Jeff says:

        There’s a gap that exists between what people think they want, and what they actually want.

        1. decius says:

          And the people who think the most about what they want are further off than the people who don’t think about it at all.

  13. Tesh says:

    Speaking as a game artist, that megatexturing tech sounds… pretty awesome. Remembering the niggling little details like the overlarge lunchbox might be onerous, but I’ve often argued that if the system is clearly documented with known variables like that lunchbox size, I can work with it. If my tools are rock solid, I can work wonders with them. If they keep changing, well, the art will suffer.

    Phrased another way, if all I have is MSPaint, but I know how it works, I’ll handle the artistic side just fine. If, during the project development, we shift from MSPaint to PaintPlus to Photoshop, sure, the tools progressively get better, but we’re going to have some consistency issues and likely, a lot of reworking and even tech problems.

    Oh, and on the “stepping back and seeing with different eyes” question, I’m not sure a professional will ever be able to look at things exactly like a fresh newbie (I have a degree in computer animation, and movie watching will never be the same), but for me, I try to maintain objectivity by taking in a lot of different games and art styles. It doesn’t change what I know about my art so much as give me a greater data set to draw comparisons with and see what other people might be seeing.

  14. Perseus says:

    If you treat the environment as a single texture, wouldn’t all the objects in it need to be static?

    1. Robyrt says:

      Not necessarily – as long as you have all the possible textures, you are OK. Check out the Minecraft texture map above, which has 5 frames of animation for lava.

  15. SoldierHawk says:

    I haven’t finished yet, but I had to pause to leave a comment. I LOVE this! I’m by no means a programer, and I certainly don’t understand the finer points of what you’re talking about sometimes, but you manage to make something that fascinates me from a distance very accessible. I love it when you write articles like this. :D Thank you!

  16. Jeff says:

    “Oblivion had 3,000 polygon boulders and 28 polygon tufts of grass. This is a waste even by 2011 standards, and was criminal in 2006. In the same game, they were using 128×128 textures for character faces, which was shockingly low for the times.”
    Is it really a surprise that Bethesda invests their time into making sure their pretty sandboxes have nice rocks and grass while neglecting their characters (and writing)?

    “It's kind of like how photographers and models look at photos and all they see are pores, wrinkles, and skin blemishes, and end up photoshopping them out, when most viewers will be too caught up looking at the beautiful face to notice.”
    Am I the only one who thinks a beautiful model looks more beautiful when you can see the tiny pores, wrinkles, and skin blemishes? It makes the picture look more “real”, sharper – as opposed to an airbrushed blurry mess.

    1. Shamus says:

      Same here. Once in a while I’ll see a pretty face on a Facebook profile or blog and think, “Oh! She’s pretty.” And then I realize she just looks HEALTHY, because she’s not under-weight and over-photoshopped.

      I wonder if this is an age thing? While they didn’t have rampant photoshopping when I was in my 20’s, I do recall being more attracted to the “full makeup” look at the time.

      1. Blake says:

        I’m in my mid 20’s, never been a fan of the full makeup/photoshop look.
        Give me real people any day.

        1. Cuthalion says:

          This. And I’m in my early 20’s.

          1. Jeff says:

            I’m in my late 20s, for the record. Maybe the advent of HD has made photoshopped/airbrushed pictures look far too fake?

            1. Irridium says:

              Hell, I just turned 20 two weeks ago. I’ll take real over the other stuff any day.

  17. “‘Like any good engineer’ – I think this problem … and know when it’s time to stop fussing with it.
    If you manage to do this, please tell me how.”

    I’m sure your question may have been facetious, and that the real meat of the post is the wonderful words of John Carmack…but since you asked :P

    I’m an illustrator/storyteller, and as far as my personal experience goes, the only person I’ve met with the ability to switch from professional view to consumer view is myself (there must be others, of course; I have just never personally met them).
    When going through film and art school it was both a blessing and a curse
    My work was almost always among the top 5 of my classes
    My classmates would often seek my advice on how to improve a specific piece
    And I could never bring myself to care about my work in the least
    To me it was just a job
    It may sound like I’m building myself up, but I would never recommend an artist to work the way I work
    When you’re seeing things from the point of view of both the creator AND the viewer, a portion of the magic and enjoyment is lost
    I remove my personal view from my work and instead visualize what the audience will most likely see and enjoy; once I hold that in my mind’s eye it’s only a matter of putting it to paper
    I end up creating for an imaginary audience and not myself
    If your goal is to have fun and be creative, then make the art that appeals to you and an audience will find you
    If your goal is to play to the crowd, then stop making art for yourself and you will always have a job but no investment in what you create
    This is more a warning than an actual description of my process, since as I said in the beginning
    “I’m sure your question may have been facetious”

    Cheers
    Snowball

    1. Then again, you make better stuff if you can look at it from the perspective of the viewer. I’d rather my work be better and lose some magic.

      1. okey dokey then

        The Process

        Step 1: Never create anything unless you can picture it clearly in your mind, no doodling or notes. Everything must stay in your head until you have examined every inch. In fact you should see it so clearly that you could describe it easily with as few words as possible. (this is a big one!)

        Step 2: While it’s in your head, apply every variable you can think of. What would this character look like if they were a blonde, what if the king were a queen, what if the dog were a cat.

        Step 3: Examine the common trends of media and entertainment. The goal is not to copy so much as to be aware of what has the most appeal. A lot of this is done subconsciously, but you cannot afford to allow your brain to withhold information. Train your subconscious to be conscious. This involves quite a lot of research and social study.

        Step 4: Sit yourself down to work with no distractions. No music, no movies, no TV. Work as quietly as you can, since you’ll want to be able to “hear” your brain more clearly.

        Step 5: Understand each step towards the finished product. Know what the characters thoughts are, know what their bones look like. See things inside and out.

        Step 6: Once you start, do not work on anything else. You can make multiple passes with breaks and whatnot, but do not work on other work – only the one you have chosen to work on first.

        Step 7: Work until it is what you see in your head. If you’ve followed the previous steps it should already be complete. It’s just a matter of moving it from inside your head to outside. If you want to change something at this point, it’s already too late; stick to the original.

        So you can see that the majority of the process is planning and setup. Strangely the simplest part is what most people would call “the work”.
        I’ve produced illustrations and storyboards using this process and now the pre-process takes very little time and the production is extremely fast (I’m often finished well before anyone else)

        Good luck
        Snowball

  18. Wtrmute says:

    Shamus, regarding the bit at 13:20, I didn’t get the impression that the problem has anything to do with area at all. Rather, the issue was when artists tried to reuse modular pieces of geometry/texture (like your Bethesda boulders and grasses) and the system kind of choked up on that, because it couldn’t package those modular pieces of texture with the ones that went “near” it, as that concept stopped making sense.

    What I really thought interesting was the part about static analysis and scripting, but I’ll leave that for the post when you actually discuss these segments tomorrow or in whichever part of this series you do.

  19. Chevluh says:

    The ur-link on sparse virtual textures aka megatextures:
    http://silverspaceship.com/src/svt/
    Though some details of id’s implementation probably differ, it’s certainly pretty close.
    Also Lionhead’s implementation, complete with paper:
    http://www.youtube.com/watch?v=M04SMNkTx9E1

    In that implementation UVs are 16-bit integers (or fixed point, I guess, depending on how you look at it). Then to find which block you need for a given UV pair, it’s sorta stupidly elegant: you just cut each coord into two parts. The upper bits are the indices of the texture block you want inside a 2D array containing all the blocks for this level, and the remaining bits are the UV coords inside that texture block. So if you take the 8 most significant bits of U and V you’ve got the indices for the 256×256 blocks level (aka the 8th subdivision level), the 6 most significant for the 64×64 (the 6th), and so on. If you want more precision, you use more bits for UVs, of course.
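    The bit-splitting described above is simple enough to sketch in code. This is only an illustrative reimplementation of the scheme as the comment describes it, not id’s or Lionhead’s actual code; the struct and function names are my own:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Illustrative sketch (names are hypothetical): UVs are 16-bit
    // integers, the top `level` bits of each coord index a block in a
    // 2^level x 2^level grid for that subdivision level, and the low
    // bits address the texel inside that block.
    struct BlockAddress {
        uint32_t blockX, blockY; // which block in the grid for this level
        uint32_t innerU, innerV; // texel coords inside that block
    };

    BlockAddress lookupBlock(uint16_t u, uint16_t v, unsigned level) {
        const unsigned innerBits = 16u - level;       // bits left for intra-block coords
        const uint32_t mask = (1u << innerBits) - 1u; // selects those low bits
        BlockAddress a;
        a.blockX = static_cast<uint32_t>(u) >> innerBits; // most significant `level` bits
        a.blockY = static_cast<uint32_t>(v) >> innerBits;
        a.innerU = u & mask;                              // remaining low bits
        a.innerV = v & mask;
        return a;
    }
    ```

    So at level 8, `lookupBlock(0xAB12, 0x34CD, 8)` selects block (0xAB, 0x34) and texel (0x12, 0xCD) inside it, matching the “8 most significant bits” rule in the comment above.
    
    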

    Then for authoring you just handle UVs in whatever way you want for a given object then the texture packer will translate them to wherever in the megatexture the texture data for that object was put at texture compilation time.

    Also yeah, the system doesn’t really need a constant resolution for a given area (you can perfectly give a small object UVs that stretch half the megatexture and it’ll work fine), but that’s probably what they found convenient. The real problem is that ideally you’ll want texture blocks that are close in-game to be packed close in the megatexture, so as to avoid calling blocks all over the place (which, through mipmapping, also means calling fewer blocks in general), but that’s not trivial (since you’re packing textures that were over an arbitrary 3D world into a 2D structure).
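    As an aside on the locality point above: one standard trick for keeping blocks that are close in a 2D layout close in linear storage is Z-order (Morton) interleaving. Nothing here claims id uses it, and it only addresses the 2D-to-linear half of the problem; the genuinely hard part the comment describes (mapping 3D world proximity into the 2D megatexture) is a separate packing problem. A minimal sketch:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Interleave the bits of x and y (Morton / Z-order curve): block
    // coords that are near each other in 2D map to offsets that are
    // near each other in the 1D ordering, so a 2x2 neighborhood of
    // blocks lands on four consecutive offsets.
    uint32_t morton2D(uint16_t x, uint16_t y) {
        uint32_t m = 0;
        for (unsigned i = 0; i < 16; ++i) {
            m |= static_cast<uint32_t>((x >> i) & 1u) << (2 * i);     // even bits from x
            m |= static_cast<uint32_t>((y >> i) & 1u) << (2 * i + 1); // odd bits from y
        }
        return m;
    }
    ```

    Blocks (0,0), (1,0), (0,1), (1,1) map to offsets 0, 1, 2, 3, so any aligned 2×2 neighborhood stays contiguous, which also plays nicely with the mipmapping point above.
    
    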

  20. Dwip says:

    Well, the first 20 minutes or so was interesting (I’ll watch the rest when Shamus does his commenting) in a “Man, my brain is too puny for this discussion” sort of way. As skeptical as I was about the 128×128 chunks thing, I decided to Google some screenshots, and man, that looks pretty amazing. On the other hand, it’s apparently 21-23 gigs of my hard drive space worth of amazing, which is gigantic.

    On another probably random side note, a couple things about the Bethesda zinging, since it’s my one peeve about you, Shamus:

    – Wouldn’t a better standard of comparison to modern games be Fallout 3 (or maybe F:NV) and not Oblivion? Yeah, some of the same texture and asset problems, but it’s a way more efficient version of the engine with a lot of the bugs ironed out. It of course remains to be seen what Skyrim will bring us.

    – So, would you mind terribly unpacking what’s so brilliant about the Doom 3 editor? Speaking as a guy who’s spent the better part of 5 years inside the CS, but who has no experience whatsoever with Doom 3 or its editor (Googling some tutorial vids is about it), I’m kind of curious. This probably won’t sound like it, but I really am.

    I mean, yeah, the CS has its faults (I could go on), but it does get the job done, and done pretty well at the end of the day, which is to say that, unlike any given Bioware editor, perhaps, it actually works, and uses less resources in the process than the IE window I’m typing this in.

    Were it me, if I was angsting about the CS, I’d start talking about things like BSA unpacking, dirty edits, Windows Vista, and load order rather than the interface, which is at worst pretty standard and at best better than the monstrosity that was the Morrowind CS (insert cane shaking here). I look at some Doom 3 editor videos, and some guy is spending 10 minutes on how to move the camera around an interface that looks for all the world like a worse version of what I deal with in the Oblivion CS, and all I can think of is how soul crushing and Blenderesque that must be.

    But I don’t really know, and, like I said, I’m really kind of curious – you’ve bitched about the CS, you’ve bitched about Valve’s tools, but what does an editor that Shamus actually likes look like?

    1. Shamus says:

      Yes, Oblivion is worse than Fallout 3, which is exactly why I singled it as an example. I was trying to highlight the problems in a way that could be understood by non-technical people, and it’s a lot easier to see a problem when it’s more pronounced. If you’re trying to describe what a broken bone is, it’s easier to point to a guy with his femur sticking out than pointing at a guy with a limp and a blurry X-ray.

      The Doom editor IS the game, you don’t have to run an editor on TOP of the game. So, you see exactly what you’re getting, and if you want to test your work it takes seconds, not minutes.

      “Were it me, if I was angsting about the CS, I'd start talking about things like BSA unpacking, dirty edits, Windows Vista, and load order rather than the interface, which is at worst pretty standard and at best better than the monstrosity that was the Morrowind CS (insert cane shaking here). ”

      Heh. Never tried the Morrowind editor. I didn’t bring up those other things because they’re pretty abstract for most of the people reading this. (I wasn’t even aware there were Windows Vista problems.) The point was to say that id makes solid tools that allow for quick iteration, which would be a boon for the Bethesda art team.

      1. Dwip says:

        “Yes, Oblivion is worse than Fallout 3, which is exactly why I singled it as an example.”

        Fair enough, and you’re right as far as it goes, though in some ways Fallout 3 is actually worse – An engine that’s much improved over Oblivion, but Oblivion texture sizes. Doesn’t do anybody any favors. Bethesda’s really strange in that way.

        I do think it worth noting, though, that at least in regards to engine issues, it’s not like Bethesda’s stood still here. *shrug*

        “The Doom editor IS the game, you don't have to run an editor on TOP of the game. So, you see exactly what you're getting, and if you want to test your work it takes seconds, not minutes.”

        Would something like this video be a fairly adequate representation of the Doom 3 editor? Because I hear what you’re saying, and I used stuff like that in the 90s and it was pretty great, but watching that, I see a guy drawing red boxes in what looks like the CS with a worse camera. I’d rather just throw down some statics and go home.

        Too, and here I’m probably just being too esoteric, while I take the point about iterations, I’m not wholly sold on the point as related to Oblivion and its spawn. I’ve tried both small incremental changes with plenty of in-game time for testing, and I’ve tried writing large swathes (say an entire quest) followed by testing, and leaving aside my problem of treating testing time as a good excuse to randomly bunnyhop all over fences and roofs and things, it’s generally more efficient to do large chunks and then test in large chunks given that A) static placement is really easy; B) quests pretty much need to spring forth fully-formed in order to do anything useful. If I was drawing boxes and mapping textures and things like Doom, I’d probably feel different.

        Kind of an apples and oranges thing, I guess.

        Shortly, re: the rest:

        – As far as the Morrowind CS, think the Oblivion CS with a generation less polishing and absolutely horrendous interface choices. Also no documentation of any sort, which made those early years…exciting. Oblivion’s CS is a vast leap forward in that sense.

        – As far as the technical difficulties, that’s where the real issues for people are. The fact that you have to burn a minute or so for the game to launch every time you test has nothing on the fact that if you want to even get in the door you’re going to need to find a third party BSA unpacker, never mind TES4Edit, Wrye Bash, OBSE, and what have you. Going to guess Doom 3 doesn’t have that issue. The Oblivion CS in particular has all manner of issues with Vista (UAC practically kills the whole game, but that’s another thing entirely) – Want to create a new esp? You may or may not be out of luck. Adding esp details in the CS? Will corrupt your file and destroy your work. Actor previews? Crash the editor. Compared to that, the interface has nothing to trouble me.

        It’s also STILL a better experience than hand-editing XML files in Civ IV, but I digress (more). And at least it’s not Daggerfall.

        Anyway, that was probably far too lengthy a digression, and I do take your point.

  21. StranaMente says:

    Wow…
    And I thought I caught a glimpse of what he said during his speech.
    Now I get that maybe I understood less than 10% of what he said.
    It’s clear that he knows what he’s talking about, but due to time constraints he wasn’t able to articulate all the things he wanted to talk about.
    On the other hand, you too really know what you’re talking about and know your audience (and have no time constraint), so this is way clearer.
    I’m glad you decided to do it.

    I was wondering, by the way, if you’re going to address the less technical (and more general) problems he talked about briefly here and there.

  22. Henkye says:

    Thanks for the annotation; it’s a pleasure to watch Carmack as well as to read this.
    But I can’t agree with 3).
    I played Oblivion back when the 8800GT was new, and the graphics were still awesome. Even the HW demands seemed completely acceptable (compared to Gothic III, for example). Too bad Oblivion sucked at gameplay, but that’s a different story…
    I also can’t agree with the line “and their software is usually quite stable.”
    Their software is ROCK SOLID. I played Q1-4, Doom III & RTCW. They never crashed once. As far as I know, only Blizzard can be compared to them.

  23. MaxDZ8 says:

    Thank you very much Shamus for posting this. I’ve been away from Carmack’s talks for a while but I must say I’ve found he’s pretty much the same guy as I recalled back when he was using a .plan.

    I don’t think he’s just smart. I also admire his passion for what he does and his drive to explain what he thinks to other people in a way they can follow. I’ve had no trouble following this first section – I wish they had recorded it a bit louder, however!

    The first post here is from a guy who had not heard about Carmack “until recently”. You have been missing a lot.

  24. pneuma08 says:

    “Haphazard art assets” reminds me of the time when some friends of mine were building a game for school, and they needed a model of a plane which would be viewed in third person from far away. The guy who agreed to make the model dropped out of contact for a few months, then delivered a model so detailed it had a button on the pocket of the pilot.

    Some people just don’t know how to look beyond themselves, at the bigger picture.

  25. Andrey says:

    John Carmack mentions the PVS-Studio tool in his talk.

    PVS-Studio is a static analyzer that detects errors in the source code of C/C++/C++0x applications. There are 3 sets of rules included in PVS-Studio:

    1. Diagnosis of 64-bit errors (Viva64)

    2. Diagnosis of parallel errors (VivaMP)

    3. General-purpose diagnosis

    More details about this C/C++ analyzer: http://www.viva64.com/en/pvs-studio/

    Download page: http://www.viva64.com/en/pvs-studio-download/

    Free keys for open-source projects: http://www.viva64.com/en/b/0092/
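    For a flavor of what the first rule set hunts for, here’s a hypothetical illustration (my own, not taken from PVS-Studio’s docs) of the classic 64-bit portability bug such diagnostics flag: a `size_t` squeezed through a 32-bit integer, which silently loses its high bits once sizes pass 4 GB on a 64-bit build.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical example of a 64-bit portability bug: stashing a
    // size_t index in a 32-bit integer compiles cleanly but discards
    // the high bits for values above 0xFFFFFFFF. A static analyzer's
    // 64-bit rule set is designed to flag exactly this conversion.
    std::size_t storeIndex32(std::size_t n) {
        uint32_t truncated = static_cast<uint32_t>(n); // high bits discarded here
        return truncated;
    }
    ```

    Small values round-trip fine, which is why this class of bug survives testing on small data sets and only bites in production on huge inputs.
    
    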
