Quakecon Keynote 2013 Annotated: Part 1

By Shamus Posted Sunday Aug 4, 2013

Filed under: Programming 49 comments


Link (YouTube)

As in years past, I thought I’d step through John Carmack’s keynote and translate some of it into non-programmer talk, because I think there’s a lot here that’s worth talking about. Note that I am not an expert on all these things. I’m just drawing from my increasingly dusty experience as a programmer who long ago specialized in graphics.

The above embed is from IGN. It was the best, most reliable version I could find on YouTube as of this writing. The times given below are approximate.

02:00 “It’s amazing how close [the Xbox One and the PS4] are in capabilities.”

This quote was picked up by all the gaming sites so there’s not too much to say about it. I suppose this is a really juicy quote for people looking to watch a flamewar between the two major groups of console adherents, but it’s not really a controversial statement. The two machines are very similar, and that similarity was probably the result of both companies hunting for the sweet spot on the power / cost tradeoff when they went shopping for processors for their next-gen consoles.

03:00 “Larabee”

John is talking about the Larabee architecture Intel was working on back in 2010 or so.

Right now, your GPU (your graphics card) and your CPU (the main processor in your computer) are very different beasts. The GPU is specialized for the kinds of things you do in graphics processing. As a primitive example: you spend a LOT of time processing groups of 3 numbers. They’re what puts the “3” in “3D graphics”. If you want to calculate the position of something, you need to do some kind of math on those X, Y, Z values. Your GPU is designed with multiple math pipelines sitting beside each other so it can do those calculations concurrently. If you want to know X*2, Y*2, and Z*2, it can do all three math operations in the time it would take a standard CPU to do one. This sort of situation happens constantly in graphics processing, but not terribly often in regular processing.

For example, your regular CPU might be doing X + (Y * (Z + 1)). Or whatever. GPU architecture couldn’t help you with this, because the operations can’t be done in parallel. You can’t do the addition with X until you have Y * (Z + 1), and you can’t do that multiplication until you’re done computing Z + 1, so the three operations have to be done in order.

A GPU is amazing at doing 3D rendering, but you would have a devil of a time putting all of its power to use if you tried to use it for general computing. The reverse is true for a CPU. A CPU is a railroad track that lets a single train pass very quickly. A GPU is a three (actually way more than three) lane highway where cars can travel side-by-side, so more cars can flow through in any given moment. You wouldn’t make the train go any faster by giving it more lanes, because the cars are all linked together in a chain.
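If you want to see what “several math operations at once” looks like in practice, here is a minimal sketch using the SSE intrinsics available on ordinary x86 CPUs. It’s a pale imitation of what a GPU does across thousands of pixels at a time, and the numbers are made up, but the shape of the idea is the same: the wide multiply handles X, Y, and Z in one instruction, while the chained expression has to be evaluated one step at a time no matter how many lanes you have.

#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

int main() {
    // One "wide" register holds X, Y, Z (plus a spare slot).
    __m128 pos   = _mm_set_ps(0.0f, 3.0f, 2.0f, 1.0f); // X=1, Y=2, Z=3
    __m128 scale = _mm_set1_ps(2.0f);

    // X*2, Y*2 and Z*2 computed by a single instruction.
    __m128 doubled = _mm_mul_ps(pos, scale);

    float out[4];
    _mm_storeu_ps(out, doubled);
    std::printf("%.1f %.1f %.1f\n", out[0], out[1], out[2]); // 2.0 4.0 6.0

    // By contrast, X + (Y * (Z + 1)) is a chain: each step needs the
    // previous result, so extra lanes don't make it any faster.
    float x = 1.0f, y = 2.0f, z = 3.0f;
    std::printf("%.1f\n", x + (y * (z + 1.0f))); // 9.0
    return 0;
}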

Larabee was an attempt to have the best of both worlds. I guess? I’ve only read a little about Larabee and I don’t really understand what the end product would have looked like.

03:45 “Things like Voxel or splatting architecture or one of these things that’s not triangle rasterization.”

“Rasterization” is fancy programmer talk for “put the pixels on the screen”. Triangle rasterization is how all rendering is done right now. Everything that winds up on the screen in a videogame is made of triangles. Even text is just a picture of text, slapped onto a pair of triangles arranged in a rectangle. Even indie retro artsy 2D sidescrolling games that use hand-drawn characters are created by taking that hand-drawn art, turning the art into texture maps, and projecting the texture maps onto triangles.

It was the dominant style of rendering back in the 90’s when graphics cards were rising to prominence, so cards were designed around it. From that point on we were basically locked into using triangles. If for some reason you didn’t want to use triangles for rendering, then your game wouldn’t be able to use the graphics card in the computer, which means it would run very slowly. You’d also have to write your rendering engine from start to finish, and you wouldn’t be able to use any of the libraries, toolkits, code snippets, or anything else out there. You’d have to build everything yourself. That’s a lot of work just to make your game incredibly slow.
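To make the “everything is triangles” point concrete, here’s roughly what the geometry for a single sprite or text quad looks like by the time it reaches the graphics card: four corners and six indices describing two triangles. The struct and names here are just for illustration, not from any particular engine or API.

#include <cstdint>

// A textured vertex: a position on screen plus a texture coordinate (UV)
// that says where to sample the hand-drawn art (or the picture of text).
struct Vertex {
    float x, y;   // screen position
    float u, v;   // texture coordinate
};

// One rectangle's worth of sprite: four corners...
static const Vertex quad[4] = {
    { -1.0f, -1.0f,  0.0f, 0.0f },  // bottom-left
    {  1.0f, -1.0f,  1.0f, 0.0f },  // bottom-right
    {  1.0f,  1.0f,  1.0f, 1.0f },  // top-right
    { -1.0f,  1.0f,  0.0f, 1.0f },  // top-left
};

// ...stitched into the two triangles the hardware actually draws.
static const uint16_t indices[6] = {
    0, 1, 2,   // first triangle
    0, 2, 3,   // second triangle
};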

Voxel here is somewhat confusing to some people because the word has gotten tangled up. There’s voxel data (storing the world as a 3D grid of little cubes) and there’s voxel rendering. Minecraft uses voxel data, but it’s rendered using good old-fashioned triangles. If you want to see voxel rendering, then the most recent example that I know about is the 1999 game Outcast. And since that game uses voxels instead of triangles, it’s not 3D accelerated. It doesn’t use your graphics card.
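The Minecraft-style split between voxel data and triangle rendering is easy to sketch. Assume the world is a little grid of solid / empty cells; at render time every exposed cube face gets turned into two triangles for the graphics card. This is a toy sketch with made-up names that only handles one of the six face directions, just to show the shape of the thing, not how any particular engine actually does it.

#include <vector>

// Voxel DATA: a 16x16x16 grid of solid/empty flags.
const int N = 16;
bool solid[N][N][N] = {};

bool filled(int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) return false;
    return solid[x][y][z];
}

struct Tri { float v[9]; };  // three (x,y,z) corners

// Triangle RENDERING: walk the grid and emit two triangles for every cube
// face that borders empty space. The result is an ordinary triangle mesh
// that the GPU is perfectly happy with.
std::vector<Tri> buildMesh() {
    std::vector<Tri> mesh;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z) {
                if (!filled(x, y, z)) continue;
                if (!filled(x, y, z + 1)) {
                    // The cube face pointing toward +Z: one quad = two triangles.
                    float x0 = (float)x, y0 = (float)y;
                    float x1 = x + 1.0f, y1 = y + 1.0f, zf = z + 1.0f;
                    mesh.push_back({{ x0, y0, zf,  x1, y0, zf,  x1, y1, zf }});
                    mesh.push_back({{ x0, y0, zf,  x1, y1, zf,  x0, y1, zf }});
                }
                // (The other five face directions would be handled the same way.)
            }
    return mesh;
}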

07:00 “Tons and tons of memory is the thing that makes game development a lot easier.”

This is extremely relevant following RAGE, where Carmack and the other id developers had a really, really hard time getting megatextures to fit in memory, particularly on the PS3.

This is really important on a console, where you’re likely reading from optical media. Maybe the player is on some gigantic level and suddenly it’s time for an elaborate cutscene. The cutscene can’t fit in memory at the same time as the level. Which means you need to drop the level data, even though you’re going to need it again the instant the cutscene ends. Since the level is stored on a DVD, it’s going to be very slow to load. Maybe you can put the level on the hard drive – assuming this console has one – but there are limits to how much you’re allowed to stick on the hard drive at one time. So you have to construct this multi-level caching system where you pull stuff off the DVD, put SOME of it on the hard drive, and then pull it off the hard drive into memory. And no matter what you do, the player ends up looking at the loading screen twice.

It’s like the old river crossing puzzle. You know, the one where it takes seven trips to get a wolf, a goat, and a cabbage across the river because the boat only holds you and one passenger? Imagine how much easier the puzzle is if the boat can hold all four of you. It stops being a puzzle and you can get on with your day.
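Here’s a hedged sketch of what that caching juggling act looks like in code. Every asset request has to consider where the data currently lives, and the slow paths are the ones the player experiences as a loading screen. All of the names and structure here are made up for illustration; a real engine spreads this bookkeeping across several systems and does it asynchronously.

#include <string>
#include <unordered_map>
#include <vector>

enum class Location { InMemory, OnHardDrive, OnDisc };

struct Asset { std::vector<unsigned char> bytes; };

// Where each asset currently lives, and what is already resident in RAM.
std::unordered_map<std::string, Location> residency;
std::unordered_map<std::string, Asset>    ramCache;

// Stand-ins for the slow parts (hypothetical helpers).
Asset loadFromHardDrive(const std::string&) { return {}; }  // slow
Asset loadFromDisc(const std::string&)      { return {}; }  // very slow
void  evictSomethingFromRam()               {}              // e.g. dump level data for a cutscene

// Fetch an asset, paying whatever cost its current location demands.
Asset& fetch(const std::string& name) {
    auto hit = ramCache.find(name);
    if (hit != ramCache.end()) return hit->second;      // best case: already in memory

    evictSomethingFromRam();                            // make room (and hope we don't need it again soon)
    Asset a = (residency[name] == Location::OnHardDrive)
                  ? loadFromHardDrive(name)             // second best: the hard drive cache
                  : loadFromDisc(name);                 // worst case: the optical drive
    residency[name] = Location::InMemory;
    return ramCache[name] = std::move(a);
}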

 


49 thoughts on “Quakecon Keynote 2013 Annotated: Part 1”

  1. yash says:

    You have probably seen this by now but just calling this out. They say the new Everquest is built using voxels to implement environment destruction on a huge scale:
    http://www.escapistmagazine.com/news/view/126545-EverQuest-Next-Redefines-Next-Gen-MMOs

    1. Tuck says:

      The world is built of voxels, but the rendering is still via polygons. See Miguel Cepero’s blog at http://procworld.blogspot.com for lots of details about his VoxelFarm engine that’s being used by SOE for EverQuest Next.

  2. Hieronymus says:

    “Larabee was an attempt to have the best of both worlds. I guess? I've only read a little about Larabee and I dodn't really understand what the end product would have looked like.”

    I think you meant to hit the ‘i’ key, there.

    1. silver Harloe says:

      “I think there's a lot here that's worht talking about”

      1. Hieronymus says:

        Hey, it’s an honets mostake! ;)

        1. BrgzSdklnX says:

          At least he spelled his name correctly, right guys?!

            1. Daemian Lucifer says:

              Oh god,now theyre donig it for typos.

    2. Scampi says:

      Which means you need to drop the level data, even through you're going to need it again the instant the cutscene ends

      ‘though’, not ‘through’

  3. Otters34 says:

    “…I think there's a lot here that's worht talking about.”

    This is seriously engrossing. Looking at the thought behind why so many things are the way they are is always fascinating to read.

    Thanks for doing this Mr. Young.

  4. Aldowyn says:

    I wonder how many people watch this at QuakeCon? I know it’s the keynote, but it does seem to be chock-full of programmer jargon oftentimes. I suppose there’s enough in there that a relative layman (or at least a gamer) could understand.

    1. postinternetsyndrome says:

      I don’t know what kind of people actually go to quakecon, but I have a feeling it’s not primarily cod kids.

      1. aldowyn says:

        there’s a big difference between ‘not cod kids’ and ‘people that know what rasterization is’. Although I haven’t watched it myself so maybe it’s not all that jargon-intensive. *shrug*

  5. Infinitron says:

    Maybe the player is on some gigantic level

    The problem is that he ISN’T in some gigantic level. Or if it’s gigantic, there’s barely anything going on in it. The consoles’ low memory has crippled game design for the past decade.

    What I’m afraid of is that even in the next generation, levels will continue to be cramped and sparse, because designers don’t know how to do anything else and players have gotten used to it.

    1. Daemian Lucifer says:

      In all fairness,not all the blame is on the memory of consoles.For example,an office in deus ex 1 was a desk,a computer screen,a plant in the corner,a chair,and a couple of drawers.Same office in human revolution is a computer with all accessories(mouse,keyboard,etc,etc),paper clips,stacks of paper,bunch of pens,bunch of drawers with stuff in them,etc,etc.Graphics these days is going for more detail instead of sheer volume.

      1. mewse says:

        In all fairness, the reason that there’s so much stuff on desks in Human Revolution and other modern games is that there exists the memory to store it and the GPU power to render it.

        Adding more memory and more GPU power just incentivises developers to make use of it. Which, of course, heavily drives up the cost of production, but then doesn’t increase the size of the market or the final profit.

        There are a *lot* of companies which have gone under (or been purchased by big studio-conglomerates like EA or Ubisoft) due to the rising cost of making full use of increasingly powerful hardware, when one tries to compete in the AAA space.

        All of which makes me suspect that more memory in this next console generation won’t make it easier to cope with “gosh, I don’t have space to load this cutscene, so I’ll have to unload some of the world first, then reload it later” — it’s in developers’ interests to completely fill the available memory regardless of how much of it there is. So it’s far more likely that developers will fill it up, still be in exactly the same situation as before, and now be paying more in order to generate all those extra art assets. Because that’s the way it’s always worked in the past.

        Or maybe this will be the time it works, that developers finally stop feeling a need to compete on graphics fidelity. Maybe. But I’m not holding my breath. Spent too long doing this stuff to really believe that it’s likely the industry would change in that way. :)

      2. Infinitron says:

        The clutter in DX:HR was mostly static and couldn’t be moved, though.

        Also, console game from 2011, PC game from 2000…yeah, they aren’t THAT far behind.

        1. Zukhramm says:

          How much memory does that PC game from 2000 need? Because if lack of memory cripples game design, that game should be terrible.

    2. ENC says:

      “The consoles' low memory has crippled game design for the past decade.”

      In a very narrow-minded view, I suppose.

      There’s also the fact that they have made games VERY widespread for over 20 years due to the fact that gaming systems actually became affordable for the average Joe, which allowed game budgets to skyrocket, and allows for games like Braid or Limbo to come about when so many people are into games now that their marketbase is viable.

      1. Aristabulus says:

        Both the Xbox 360 and the PS3 are operating on 512mb of memory. The original Xbox had 64mb. The PS2 had 36mb. As the benchmark for graphical fidelity keeps going up, a given generation’s physical RAM limits get you less and less as the gen goes on.

        By contrast, my nothing-special Pentium 3 desktop built in 2001 started with 512mb of RAM and 32mb for the GPU, and ended with 1.5gb of RAM and 256mb for the GPU.

        Carmack has been working with his face on the bare iron of the chips for 20+ years. I’m willing to take his word on those matters as sincere… you should also.

        1. Michael Pohoreski says:

          > The PS2 had 36mb

          Technically the PS2 had 40 MB _total_ memory:

          * 32MB main RAM (EE)
          * 4MB VRAM (GS)
          * 2 MB IOP (Original PlayStation CPU)
          * 2 MB Audio (SPU)
          Additionally 16KB VU0, 4KB VU1, 16KB scratchpad

          I shipped a few PS2 games so I know from first hand experience. Or if you don’t believe me:
          http://en.wikipedia.org/wiki/PlayStation_2_technical_specifications

      2. Shamus says:

        He said it crippled game DESIGN, not the industry. :)

        I think the real heart of the problem is that for the past couple of generations, the memory has been small for what the rest of the hardware could do. You’ve got a processor that can (say) push X polygons, but the memory isn’t quite big enough to hold that many polygons. The fact that you have so little memory AND optional hard drives means that every game needs layers of caching, which adds headaches and complications. Like I said above, you spend all your time trying to solve various multi-stage river crossing puzzles. He’s saying we have proportionally more memory now, which means we’ll spend less time on these backend technical problems.

        You’re also creating a false dichotomy where we have to choose between “cheap consoles” and “enough memory”. The last generation was something of a price spike from the previous one, and had some of the worst memory limitations. (We can blame a lot of the cost on Blu-ray, I suppose.) I hadn’t considered this before, but now I wonder if they’ve been putting their money into CPU power because that looks better in the “which console is the best?” type arguments. I dunno.

        1. Wedge says:

          It occurs to me now that Nintendo had something of a stroke of genius with the N64. About halfway through the N64’s life, they released an expansion pack that doubled the system’s memory. Lots of new games required this pack, and they used it to significantly improve graphics without compromising on other areas of game design. It was quite affordable on its own, and actually came free with a game (I think it was Donkey Kong 64?)

          Anyway, I think this is a thing that modern consoles could easily do that would help extend the life of the console and give devs more room to work with–and I’m sure devs would love it, since they’ve been bitching about the memory limits of consoles since the beginning of time.

  6. Shivoa says:

    The ‘what would a Larabee look like’ question can reasonably be extrapolated from what that team ended up releasing (the MIC called Xeon Phi). As you mention, CPUs have large caches, operation reordering, smart branch prediction, and so on to make them good at doing more than taking a load of independent operations and computing the answers. Larabee was x86 but was actually going down the GPU path, not so much mixing it up by trying to find a middle ground.

    Take a load of simple Pentium (or simpler) cores that make an Atom or recent ARM chip look like the CPU end of that line between compute and branching performance, and push as many of them as you can onto a big chip (and Intel are the fab king, so are a node ahead at dumping transistors into a given area of silicon). The design is all about making the most of technology today, so use the smart scheduling to get 4 threads working on each x86 core (so Hyper-Threading pumped up a notch) and make sure the use of SIMD (fast execution of one instruction on many data blocks – a vector in one tick) is a focus. When you can fit 64 of these simple x86 cores onto a single chip, and each core has 4 threads, and SIMD can boost what a chip can do with limited instruction decode cycles, then this is a GPU design – only Intel canned the project and sold it as a competitive HPC platform (while their other GPU team took the group up to ‘good enough’ performance with their iGPUs, which almost every person who buys an Intel CPU is forced to pay for). More details of Xeon Phi.

    Hopefully the combination of this not being my area of expertise and trying to make that description easy to digest hasn’t resulted in me saying anything stupid.

    1. lethal_guitar says:

      Exactly – the point is that it offers the parallelism of a GPU but is programmable like an x86 CPU.

      There are areas besides 3D rendering where GPUs vastly outperform CPUs, like matrix multiplication and other data-parallel tasks where you have a lot of data and need to perform a single operation on all of these data items. Now whenever you need to calculate a large data-parallel task, you might be able to increase performance by offloading it to your GPU (this approach is called GPGPU – general purpose GPU). Consequently, you’re treating your graphics card more like a co-processor – and this is what the Xeon Phi was designed for.

      When using GPUs, you need specialized programming languages and environments like CUDA or OpenCL. With the Xeon Phi on the other hand, you can write normal C code and then use annotations to have it run in parallel, similar to this:

      #pragma offload …
      #pragma parallel for …
      for (int i = 0; i < HUGE_NUMBER; ++i) {
          theOutput[i] = theInputA[i] * theInputB[i];
      }

      Without the pragmas, this is just a plain old C-loop which runs sequentially on your CPU. But when using the Xeon Phi, this will actually partition the input data into sections and compute the results in parallel.
      Doing this is also possible with a GPU, but much less intuitive.

  7. Karthik says:

    I sat through the entire three hour presentation (including Q&A), and I doubt I understood more than 10% of what he said.

    There was one part I was totally fascinated by, though: his talk of writing game engines using functional programming, and his experiments with Haskell and Lisp. I can’t wait to see what you make of it, Shamus.

    To anyone who hasn’t watched the whole thing–Carmack recently rewrote Wolfenstein in Haskell, which is a (mostly) pure functional language. As I understand it, functional programming avoids maintaining or changing the global state of a system, and works entirely by writing routines that return non-destructive results.

    This is somewhat antithetical to the idea of a game space/world, which is all about affecting state. (In a functionally written FPS, you would have to give the bullet to the NPC routine and say “Shoot yourself with this and return the resulting version of yourself to us”.)

    Carmack says this kind of programming makes writing games harder at first, but could make maintaining a million line code base much easier because most of it is not generating any (buggy) side effects.

    1. In fact, it’s not even generating any side effects.

      In programming, side effects are things a function does beyond computing its return value. To use your example, in conventional programming languages a “shoot yourself with this bullet and return the new version of you” function might also do some rendering or delete the bullet before it returns the new NPC.

      While losing buggy side effects is a plus, losing ALL side effects is a huge plus – it makes everything much clearer and simpler. Instead of functions doing many things and then finishing, each function can only do one thing. And that’s why it’s easier to maintain a code base like that: you can just look at a function and know exactly what it does.
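      For what it’s worth, the “hand the NPC the bullet and get a new NPC back” idea can be sketched even in plain C++ (Carmack’s experiments were in Haskell, but the shape is the same). The names here are made up for illustration: the pure version reads no global state, mutates nothing, and only returns a fresh value.

      struct Bullet { int damage; };

      struct Npc {
          int  health;
          bool alive;
      };

      // A "pure" function: nothing outside is read or written, nothing is
      // mutated. The only thing it does is compute a new Npc from its inputs.
      Npc shotBy(const Npc& npc, const Bullet& bullet) {
          Npc next    = npc;
          next.health = npc.health - bullet.damage;
          next.alive  = next.health > 0;
          return next;   // the caller decides what to do with this new version
      }

      // Versus the conventional, side-effecting style: mutate the NPC in place
      // (and maybe kick off sounds, particles, scoring, and so on while we're here).
      void shoot(Npc& npc, const Bullet& bullet) {
          npc.health -= bullet.damage;
          npc.alive   = npc.health > 0;
          // playSound(); spawnBloodDecal(); ...side effects live here
      }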

    2. Peter H. Coffin says:

      There was, WAY back in the day, a Macintosh game called Bolo that actually did that kind of “Hand the bullet to the NPC and say shoot yourself with this”. When you fired at an opposing player, the target’s machine would be the one to track whether the shot was good enough to hit, because THAT is what needed to know the outcome first, and shots COULD be dodged if one was both lucky and quick, then it reported back to the shooter (through the Astoundingly Slow Network available) the outcome. And this quirk was clearly documented because it occasionally looked like the target was cheating by moving into or out of the way of the shell at the last instant. But that’s the kind of stuff you had to pull to make a multiplayer shooter game in 1990…

  8. Brandon says:

    If I may have my moment of pedantry… While triangle rendering is certainly the most common accelerated mode of rendering, other modes were used as well. The Sega Saturn used quads instead of triangles, as did some of the early PowerVR 3D hardware.

    1. Kayle says:

      More pedantry: Nvidia’s first product, the NV1, rendered quadratic patches instead of polygons. It was used in a few graphics cards, most notably the Diamond Edge 3D.

      However, triangle rendering was pretty firmly embedded in workstation hardware 3D graphics since at least the late-80s, if not earlier (though much earlier the hardware would have been drawing lines rather than filled areas). I don’t know what the visual simulation people (i.e. airplane simulation) used for rendering, though I get the impression that scanline-based algorithms were used for quite some time, which make rendering convex polygons not much more difficult than triangles.

      1. Brandon says:

        I never understood why quads didn’t catch on. I know that’s one more point of math to process, but a single quad can mimic a triangle, while it takes two triangles to mimic a quad. I would think the extra math for one more point would have been offset in early 3D games by needing fewer polys for many scenes.

        1. WJS says:

          Triangles are inherently planar, which quads aren’t. Also, a degenerate quad (“acting as a tri”) really makes a mess of various parts of the maths. Triangles are just simpler.

  9. Jonathan says:

    C&C Tiberian Sun used voxel models for all non-infantry units. I believe Red Alert 2 was built on the same engine, which also used voxel models. For the time, they were much better looking than polygon-based units.

  10. The Rocketeer says:

    Quakecon?! Get back to work on Unrest, you bum!

    1. Paul Spooner says:

      I’m pretty sure Shamus has nothing to do with Unrest. You’re thinking of Rutskarn?

      1. The Rocketeer says:

        The above was a nod to the running gag that Rutskarn looks just like John Carmichael/is the same person.

        1. Bryan says:

          Surely you mean John Carmack? I thought Rutskarn was his kid.

          (…insert Airplane joke here…)

  11. Dumb question here.

    If game consoles are basically cheap p.c.’s then why are there only 3 console makers versus tens if not hundreds of laptop p.c. makers?

    1. theSeekerr says:

      1) Not all previous generations resembled cheap PC’s. Particularly the PS3, which was incredibly expensive to build at the time of its release, and was forcefully un-PC-like in its design.

      The original Xbox did resemble a PC, but viewed as a PC it was actually pretty powerful for the price.

      2) The hardware isn’t the hard part. The software, the developer ecosystem, that’s the hard part. See also: Ouya (and to a lesser extent, the Playstation Portable). Bootstrapping a new console to profitability is hard.

      1. I guess I worded this question horribly. I shouldn’t have written “cheap”, I should’ve written “weak”. But even still some consoles are on par with computers at the time of their launch.

        I also forgot to think of how consoles are usually a loss leader: sold for less than they cost to produce, with an extra 10 dollars or so tacked onto each physical game sold.

      2. Peter H. Coffin says:

        Yup. TREMENDOUSLY powerful for the time. To put it into context (and I’m going to handwave some really technical details to make it comprehensible to everyone), in 2007, Intel was making consumer parts for 32-bit machines with Core Solo and Core Duo processors. The 360 had a *3*-core 64-bit RISC processor in it, and the PS3 a single-core 64-bit RISC processor with what amounted to *six* accelerator cores hung off it. And that’s before any of them actually got to GPU hardware… They were crazy powerful for the time, and if they had been willing to open up the can of worms involved in memory expansion, we’d not be talking about next gen hardware for another five years from now.

    2. ENC says:

      Gaming consoles can’t have their costs kept down by small companies. It’d be like trying to open a supermarket against someone like Safeways or Walmart; you CANNOT beat them in price (in a console’s case that’s performance as well as price) so you beat them on service and convenience. Except you can’t with a console, because people want price and a games library, and to create your own console you need to build trust from developers who believe the console will have a wide audience before they’d be willing to develop for it.

      Look at the Ouya; it uses Android, and is guaranteed to have a wide marketbase, but is still a buggy mess as they can’t throw money at it to make it work.

      As for laptops, they’re made by companies like Dell or HP which have servers on the side so they can have brand awareness and reliability associated with them. For something like Alienware, people buy them at a premium because they’re “gaming” things and they look cool, but when you want a PS4 all you want it to do is play games (and possibly bluray), and the PS4 will always be better at playing games than an Ouya.

  12. JPH says:

    I missed QuakeCon again! UGH.

    Every year I remind myself that there’s gonna be a great big PC gaming convention happening not an hour away from where I live. And every year I manage to miss it somehow.

    1. Alexander The 1st says:

      Just tell people you’re the ninja on stage during the keynote.

      Instant profit.

  13. Primogenitor says:

    7 minutes and 1119 words. So for the full video that would be … 27,016 words. Hope your keyboard is feeling up to it!

  14. Fezball says:

    My favourite solution for the river crossing puzzle http://www.youtube.com/watch?v=f-ROdRgRRsY

  15. MaxDZ8 says:

    Oh, it’s that part of the year again!
    Anyway, I think the example about GPU parallel processing is not quite the best. Exploiting 3-coordinate processing might have been a thing in the past, but it’s not such a big win now – NV is fully scalar AFAIK, while AMD GCN retains some of the parallel processing.
    The big win is that the GPU’s equivalent of CPU “threads” easily number 100x what a CPU has.
    Not to mention they got an instruction set which currently makes sense: there was no dot- or cross-product instruction on CPUs last time I checked, nor parallel trig instructions, albeit there’s plenty of instructions to deal with video decoding and some other tasks which are apparently more important. AES is one of those examples – although I’d agree that one is quite important!
