Quakecon Keynote 2013 Annotated: Part 4

By Shamus Posted Sunday Aug 11, 2013

Filed under: Programming 39 comments


Link (YouTube)

As before: In the process of going through this I'm bound to commit minor omissions, errors, misunderstandings, grammatical errors, or war crimes.

Times are approximate.

41:00 “I don’t mind blocking for the 1.2 milliseconds it will take for this to come in from flash.”

Carmack is talking about the difficulty of loading resources while rendering. This is mostly a problem with multi-threading.

The idea is that your program has several independent threads. One is running the game itself. Another is pumping all the data to the sound card. Another is loading in geometry or textures. They all do their own thing and in an ideal world they wouldn’t interfere with each other.

But what happens when some asset isn’t ready just yet? Like, it’s time to draw a rusty crashed alien spacecraft. Maybe you’re still missing the doorknob for the spaceship. Maybe you’re missing the texture for the “I brake for Strogg” bumper sticker. Whatever. It’s time to draw the scene, there’s something you need, and you don’t have it just yet. Maybe you’re waiting 1.2 milliseconds for the asset to arrive from flash memory. Or maybe you’re waiting for something in the neighborhood of 12 milliseconds to grab it from the hard drive. Or maybe you’re looking at a terrifyingly long wait of 100ms (a tenth of a second) for it to arrive from (shudder) optical media like a DVD.

In this specific case, Carmack is probably just talking about the time it takes to move stuff in from flash memory. He’s saying he wants rendering to STOP until the asset is in place, since the game can’t proceed without this asset. (For the sake of argument let’s say this thing is critical.) Instead of the game doing a painstaking inventory and making sure everything is ready BEFORE it begins drawing, it’s way easier if you just start drawing and stop again if something isn’t ready yet.

Hardware vendors HATE this idea, since it makes their graphics hardware perform more slowly for things that aren’t their fault and are out of their control.
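
To make that concrete, here’s a rough sketch of the “just block until it shows up” approach. All the names are made up; this is the shape of the idea, not id’s actual code:

    #include <future>
    #include <string>

    // Hypothetical asset type and helpers -- stand-ins, not real engine code.
    struct Texture { /* pixel data, GPU handle, ... */ };

    Texture LoadFromFlash(const std::string& name);  // runs on a streaming thread
    void    DrawSpaceship(const Texture& sticker);   // issues the actual draw calls

    void RenderFrame()
    {
        // Kick off the fetch; the streaming system chews on it in the background.
        std::future<Texture> pending =
            std::async(std::launch::async, LoadFromFlash, std::string("bumper_sticker.tex"));

        // ... draw everything that IS already resident ...

        // Now we need the sticker. Instead of auditing every asset before the
        // frame began, we just stall here until it arrives: ~1.2ms from flash,
        // ~12ms from a hard drive, ~100ms from a DVD.
        Texture sticker = pending.get();
        DrawSpaceship(sticker);
    }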

56:00 “CRT’s back in the old days had essentially no persistence.”

Some of you young folks might not remember the early 90’s laptops, which had ghastly persistence. Persistence is where it takes time for a pixel to stop shining once it’s no longer wanted. If you’re writing white text on a black background and you backspace to remove a letter D, how long does the after-image of the D linger onscreen?

On a CRT (Cathode Ray Tube, those big heavy monitors that are quickly going extinct) the pixels would go dark instantly. On the LCD of a laptop in the early 90’s, it would linger for a long time. I don’t actually have the numbers, but it felt like a second or so. This was really strange when you would wave the mouse around the screen and it would leave this trail of after-images behind it.

LCD’s have gotten much, much better, but they’re still not quite as crisply on/off with their pixels as those old CRT’s were. Sure, they’re better at just about everything else (weight, space, power usage, flat viewing surface, etc.) but we did take on this persistence problem. You don’t really notice it very much during normal usage, but when you’re wearing the screen on your face I gather it becomes important.

59:00 “We update 60 times a second, like clockwork […] and that’s why we have vertical tearing.”

He’s talking about this visual problem:

Vertical tearing. (Simulated.)

1:13:00 “Robustness first, then predictability, then performance.”

This is kind of a turning point for games. The reverse has been true for over two decades. An entire generation of programmers has risen in this world where you’re fighting for every CPU cycle. They would bend over backwards coming up with tricks to get the most graphical bang for their buck. The code wasn’t pretty, but it was fast and that’s what counted. And if the code was ugly, who cared? You were going to throw most of it away in four years when technology changed and you needed a completely different set of hacks to make things work.

And now we’re finally reaching that point where people are saying, “We’ve got plenty of power and we’ll probably still be using this code eight years from now. Let’s make sure the code is reusable, clean, easy to read, and properly annotated.”

It will be very interesting to see this new mindset percolate throughout the industry and see how it changes our approach to making games.

1:15:00 “OpenGL won.”

I actually have no idea where the battle between the Big Two is at any given moment. You can render with OpenGL, or you can render with DirectX. Each of these libraries gives the intrepid programmer a way to talk to the graphics hardware and make it draw stuff. Which do you use?

GL was first by a long way, and was designed by and for people who programmed in vanilla C. DirectX came along years later and was more of a C++ creature. There was a tug-of-war for years, but by the middle of the last decade I assumed that DX was winning mindshare based on the number of big-name engines that used it.

Back in the 90’s I dabbled in both and I hated the way DirectX worked. There were more steps to do simple things and there was a lot more typing involved. I remember DX for its obnoxiously long variable types and its unforgivably horrendous documentation. I’m sure the latter got better, but I already knew my way around GL and I never saw any reason to make the switch. Today, almost 15 years later, I’m still using OpenGL.
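
For the curious, here’s the sort of thing I mean. In the old fixed-function days, putting a colored triangle on screen in OpenGL looked about like this (a minimal sketch, assuming a window and GL context already exist):

    #include <GL/gl.h>

    // Old immediate-mode OpenGL (1.x). No device objects, no COM interfaces,
    // no buffer setup -- you just call functions and a triangle appears.
    void DrawTriangle()
    {
        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    }

Getting the same triangle out of the DirectX of that era meant wading through device creation and vertex format declarations first, which is most of what I mean by “more steps and a lot more typing.”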

1:18:00 The functional programming stuff.

This is a big topic and I know people are kind of eager to hear my thoughts on it. The thing is, it might take me a while to do it justice. I have another programming series planned after I’m done with this Quakecon stuff. I’m writing a kinda-game thing and I might end up talking about this stuff in the process. It doesn’t make sense for me to get ahead of that series by spoiling what I’ll have to say, while also not giving the subject the attention it deserves.

 



39 thoughts on “Quakecon Keynote 2013 Annotated: Part 4”

  1. Zukhramm says:

    Aaw, and I was specifically waiting for the functional part. Given the problems I usually face in the code I write, more functional-type ideas seem like a great solution, and I try to do it even without being in a particularly functional language.

    1. silver Harloe says:

      By writing that post, though, you pretty much remove yourself from the audience Shamus is usually trying to address with these annotations. You already know what functional programming is and why it’s a useful pattern of thinking. The good part of his saying he might address it separately later is that he might go into the kind of depth you and I would appreciate more.

    2. Alan says:

      If you’re a programmer type, I recommend Carmack’s “Functional Programming in C++”. Very accessible, assuming you’re already a programmer.

      1. Brandon says:

        This is really interesting to me, because this is essentially how my first year computer science professor taught me to write C++. Whenever he would find places in my code where a function was having side-effects, he would circle it and write that whatever it was should be handled somewhere else, or removed entirely if it could be handled more elegantly or wasn’t even necessary.

        It makes so much sense when you think about it, and it makes life so much easier if you write code with it.
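
        To make up a trivial example of the sort of thing he’d circle (not from the actual class, obviously):

            #include <vector>

            // Side-effecting version: quietly reaches out and changes state
            // somewhere else. This is the kind of thing that got circled.
            std::vector<int> g_scores;

            void AddBonusBad(int bonus)
            {
                for (int& s : g_scores)   // mutates a global
                    s += bonus;
            }

            // The "handle it somewhere else" version: a pure function. Same
            // inputs always give the same outputs, and nothing outside is touched.
            std::vector<int> AddBonus(std::vector<int> scores, int bonus)
            {
                for (int& s : scores)     // works on its own copy and returns it
                    s += bonus;
                return scores;
            }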

        Thanks for the link to that article, Alan.

        1. nerdpride says:

          Interesting.

          Code projects are one of the reasons I like reading this blog and I’m glad to be connected to this other Carmack blog now too.

          +1 comment for doing more programming stuff.

        2. Volfram says:

          When I was learning, this concept was extended to everything and that’s how I understand “Object-oriented” programming. Every function, every class, should be a black box, and you display a set of inputs and a set of outputs, and the rest of the program should only even care if the black box exists if it has to interact with it in any way, and it shouldn’t care at all how the black box works.

          Oddly enough, I learned most of this concept from my HDL classes, because hardware logic modules function almost exactly like a multi-threaded object-oriented program (each wire and gate is an individual thread, each module is a black box).

          So learning hardware helps me write better software.

          1. Zukhramm says:

            I guess, when talking about functions, black box means the same thing as pure. The problem comes when you put the black box around the object, but without pure functions, and then have black boxes containing other black boxes, and the same black box touched from multiple different places in your code. That can easily become a mess.

      2. Canthros says:

        Thanks for linking that! It’s the closest thing to a generalized, practical argument for functional style that I’ve ever seen. It also mirrors some feelings of my own, from my own experience (which is in standard, every-day business programming and is much more boring), which is nice: I feel less like a wild-eyed crazy man, now.

        I may have to pass this around, some.

      3. Fine and all, but did anybody write “Dysfunctional programming in C++”? Arguably more important, because I’m sure lots of people are doing that without a proper guidebook.

        1. Canthros says:

          It’s been my experience that dysfunctional programming muddles along fine with nary a guidebook or formalization to its name.

        2. Brandon says:

          Someone has written about something VERY similar.

          http://www.shamusyoung.com/twentysidedtale/?p=9557

      4. Cuthalion says:

        Read through that. Thanks for posting! It’s timely, as I just ran into similar considerations in my own coding with a method that was tricky to extricate from everything. Plus, when I should allow methods to modify external stuff has been an ongoing consideration as I try to balance productivity now with prevention of future misery.

    3. kdansky says:

      I will suggest taking a look at Scala. It’s easy to set up (IntelliJ recommended), and its bible (“Programming in Scala” by Odersky) is a great read. I’m not actively writing much code in Scala, and my work is in C++. But I can really feel and see the difference in style it made for me: I rely as little as I can on side-effects, I try to always work on copies, or return copies, and so on. I can still optimize later, following Herb Sutter’s wise words:

      “Don’t optimize until you have actual proof that you need to.”

      I was never good at functional programming at Uni, but I really see no way around it in the next decade, because it does two things really well that we struggle with right now: It is great at dealing with concurrency, and it is much easier to reason over, which in turn results in lower bug count, and easier maintenance. So I’m making an effort right now to slowly but surely add these skills to my repertoire.

  2. Aldowyn says:

    ‘kinda game-thing’ like most of your programming projects, then? Sounds like you’ve decided you’re actually going to write about it, cool! Looking forward to it ;)

  3. Amarsir says:

    “Robustness first, then predictability, then performance.”

    If it’s the same people doing it, that could be a tough switch for a lot of bad habits. I self-taught programming in the 80s, supplemented by external, high school, and college classes in the 90s. And at some point it was ingrained on me that “text parsing is computationally heavy and should be avoided if at all possible.”

    Now that may have been outdated by the time I learned it, and it certainly is today. But it stuck with me so I still inherently shy away from it. And just a few years ago if asked whether Twitter was possible, I’d have said “all that text parsing? No way!”

    Bad habits…

  4. Bryan says:

    > In this specific case, Carmack is probably just talking about the time it takes to move stuff around in flash memory.

    I understood it not so much in that way, but more as the time it takes to move stuff from flash memory into the graphics card memory. (Or the integrated memory, given what he said earlier about merging the two making it so much easier to build stuff.) He talks for a while about mmap()ing stuff like texture (and probably vertex) data directly into the video card, and then just letting the hardware and OS kernel handle the actual transfers, and just dealing with the extra latency this can cause.

    Oh, right, mmap().

    For people who have never programmed POSIX, mmap() is the OS interface function that takes a bunch of parameters, including a file descriptor (the handle you get back from the OS when you open a file), an offset into the file, a byte count, some protection flags, and a couple other things I’m probably forgetting, and sets up your process so that what looks like a memory access ends up going to the file instead. It returns the virtual address of the start of the mapping.
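
    A stripped-down sketch of a call, for the curious (made-up file name, error handling omitted):

        #include <cstddef>      // size_t
        #include <fcntl.h>      // open
        #include <sys/mman.h>   // mmap
        #include <sys/stat.h>   // fstat
        #include <unistd.h>     // close

        // Map an entire (hypothetical) asset file into our address space, read-only.
        const unsigned char* MapWholeFile(const char* path, size_t* size_out)
        {
            int fd = open(path, O_RDONLY);

            struct stat st;
            fstat(fd, &st);              // how big is the file?
            *size_out = st.st_size;

            void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);                   // the mapping keeps the file alive without the fd

            // Reading through the returned pointer now looks like an ordinary memory
            // access; the kernel pages bytes in from disk/flash as they get touched.
            return static_cast<const unsigned char*>(data);
        }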

    With 64-bit pointers, you can mmap() *tons* of data into your address space, and assuming the data is readonly (which for game texture or geometry data it would be), the OS kernel will take care of copying data from disk (or flash) into memory as you touch it, and will also take care of dropping data from memory as you start to get close to using too much. It does introduce variable latency for memory access (especially with spinning disks), but it makes the programmer’s life *FAR* easier. And the variable latency can usually be worked around, as well: there’s a separate mlock() call that can force certain ranges of virtual memory to be present (along with munlock() when you’re done), or if there is never any memory pressure, the program can simply touch the data once to get it all loaded up, and then not worry.

    This (along with writing dirty (modified) pages out to the swap space and removing them from memory when there’s pressure) is the reason almost nobody cares about how much physical memory is actually present anymore. They all just assume a ~4G address space for 32-bit mode, and a ~16E address space for 64-bit mode. It’s a huge freedom when writing code.

    Extending this to the video rasterization hardware would *probably* have very similar effects: variable latency of memory access from the GPU, because sometimes it would have to page the data in from raw storage, but making life *far* easier for game designers. On x86, for CPUs, this works by marking the pages not-present (in the hardware), which causes the program to be paused and the OS to get notified when the CPU tries to access them. When that happens, the OS loads the data up (from the file backing), marks the page present, and resumes the CPU instruction. Some similar idea may work for GPUs, but only with some kind of memory-management unit to handle the page presence flags, and some way to resume GPU instructions.

    Be very interesting to see what happens there.

  5. Andy_Panthro says:

    Can I just say, as someone with zero programming knowledge outside of a few lines of BASIC, I’ve really enjoyed these annotations.

    I haven’t actually watched the videos either, but found your explanations for various things illuminating.

    Thanks!

    1. Volfram says:

      It’s a lot of fun. I found the video this year was a bit on the quiet side, so I put together an audio file and loudened it up a bit. I’ve been listening to it periodically, say as I walk to and from the store every week. Seems to keep me pretty well in line with Shamus’s annotations.

      I could potentially send you the audio file, if you’re interested.

  6. Ed Blair says:

    Enjoyed the series! Thanks for writing it up for us!

  7. Peter H. Coffin says:

    “CRT's back in the old days had essentially no persistence.”

    Sure they did. They had a persistence that a great deal of work went into: getting the phosphors tuned to the electron gun so they’d fade sharply after whatever the optimum refresh rate was for that particular tube. (That’s the whole business of setting the graphics property to 30, 60, 75, whatever Hz, that still lives on in the graphics properties.) And that only came about once we actually HAD graphics games to deal with. Back in the age of dinosaurs, when text ruled data processing, the level of persistence was stunningly high — even turning the screen off left a readable image for a good second or two.

    1. Volfram says:

      Yeah, I remember playing the 3rd Myst game and noticing that my monitor almost had motion blur built in, because if I traced a bright light around the screen it would leave a trail.

      By the time I had figured out what this “persistence” he was talking about was, I’d concluded that Valve was probably right(it makes sense) but I would have also assumed that “fastest refresh we can get and strap it to your eyes” would be sufficient prior to the story.

  8. X2-Eliah says:

    Hmm. “OpenGL won”… wait, how? Aren’t ~99% of all modern games (let’s say last 8 years or so) written for DirectX? Even if he is referring to the emergent indie games in OpenGL, that’s still nowhere near enough to claim that ogl is winning the competition.

    Also.. Just consider this – what do graphics card makers put on their boxes when releasing a new product? “DX 11.1 ready” or “OpenGL somethingsomething ready”? (Hint: it’s DX). That alone should show which one is winning…

    (Mind – not saying that DX is better. I don’t have enough knowledgebase to make a proper judgement about bestness).

    1. Atarlost says:

      OpenGL is open and cross-platform. It can be made to work on anything. DirectX is Microsoft. It may dominate Windows and no doubt it’s what the Xbox uses, but I would guess that other consoles would use OpenGL. As an open standard they can implement it on nonstandard architectures and don’t need to pay licensing to Microsoft.

      The major mobile OSs have a POSIX lineage and are probably OpenGL only and they’re getting to be a nontrivial portion of the industry these days.

    2. Shivoa says:

      What are all games coded for the mobiles (iOS and Android), OS X & Linux on PC, and much of the console world (OK, PlayStation has its own API for graphics so you aren’t actually going to a common OpenGL API but you can if you want) using? Hint: DX is only on Windows and Xbox. Also, quite a lot of dev tools on even Windows are coded to an OpenGL buffer, not a D3D one (but this is far from a comment based on an exhaustive look at all different 3D apps on PC). So when you look at it that way, OpenGL won. Or at least it didn’t lose, despite MS trying to kill it.

      Yes, D3D is what Windows games are coded to but if you support other platforms then you have an abstraction layer and you make sure it can talk to OpenGL. And Windows hardware (as most consumer GPUs are primarily for Windows gaming) is specified to the feature level (with some games now having to state both feature level for hardware and API level for the OS – this game needs DX9c (feature level 9_3) hardware but you need Vista as it needs a DX10+ API being a common requirement) but you will also see the OGL version on most spec lists. In fact, Intel started out making their iGPUs only really target D3D with some stupid OGL version support but over the years have increased their support and software work to actually make OpenGL useful on their APUs. AMD are a bit behind (who knows, they expose the features via the extension mechanism but I have no idea why their drivers are GL4.2) but nVidia are riding OpenGL with driver releases on the day a new revision is announced (and now they’ve moved to caring about supporting mobile GPUs with high end features they obviously talk about OGL there too).

      1. Volfram says:

        I have also found OpenGL to run an average of 10% faster in all comparisons I’ve been able to do between the two. Between that and the fact that I want to be developing for stuff that will possibly run ANYWHERE, there’s really only one option.

        1. Bryan says:

          As Valve found out as well. L4D2 ran (using their benchmarks) at 270.6fps on D3D, and 303.4fps on OpenGL, both in Win7. Run the OpenGL version on Linux, and 303.4 jumps a little more, to 315.

          But 30fps out of 270 is, indeed, around 10%.

    3. Eruanno says:

      Well, DirectX is dominant on games when it comes to three platforms: Microsoft Windows, Xbox and Windows Phone.

      That being said, there are plenty of other platforms that are using OpenGL, even more so with mobile becoming a thing now. This includes Nintendo Wii/Wii U, 3DS, iPhone/iPad, Android, Linux and to some degree Playstation 3/4/Vita (although it is some sort of hodgepodge version on PS that is and isn’t OpenGL, I’m not entirely sure how that works).

      Also all 3D software and applications made to model and create game assets almost exclusively run using OpenGL rather than DirectX (Photoshop, Maya, 3DSMax and a whole bunch of others come to mind).

      So yes, DirectX is being used a lot on Microsoft platforms for games, but OpenGL is still used a whole lot when you look at the overall big picture and start including other software. As for which one is better… I honestly have no idea.

      1. Shivoa says:

        There is OpenGL (an ES version specifically, with extensions for the hardware capabilities and some other Sony tweaks) on the PlayStation platforms but you don’t use it a lot is what I’ve generally heard. The keyword to find more details about it is PSGL. It doesn’t use GLSL but nVidia’s Cg as the high level shader language and it has been tweaked to extend ES 1.0 to where it needed to be for the hardware to make sense (with a load of 2.0 features because obviously this needs more than a fixed pipeline to access the non-unified but programmable shaders). Which was kinda what I was waving my hands at above, Sony are leaning on OpenGL but I wouldn’t really say the PS3 is an ideal OpenGL box if you wanted to divvy up platform support.

        Most games (as far as I know) mainly code to the lower level Sony API (keyword: LibGCM or just GCM) and if you’ve got a bit of coding background then I can’t recommend these Newcastle Uni worksheets enough. Just the absolute basics of how to get your code working on a dev PS3 in a few short workshops.

      2. Bryan says:

        There’s also WebGL, in browsers. (Though that’s based off ES, so it doesn’t have the fixed function option; shaders are required.) But yes.

      3. Michael Pohoreski says:

        > This includes Nintendo Wii

        I shipped two games on the Wii and _implemented_ a subset of OpenGL on it. It does NOT natively use OpenGL.

        The graphics library is called: GX

        It is heavily _inspired_ from OpenGL.

        Shivoa’s comments about almost every PS3 game using LibGCM is correct.

  9. Phantos says:

    That looks more like it’d be called horizontal tearing to me. But then I’m dumb, so…

    1. X2-Eliah says:

      Yeah, it would seem so, but Shamus’s pic is basically accurate (if exaggerated). I guess the logic behind the statement is not “there is a vertical tear line somewhere”, but rather “there is a tear line intersecting the vertical”.
      Another way of looking at this is as follows – if you look at that image from the side (along horizontal lines), the tear is only 2 pixels wide (well, 0 pixels, but 2px width makes it apparent). That’s a tiny tear. If you look at it from the top/bottom (i.e. vertically), the tear is as wide as the given pixel-space itself. That is a massive tear. Since this tear influences the vertical a lot more, it’s called vertical tearing.

    2. kdansky says:

      If your game runs at 30 Hz, or at a flickery 60 Hz (but with VSync off and tear lines), it will look so much worse than at a fluid 60 or even 120 Hz, even at lower resolution, details or effects.

      To test that out: Load your favourite current game up, and set AA to the highest setting (x8, usually) you can find, switch Vsync off and play a bit, and then pull it all the way down to 2x, but add Vsync and compare. Note: This won't work for Skyrim, because its Vsync implementation results in catastrophic mouse lag.

      Dark Souls is a great example to demonstrate how ugly tearing can be, because you can either have a really sluggish 30 FPS (fixed), or 60 FPS (with unbelievable tear lines) when you add DSFix. It’s at least as bad as Shamus’ example.

      1. Volfram says:

        For all its beauty, the first Uncharted game is, too, if you’re using Component or HDMI. Composite doesn’t output a high-enough resolution image for tearing to show up.

    3. Volfram says:

      After reading X2Eliah’s post, I have concluded that it’s called both vertical and horizontal tearing, and “tearing” is sufficient. You don’t get tearing on the other axis due to the way pixels and the underlying memory map are updated, anyway.

    4. Decius says:

      Technical explanation: Video cards output a stream of pixels to the monitor, starting from the top left and going left to right one row at a time. At any given point, the video card has one picture that it is sending to the monitor, and it transmits the color of the X+1th pixel after the Xth pixel.

      The video card is also updating the picture that it sends to the monitor rapidly. This update happens all at once- the prior picture is dropped only when the new one is completely finished.

      When a new picture comes up while the monitor is receiving information about a pixel halfway down the screen, the video card can start sending the new picture while the last one is still halfway drawn. This results in the top portion of a graphical frame being from a different picture than the bottom portion; if during that period of time the camera is panning to the right, the top portion will appear displaced to the right by the amount that the camera panned during that one update.

      A mode called vsync prevents the problem from being expressed by only allowing a new picture to be finalized when a frame has been completely drawn to the monitor. There is a tradeoff: If your video card can consistently calculate 58 graphical frames per second, and your monitor updates at 60 frames per second, then without vsync activated you will have tearing and 58 different graphical frames per second; if you activate vsync, then you have only 30 frames per second, because it takes longer than one monitor frame to draw the next graphical frame, and the one after that can’t even start until the next one can be started; each picture lasts for two full monitor frames.
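
      In rough numbers (a little sketch, assuming the card always takes the same 17.2 ms per frame):

          #include <cmath>

          // 58 fps card, 60 Hz monitor, as in the example above.
          const double refresh_ms = 1000.0 / 60.0;  // 16.7 ms between monitor refreshes
          const double render_ms  = 1000.0 / 58.0;  // 17.2 ms to finish one graphical frame

          // With vsync, a frame that misses a refresh waits for the next one, so each
          // picture is held for ceil(17.2 / 16.7) = 2 refreshes: 60 / 2 = 30 fps shown.
          const int    refreshes_per_picture = static_cast<int>(std::ceil(render_ms / refresh_ms));
          const double vsync_fps             = 60.0 / refreshes_per_picture;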

      But I ramble.

  10. Cuthalion says:

    This seems marginally on topic… What do you (Shamus or really anyone) think about Carmack’s joining Oculus VR as CTO?

    1. Volfram says:

      I think it bodes good things for the Oculus Rift. Carmack has already done a good deal of research into the field of head-mounted displays, and the man has a very deep understanding of both good coding and human perceptions of the world. I look forward to what comes out of it.

      He was practically a part of the team anyway when it was announced that Doom 3 BFG would be the first Oculus Rift-ready game.

  11. Michael Pohoreski says:

    1. LCD's have gotten much, much better, but they're still not quite as crisply on/off with their pixels as those old CRT's were.

    Actually LCDs are _finally_ just as good as CRTs. See nVidia’s LightBoost plus a 144 Hz monitor such as the Asus VG248QE or Asus VG278H.

    Asus VG278H High Speed LightBoost Video
    http://www.youtube.com/watch?v=hD5gjAs1A2s

    The problem with LCDs is a) their viewing angle is garbage, and b) their PQ (picture quality) is garbage due to (a).

    2. 59:00 “We update 60 times a second, like clockwork […] and that's why we have vertical tearing.”
    That’s a total COP OUT.
    http://www.blurbusters.com/faq/60vs120vslb/

    Myself and others can tell EASILY the difference between 30 fps and 60 fps.
    http://www.testufo.com/#test=framerates

    *Modern* LCDs now support 100+ Hz. For computer-generated images you really NEED ~100 Hz to minimize temporal artifacts.

    Related: Back in 1995 racing games were running their physics simulations at 100 Hz. Today they run them at 300 Hz for improved accuracy. Having a dog-slow 30 fps throws all the hard-work in the gutter.

    3. 1:15:00 “OpenGL won.”
    Carmack’s comment is WRT to this: EVERY embedded device / smartphone (iPhone, Android) uses OpenGL ES. DirectX is nowhere to be found.
