John Carmack 2011 Keynote Annotated:
Part 2 of 3

By Shamus Posted Tuesday Aug 9, 2011

Filed under: Programming 65 comments

Here is part 2 of my commentary on John Carmack’s Quakecon 2011 keynote. As before, the entire presentation is first, followed by my comments, with links to timestamps.

Link (YouTube)


Carmack talks about the PS3 and the way its memory space is divided up. This has always been a hot-button topic when I’ve tackled it, but it’s been a few years since we had that conversation. Perhaps heads have cooled and we can try again…

This is a big part of why I was so frustrated with the PS3 hardware after release. Sony said things about how fast their processors were and about how much memory the thing had, and all of that was technically true while being irritatingly beside the point from an engineering standpoint. There’s a lot of memory there, but it’s Balkanized into these chunks with fixed uses. There are a lot of processors, but they all have fixed uses as well. So you end up with situations like this one, where the PS3 actually has less useful memory than the Xbox 360.

Sony got away with this because the explanation for why their machine wasn’t better was highly technical. Gabe Newell famously criticized the machine. Sony fans, most of whom are not programmers, couldn’t really understand the inherent problems in developing for highly unorthodox hardware, and wrote him off as a PS3 hater. I got the same treatment when I made fun of the thing. (Only making fun of Halo caused more fury and personal attacks.)


Imagine you’re running a warehouse filled with consumer goods, and it’s your job to move stuff around. Let’s say it’s the storage area for a big-box store. You need to receive new goods from the loading area, store them on shelves, and move them out to the other side when they’re ready to be put out for sale. In the Xbox warehouse, you have just one forklift, which can handle 2,000 lbs. In the PS3 warehouse, you have a forklift for moving consumer electronics, another forklift for moving kitchen appliances, another for furniture, another for clothing, another for toys, and another for “everything else”. All of these can handle 1,000 lbs. The people who designed the PS3 warehouse will tell you that you have six fork trucks that can move 1,000 lbs., which means you can move stuff three times as fast as those losers in the Xbox warehouse.

But the truth is that it’s really hard to keep all of those trucks moving without them getting in each other’s way. You only need to move toys once a week and furniture every other day, so those fork trucks are rarely used. The restrictions on what each truck can carry impose restrictions on how you can lay out your goods. For example, popular things like consumer electronics and kitchen appliances can’t be stored on the same aisle or the trucks would block each other, so one of them has to be moved to a less-optimal location. And finally, the trucks themselves take up a lot of floor space, leaving less room for moving around and storing goods.

In the end, the speed gains from having six forklifts are nearly negated by the various limitations. Worse yet, the extra expense is incredible, the job is a lot more complicated, and there are some things you simply can’t move because they’re too heavy for your trucks. The worst part is, everyone keeps telling you that you should be moving things three times as fast. When you try to explain that it’s not that simple, they tell you you suck. “Do you know how expensive this hardware is? If you can’t make it go faster then you’re the one with the problem.”

From a software engineering standpoint, it’s going to take a lot of work to put any of that extra PS3 power to use. That extra work will only benefit your PS3 version, and does nothing for the PC and Xbox 360 versions. If you don’t do that extra work, your game won’t be using the machine to its fullest and all that expensive processing power the user bought will be going to waste.

I still believe, as I asserted a couple of years ago, that the PS3 was engineered in such a way as to choke off the competing platforms by building a developmental wall around Sony to make porting difficult. This would have increased the number of de facto exclusives. However, with them trailing in market share, the wall is working the other way and keeping developers from bothering to port to this quirky, unconventional beast. Ironic justice, I suppose. (Assuming I’m right. Note that this is all conjecture on my part. I think it explains Sony’s behavior better than, “they were just dumb”, but we’ll never know what really happened inside the company unless someone writes a tell-all book.)


Blu-ray drives are slower (latency) than DVDs? I did not know this. That makes the earlier problems even more pronounced. If you decided to alleviate the congestion problems in the PS3 warehouse by keeping fewer goods on hand and simply ordering smaller batches more often, now the ordering system is set up so that there’s a longer wait between the time you order more televisions and the point where they show up in receiving. And now I’ve probably stretched this warehouse metaphor too far.

To be fair, I don’t know how big the delta is between DVD and BR read speeds. Note that this is about latency, not throughput. BR can probably deliver more total data per second, but it takes longer between the time you request a block of data and the time when data actually arrives in memory. It’s another case of the PS3 hardware limitations exacerbating each other.


“Four levels of locality” – this is indeed a complicated system. At first the data is sitting on your optical media. (DVD or BR) Then it gets pulled in, which takes a long time as computers reckon things. To spare you that wait the next time the data is needed, this data is saved in a temporary file on the hard drive. (I can’t believe this game will actually run on an Xbox with no hard drive. Amazing.) The data is also held in general memory. And finally, the data is put into graphics memory, where the texture is available for rendering.
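To picture how those four levels interact, here’s a toy C sketch. Everything in it is invented for illustration (no real engine is written this way): each request checks the levels from fastest to slowest, and copies the data upward as it’s found.

```c
#include <stdbool.h>

/* The four levels of locality, fastest to slowest. A toy model of the
   idea, not how any real engine does its bookkeeping. */
enum level { GPU_MEMORY, MAIN_MEMORY, HDD_CACHE, OPTICAL_DISC, NUM_LEVELS };

struct texture {
    bool present[NUM_LEVELS];   /* where a copy of the data currently lives */
};

/* Find the fastest level that already holds the texture, then promote a
   copy upward one level at a time until it reaches graphics memory.
   Returns the level it was found at. */
enum level fetch(struct texture *t)
{
    int found = OPTICAL_DISC;   /* everything exists on the disc */
    for (int l = GPU_MEMORY; l < NUM_LEVELS; l++) {
        if (t->present[l]) { found = l; break; }
    }
    for (int l = found; l > GPU_MEMORY; l--)
        t->present[l - 1] = true;   /* copy up: disc -> HDD -> RAM -> VRAM */
    return (enum level)found;
}
```

The first fetch is agonizingly slow (optical seek), but it leaves copies at every level, so the next request for the same data is nearly free — which is the whole point of that temporary file on the hard drive.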


When he says “Tex sub image two dee”, he’s talking about glTexSubImage2D (), which is an OpenGL function. This function takes a block of memory and places it into a texture in memory, or a portion of a texture. Going back to my example image:


You’d use this to replace the contents of one of those little squares. I use this heavily in Project Frontier, when generating the terrain textures. I’m sort of terrified to think about what might be going on under the hood in my program. Some of my textures are as large as 2048×2048, and if glTexSubImage2D () copies the entire thing in memory when I update a little 128×128 patch of it, then that’s a really painful performance hit. Best of all, it sounds like it only happens on some systems, or with some drivers. Wonderful.
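To make the worry concrete, here’s a rough C sketch of the kind of update I’m describing. The atlas and cell sizes match my Project Frontier numbers, but the helper function is my own invention, and the GL call itself is shown in a comment since it needs a live OpenGL context.

```c
/* Updating one 128x128 cell of a 2048x2048 texture atlas, the way
   Project Frontier does when regenerating terrain textures. */

#define ATLAS_SIZE 2048
#define CELL_SIZE  128

/* Pixel offset of the top-left corner of cell (cx, cy) in the atlas. */
static void cell_origin(int cx, int cy, int *xoff, int *yoff)
{
    *xoff = cx * CELL_SIZE;
    *yoff = cy * CELL_SIZE;
}

/* With a GL context bound, the update itself would be:
 *
 *   glTexSubImage2D(GL_TEXTURE_2D, 0, xoff, yoff,
 *                   CELL_SIZE, CELL_SIZE,
 *                   GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 *
 * The worry: the patch is 128*128*4 = 64KB of data, but if the driver
 * keeps a system-memory shadow of the texture, it may touch all
 * 2048*2048*4 = 16MB of it to apply this one small update. */
```

The punchline is in that last comment: if the driver shadows the whole texture, the copy is roughly 250 times larger than the data that actually changed.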

It says a lot that even after these years of working with DirectX, Carmack still speaks (and perhaps thinks) in terms of OpenGL.

He’s explaining that glTexSubImage2D () has a lot of stuff happening in the background that can make it painfully inefficient in some circumstances. He’s explaining this to illustrate how the PC can still struggle, even though it’s ten times as powerful as a console. On the console side, the program has direct access to video memory. On the PC side, you have to navigate everything through the ever-changing landscape of graphics drivers, which might be doing all kinds of extra processing that you don’t need. It relates to the dirt road problem I discussed before.


“Intel’s current graphics hardware is getting decent”. I didn’t know this, and it’s an interesting twist. Basically, those crappy, built-in graphics cards on cheap PCs and laptops are approaching the power and functionality needed to run games properly. The problem has been that there was never much of an incentive for Intel (or other manufacturers) to make their built-in graphics hardware better. Why spend money on it? Most people won’t care, and the ones who do care will probably buy a $200 graphics card anyway. So Intel just puts the cheapest hardware it can in there.

I’m sure they’re still putting the cheapest stuff in, but things have advanced so far that even the cheapest graphics hardware is good enough to nearly keep up with the current-gen consoles. If this trend continues and we don’t suddenly get another console generation, then we’ll see an increase in the number of PCs that can run games without needing a separate graphics card. I doubt it will ever be enough to make the PC a true market rival to the consoles, but in the long run we might get a few more ports, and PC ports might be less horrible. In an ideal world, graphics cards will be for people who want to run the game at max settings on a ginormous monitor, and people content with medium settings won’t need one.


“A thousand characters should be enough”. Sigh. C++ takes a lot of flak for this, because this is a really common problem in the language. A programmer needs to reserve some memory to hold something. A person’s name, a directory name, a list of available fonts, or whatever. You have no idea how big this data will be. The sloppy solution is to just imagine what you think will be the biggest number you’ll need, multiply by two, and use that. If I need to store the name of a place, I might think 40 characters is enough, so I’ll reserve 80 bytes “just in case”. Then someone from Taumata­whakatangihanga­koauau­o­tamatea­turi­pukakapiki­maunga­horo­nuku­pokai­whenua­kitanatahu shows up, the program tries to store the 97-character name in the 80-character slot, and Things Go Wrong. (If you’re lucky, you just crash.)

The forward-thinking (but still lazy) programmer might avert this by reserving TEN times the space he thinks will be needed, but that will eventually lead to a lot of wasted memory, and won’t really solve the problem in a guaranteed-safe way.
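For the non-programmers, here’s what the two approaches look like in C. The names are made up; the point is that the safe version measures the data before reserving space for it, which is what string classes do for you behind the scenes.

```c
#include <stdlib.h>
#include <string.h>

/* The sloppy approach: guess a maximum and hope. A 97-character name
   written into this 80-byte buffer is a buffer overflow -- undefined
   behavior, and if you're lucky, just a crash. */
char place_fixed[80];
/* strcpy(place_fixed, very_long_name);   <-- Things Go Wrong */

/* The careful approach: measure the data, then reserve exactly enough. */
char *store_name(const char *name)
{
    size_t len = strlen(name);
    char *buf = malloc(len + 1);       /* +1 for the terminating '\0' */
    if (buf)
        memcpy(buf, name, len + 1);    /* copies the '\0' too */
    return buf;                        /* caller must free() this */
}
```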

As Carmack warned elsewhere in his talk, this is the stuff of holy wars.

“This language is flawed because you can shoot yourself in the foot.”

“ANY good tool can be used wrong. It’s only a problem if you’re a bad programmer.”

Programming languages all have trade-offs. Readability. Learning curve. Performance. Portability. Consistency. Availability and usefulness of third-party libraries. The cost of maintaining code. Breadth of built-in tools. Flexibility. When people argue about which language is best, they’re usually gathered around the fault lines formed by these various trade-offs.


The tear-line problem:

Your monitor updates at a fixed interval. Your videogame does not. If the game isn’t done drawing the next frame of the game, the monitor can simply repeat the last image you were shown, and you’ll have to wait for the next monitor refresh to see the new image. The upshot is that on a 60Hz display, a frame that takes even slightly too long is shown for two refreshes, which works out to 30fps. Miss that window and a frame sits for three refreshes (20fps), then four (15fps). If the game is running at (say) an uneven 20fps, some frames take longer than others to appear, and the displayed rate will bounce between those steps. I can feel and see this when it happens, and it’s annoying. (The same effect of bouncing between 30 and 60fps is much harder for me to detect. It’s one of those things that some people can’t notice, but which drives other people crazy.)

The other solution is for the game to show the new frame as soon as it’s done, even if it’s not time for a new frame. You might have noticed this option in games, usually labeled as “disable vertical sync”. It will just slap the new image into place, ready-or-not. This leads to situations where part of the screen shows the new frame and part shows the old one, like so:

Vertical tearing.  (Simulated.)

I’ll add that 60fps is really, really hard to maintain in a complex game with lots of things going on. You need to have your threads and your scheduling working just perfectly. 30fps is many, many times easier to pull off. I’m anxious to play Rage, just to see how much I can feel the difference. It’s been years since I played a new game that ran at that speed.

We’ll wrap up this series tomorrow.


From The Archives:

65 thoughts on “John Carmack 2011 Keynote Annotated: Part 2 of 3”

  1. Alan says:

    I have to say that I looked at the time for that video and thought that perhaps I would just watch a few minutes.

    An hour and a half later, I have a better understanding of all the stuff which goes on in a game.


    Also, thanks for posting these clarifications and updates, they really help explain some of the issues.

    1. Fists says:

      I did the same, don’t think I’ve ever played an id game and definitely never been a huge fan but that was interesting and I want to play Rage now.

      Also the way he talked about zenimax/Bethesda makes it easier to forgive them for being rude to Mojang

    2. Eärlindor says:

      Same as Alan.

      This was a good talk to listen to. And I appreciate the annotations, they really help make it more comprehensible.

  2. Simplex says:

    It's been years since I played a new game that ran at that speed.

    This may be considered nitpicking, but when I read “at that speed” I imagined the game will be ‘faster’, while in fact it will be more ‘fluid’ (sorry if I used wrong word here, I am not a native English speaker).
    Also, I totally see the difference between 60 and 30 fps, especially if it fluctuates wildly. Fluctuating 30-60fps is for me worse than constant 30fps – it’s less fluid but it’s consistent so you get used to it. But when you have silky smooth 60fps one second, and then not-so-silky 30fps the other second, than the difference is jarring. If rage will actually run in 60fps on consoles and have beautiful graphics, this will be a remarkable achievement.

    Blu-ray drives are slower (latency) than DVD's? I did not know this. That makes the earlier problems even more pronounced. It's another case of the PS3 hardware limitations exacerbating each other.

All PS3s had hard drives, so I assume that Rage will in fact have a large mandatory install – the larger, the better, because it will make the BR latency issue moot.

    As far as I am concerned, Carmack can dump the whole Rage on my PS3 HDD, storage is so cheap nowadays, that noone should complain – especially that you can slap any 2’5” HDD into ps3, not just Sony approved 3x more expensive “official PS3 hdd” – I am looking at you, Microsoft.

    1. Jonn says:

      Seeing the difference in frame rates is a funnily subjective thing. For some people, a fixed 30 FPS is smooth, and more just doesn’t make a difference.
      And then there are others like me, who can see the difference between 100 FPS and 120. Fortunately 60 FPS is smooth enough for me, even 30 FPS is ok with more frequent breaks – it just makes it surreal to read so many people saying “120 Hz monitor is pointless, human eye can’t see more than 10 updates per second anyway”

      1. Raygereio says:

        I don’t know; as you said it’s subjective. But as far as I’m concerned when you get to high enough numbers, then I remain convinced that a lot of it is more about what you expect to see/hear instead of what you’re actually seeing/hearing.

        It’s a similar thing to how some people claim that they can hear the difference between a mp3 with a bitrate of 256Kbps and one with 320Kbps, which in my experience is bullshit.

        1. Jonn says:

          There’s also the technical side of things, which in this case is basically hardware. Not all monitors are created equal, after all, and in store demonstrations can show 2 monitors with the same output, yet one has a smoother image than the other. This isn’t image processing at work, its just down to quality components giving quality results (which are admittedly easily overlooked or ignored).

          Same applies to bitrate: given midrange speakers operating at moderate volume, a moderate bitrate is all you need. With high fidelity speakers cranked up loud, the difference is (generally) very obvious, though.

          In fact its a good parallel: if you haven’t tried top-range audio equipment, there is simply no reason to think there is any real difference. Every person I’ve known to hold this outlook, given a chance to try high quality speakers or headphones, has said there was a notably difference. Not all would pay the premium, but 20+ people have agreed there is an improvement (all had interest in a form of audio where there is a point to high fidelity; if you just want thumping bass and that’s it, there’s no real need to bother here).

          It’s similar with 120Hz monitors, when you see them yourself its likely you can spot the difference while you look for it. Many people simply don’t get enough benefit to be worth the investment.

        2. TSHolden says:

          I have a buddy that can do that; we once tested him for the better part of an hour, and he could tell us without error if it was 320, 256, and for fun we mixed in some Youtube and he always caught it.

          I can’t tell the difference between sound qualities to save my life. I can, however, distinguish between the upper reaches of different framerates, and I can spot poor aliasing at 30 paces. It’s what you’re focused on, and I think a lot of it is self-reinforcing: once you start noticing framerate / audio quality / acting ability / whatever, you quickly (unintentionally) train yourself to notice it everywhere, with increasing intensity.

          1. Jonn says:

            There is a good side to that self-training, if you are aware of it, and that is building tolerance as you build sensitivity.
            If a game has a framerate that is both stable and suitable, even if it has been capped at 20 FPS, I can adjust to it (with plenty of eye-stretching breaks).
            At the same time, I can estimate rather accurately high framerates, as in 80+.

Which leads to a factor that I’m not sure how to describe so everyone understands what I mean, and it goes back to what I said above about ‘suitable framerate’.

            Some games can be fluid and lovely to watch at only 60 FPS, while others remain jerky and clunky even at 120 FPS.
            Avoiding overly technical stuff, lets compare it to old film: if you have a smooth and even progression from frame to frame, and your film is intended for 24 FPS, you can boost that up past 100 FPS with some basic (by modern standards) technology, and have it look fantastic.

            In comparison, with the same 24 FPS film concept, imagine the frame to frame progression isn’t totally smooth – for an example, lets say there is a ‘count’ of 5 for smooth progression. Instead of 5, 10, 15, 20 and so on you have 6, 9, 14, 23 and so on in semi-random fashion.
            Now when you try to increase the framerate, it looks jerky, even at 100+ FPS.

            That isn’t meant to be an accurate technical description, just an analogy and nothing more. Hopefully the intent is understandable.

  3. Boison says:

    Shamus, I can see why this guy is your hero.

    It was uplifting to hear his comments on integrated graphics, since I’m a laptop user.

    1. MaxDZ8 says:

      I am almost convinced Intel is starting to consider graphics seriously. I just wish they could stop the CPU “onloading” BS.

  4. Lalufu says:

    About that 60/30/15 fps thing. While this is technically true the reality is a bit more complex. Just because you miss a frame update slot for 60fps (and have to show the same frame twice) does not mean you have to fall down to 30fps for the next second. If all the next frames can be delivered fast enough you’ll be able to show 59 frames in one second, which is not 60, true, but it’s definitely not 30, either.

    1. Zak McKracken says:

      yep, but in the moment where that single frame is missing, you’ll see the same image for 1/30th of a second. The problem here is that duration for a single frame is not 1/60th, going down to 1/59th. It’s either 1/60th or 1/30th.

      Also, the next step down isn’t 15 fps but 20 fps (every third of the 60 frames)

      And: I’m not sure whether I understood that right, but I think Carmack said it was useful going down to constant 30 fps for some time if you realize you can’t manage the 60. That might actually be better than losing single frames.
      The problem with the tear-line I understood like this. If you have one now, and you go back to regular 60 frames, it will stay in place, except if you slow down rendering a bit so it wanders all the way through the frame. Nvidia and AMD have agreed to work on something that will let the game go slightly faster than 60 fps, so the tear-line can be made to go back to the upper screen border.
      … did I get that right?

      1. Jonn says:

        Obligatory not-an-expert line first, and now the answer: he talked about a solution in-game to tearing, where it will lock the refresh while over 60 FPS, and unlock it if you drop below (he mentions shortly before that point that locked rate with insufficient frames to fill leads to nasty stuttering).

        AMD and nVidia are agreeing to make a driver update to allow that behavior of ‘lock if stable, unlock if not’.

        Beyond that, my understanding is that tears due to variable but consistently inadequate frame rate will ‘slide’ the tear up and down the screen based on the lack of frames; once your frame rate climbs over the locked rate (60 in this case) it fixes itself.

  5. Rem says:

    I actually remember a lot of bloggers getting a fair share of flak for talking down the PS3 as something similar to “the modern Sega Jaguar”.

    On the note of Intel graphics, I assumed he was talking about the Sandy Bridge and Ivy Bridge platforms. Apparently it’s Intel’s answer to AMD’s APU Fusion platform. I know squat about the technology at the moment, but apparently it’s a decent contender to AMD’s A8-series APUs for game performance. That being said, neither platform should replace a console or a good discrete card solution unless you like 25-30 fps with every setting set to low.

    Still, less chips on the mainboard means less space to mount them and, in turn, better performance on smaller form factors. That’s always a plus.

    1. Zak McKracken says:

Yeah, I think an integrated chipset will always be that much slower than a separate card, just as laptops will always be slower than stationary computers.
      That combined with the realisation that nothing can ever be fast enough makes Carmack’s commentary sound weird.
      After all the Atari 2600 was fast enough for games back then, and these days your phone is several orders of magnitude faster, so …
      The only way I can make sense of this is that in the context of portability and power consumption (and the increased importance those have gained lately), fewer people will be ready to spend 300+ bucks on a separate graphics card that needs 200+ Watts of power, makes lots of noise and is only ever needed for AAA games that came out after it was purchased.

      1. Jonn says:

        Presumably it has more to do with standardization – the on-board GPU (or whatever it ends up as) tends to be the same across motherboard families, so if it is as good as the current console, that can be the new ‘moderate standard’.

        Think of it as, consoles have a set output resolution to televisions; for on-board, you set a standard monitor resolution and a standard refresh rate to cater for. The people who want more / larger / faster monitors pay for the cards to handle the extra.

      2. Jamey says:

        I will point out that an integrated GPU that has a wider memory bus than say PCIx16 (or whatever a separate video card is plugged in to) could in theory be faster. But then we’re getting into the upgradeability (being nil) problem, which in effect turns your computer into a console at that point. So I’m all for expansion slots. But I would love it if I could buy a motherboard with a built-in game-capable GPU that didn’t have to communicate across the PCI bus.

        1. WJS says:

          Uh, what? PCIe bus speed is faster than disk access. What the hell are you running where that is a bottleneck?

    2. Rem says:

      On a side note: Atari*


  6. blue_painted says:

    For “flack” I think you mean “flak”

    Just a pet hate of mine!

  7. Bodyless says:

    Personally, i never noticed a tear line and i am always turning vertical sync off. Granted, i dont care much about graphics and have been sticking to 1024×768 for almost 10 years now (mostly because windows stays readable). But without screenshots, tear lines would be an urban legend to me.

    1. Simplex says:

      Some games tear less, some more. some tear at the top, where it is hardly visible, some tear in the middle, where it is very visible (to me at least).

      1. Jonn says:

        That picture above is either exaggerated, or a near-worst-case example.
        In normal circumstances, the difference in frames is smaller – if you are changing your view-point quickly when it tears, you can see it, but if you have a near-stationary view, you filter it out subconsciously.

        EDIT: above is based on games run in situations where frame-rate drops are temporary; with consistently low FPS it will most likely be more pronounced.

    2. Kdansky says:

      First off, if you run games at such low resolution and detail settings, then you will probably never even have a tear, because you get a solid 60 fps anyway.

Set your settings to very high, go to a place where you have 20ish, and then turn around. It can be very pronounced.

      I for one can live with the jarring 30fps-60fps jumps (I cannot seem to notice them), but I register vertical tearing easily. I’ve set my drivers to force VSync in all circumstances.

      1. Simplex says:

        I see both tearing and fps fluctuations, so I am forcing vsync and sometimes sacrifice resolution, (lack of) FSAA for stable 60fps.

  8. Simon Buchan says:

    “It says a lot that even after these years of working with DirectX, Carmack still speaks (and perhaps thinks) in terms of OpenGL. ” – Though Carmack is surely familiar with DirectX, Rage is near certainly still OpenGL (He mentioned about half a year ago that if driver support for OpenGL was (still) sucky he might port the relatively tiny section of the code to DirectX)

    “C++ takes a lot of flack for this” – technically that would be C. Anybody still using ‘char extensions[1000]; strcpy(extensions, glGetExtensions());’ over ‘std::string extensions = glGetExtensions();’ is just a masochist.

    1. Wtrmute says:

      Technically, it should be OpenGL’s fault. Why can’t you just call glGetExtensions() and just receive a const char * of whichever length OpenGL needs and let it own the string is beyond me. It’s not like you have to write into that string to add an extension, anyway.

      1. Blake says:

        I’m assuming you mean have it return a new c string for your application to free() later?
        Then OpenGL would be responsible for allocating new memory and doing a memcpy every time you called that function.
        I’d much prefer allocate that myself using my own memory management or just on the stack depending on the uses.

        If you needed to store the string for some reason (unlikely but possible) then then you could always just count how much you need and do it yourself:
        const unsigned char* extensions = glGetString(GL_EXTENSIONS);
int length = strlen((const char *)extensions);
        // malloc and copy til you’re good and happy.

        But honestly, when would you ever want to save this string?
        You’d be far more likely just to get it at the start of the function, parse it so you have your options set in your data structures then never look at it again. No need to malloc() or free() anything.

  9. Re: C++ memory allocation. See, Shamus, that’s why you have a string class in the STL. And please please please don’t tell me it’s slow. First, that’s most likely a myth. Second, how do you think it is in Java? Or Python? Fiddling with strings is complicated. But C++ does give you the tools.

    1. Kdansky says:

      The example used by Shamus was a String, but the real thing Carmack is talking about is an internal video buffer. You should not use stl for internal memory buffers which are copied to the graphics card. ;)

    2. Blake says:

      It’s not its speed, it’s about memory management. Important stuff on consoles, super important stuff in VRAM.

      Having said that though do STL strings store hashes for quick comparisons and such? I know the string library our lead wrote only has each string internalised, ref counted, and stored in memory in one place, and the string objects you pass around only hold a single pointer meaning they each have a tiny footprint, are super quick to copy (just adding 1 to an int), have super quick comparisons (pointer == pointer), and are only slow when constructing a new one from a c-string.

      STL strings rely on your libraries implementation and I’d imagine in almost all cases store their own buffer meaning copying or comparing a string would actually require looking through the whole string.
      Fine for some uses, awful for others (like implementing a scripted language such as lua).

      1. Simon Buchan says:

        Most implementations of std::string directly contain values shorter than 16 chars (no pointer to heap memory), and always copy longer values. It turns out that all the fancy string implementations of hashing, interning, slicing and ref-counting are slower than not in most C++ usage patterns, especially in a threaded context.

        1. Mephane says:

          That, and most string copying can be entirely avoided when you pass string as const std::string&. And strings returned by value will most often be written directly into the target object in an optimized build; C++11 even provides move-semantics, which guarantees that if a function returns an std::string, std::vector etc, only a few pointers are being copied around, not the actual data.

          And as a side note: A lot of STL classes totally dissolve in a fully optimized build and nothing but two pointers or the like remains. The smart pointers (especially in C++11) are known to have the same performance than bare pointers in an optimized build, they just automate the compiler(!) to perform a delete operation then the pointer runs out of scope.

          I find it sad that there is still a lot of prejudice against STL and std::string.

  10. ccesarano says:

    After the announcement of the Xbox 360 and PS3 I found myself joining the site for various reasons. It was an interesting time, but one of the biggest discussion points was on how the PS3 was actually a worse development environment. I was a big hater of the PS3 at the time.

    Fast-forward so many years later, and I’ve been playing games on my PS3 more than my 360 lately (heck, I don’t remember when I last turned my 360 on). Of course, that’s because my PS3 is new, and most of my local friends own a PS3 so if I want to lend games out I have to get it on that platform. But ultimately platform preference ends up having nothing to do with power.

    It is interesting to see the stark contrast between the two systems, though. The Playstation Network doesn’t have much in the way of Indie titles, and ultimately (for me at least) is a better location to buy old PS1 games. The Xbox 360 is loaded with smaller games, be it on Indie Games (Cthulu Saves the World) or Xbox Live Arcade (Twisted Pixel games like Comic Jumper and Ms. ‘Splosion Man).

    Nonetheless, when it comes to the big AAA hitters, it’s very interesting to see that none of them want platform exclusivity. It is somehow more profitable to develop for PC, Xbox and PS3, despite the PS3 having such a different architecture and framework, than it is to develop exclusively on Playstation or exclusively on Xbox/Windows. Only first party studios remain exclusive, or lower budget companies choose a side. And recently, Sony’s had the better big budget AAA experiences.

    I think this is one of the reasons I don’t care much about the Wii U’s graphical prowess being outdated within a few years of release. I just want to play interesting games at this point. With that in mind, I actually wonder if the world would be better off if Microsoft, Sony and Nintendo all developed some unified framework to make cross-platform-development easier and cheaper.

    But that’s right up there with “when pigs fly”.

    1. Kdansky says:

      The thing with “old PS1 games” (which also counts for N64 re-releases and so on):

      I have played them already. I don’t need a re-release. Give me something new instead. Which is where the Xbox 360 shines. XBLA offers a ton of new stuff, while the Wii and PSN both just wallow in their classics. I have a Wii, which I use rarely, and a PC, which I play a ton of games on.

      1. John Lopez says:

        This is one of those personal preference things. My primary console is a 360 because of XBIG and Arcade, but one person’s “wallow in classics” is another person’s chance to catch up on games they missed (i.e., they *didn’t* play already).

        Thus this personal preference will be driven by how complete one’s experience with the older generation systems was. My son couldn’t care less about “XBIG games that aren’t as good as free Kongregate Flash games” and “expensive crappy arcade games”. Why? Because he is young enough that he didn’t play all those classics, and thus likes the archival games’ availability.

        I don’t think he or I are *wrong*… just coming at the same libraries from different perspectives.

    2. Jeff R. says:

      I presume this (the fact that the big games tend to be XBOX/PC/PS3) is because the major studios, at this point, have accumulated compilers and libraries that hide the differences between the consoles from most of their developers (apart from a roving team working to squeeze a bit more performance from whatever their currently least-performing platform, possibly.)

  11. Alex says:

    Shamus, you’re brilliant but you’re being too easy on Sony. They really are that stupid. Remember when they said this:

    “We don’t provide the ‘easy to program for’ console that (developers) want, because ‘easy to program for’ means that anybody will be able to take advantage of pretty much what the hardware can do, so then the question is, what do you do for the rest of the nine-and-a-half years?
    So it’s a kind of–I wouldn’t say a double-edged sword–but it’s hard to program for,” Hirai continued, “and a lot of people see the negatives of it, but if you flip that around, it means the hardware has a lot more to offer.”*

    They thought that making a platform that was conducive to development was a risky and irresponsible move because, “hey, what would happen if just ANYONE could design games for these things?”
    That’s mind-rendingly stupid.

    Your idea that they were trying to wall good devs into developing only for PS3 makes sense. However, they should have seen that cross-platform games were in vogue and that (development) convenience is more popular than (hardware) extravagance.

    Great commentary, though. Proceed.


    1. Kdansky says:

      I still think that he has been misunderstood. What he wanted to say was:

      You can either have good performance and be easy to program for, or you can have better performance but be hard to program for. Obviously being hard will actually look like you have worse performance until people have learned how to do it properly, but that is what we chose to do.

      This would also explain the “what do you do for 9.5 years”-part. Because if you are easy to program for, you’ve got a slight edge over the 360 at release (due to more expensive hardware), and then that’s it. But if you put very sophisticated circuitry in there, your edge over the 360 will get wider the better people get at writing for the very exploitable hardware. A complex architecture with six forklifts can offer way more clever tricks to exploit than a simple one, yet doesn’t cost all that much more.

      1. Peter H. Coffin says:

        This is why I’ve long contended that the last consumer computing device that was fully “wrung out” in terms of having the very limits of the capacity of the machine routinely exploited was the Apple ][+, and it took 5-6 years for that as well, with much more accessible documentation and programming tools than the PS3 has. (For comparison, the Commodore 64 came close, but everything thereafter has been far and away replaced with new hardware evolutions far more quickly than developers have really had time to more than scratch at before the next new feature set comes along.)

        1. Blake says:

          I’d say the PS2 has probably nearly reached its limit. God of War 2 on that was quite a feat, I’d say.

  12. Raygereio says:

    In an ideal world, graphics cards will be for people who want to run the game at max settings on a ginormous monitor, and people content with medium settings won't need one.

    The sad thing is that it used to be like this and then all the big advancements in graphics technology happened in rapid succession. A lot of good that did us.

    1. Jonn says:

      I hope that was sarcasm above. Mainstream lust over pointlessly realistic graphics notwithstanding, I would rather have high quality image capabilities in games. Art and aesthetics trump raw power, but the freedom to combine both approaches is worth a few technical stumbles in my book.

      I’m not one of the crazies who thinks a game has to be bleeding-edge or its hopeless, and I utterly hate hearing yet another game reviewer eviscerating a game because “the graphics are horrid, it looks like it was made two years ago.”
      Even now, I go back and replay old games – as far back as DOS era. And while I don’t want those gems to have 3d or anything, I can make a VERY long list of games that would be substantially better if they were higher resolution.

      This relatively rapid surge in technology has lead to smart developers not locking games into fixed resolutions, so when I eventually go back to them I won’t have to deal with such a huge fall-back.

      1. Raygereio says:

        No, it wasn’t sarcasm.
        A lot of the things that I see as issues in the videogame industry can be tied to how fast those advancements in graphical prowess came around. I don’t think we would have all of the same issues if those advancements hadn’t arrived so fast after each other.
        Mind you, that doesn’t mean that the advancements themselves are the issue. I like pretty pictures as much as the next guy, after all. The fault lies rather with how the industry has dealt with those advancements.

        1. Jonn says:

          Fair point, but the surge HAS done us a lot of good. Letting an indie dev throw raw power and good aesthetics at a modern GPU gives better results than forcing them to trace everything by hand, after all.

  13. noahpocalypse says:

    Irrelevant to the subject matter, but Halo’s two-weapon system does make sense. Aside from the obvious “How can he carry Rocketz, Sniperz, Machine Gun, and everything else?!?!?!” which we can totally disregard if we’re talking about gameplay, it prevents one person from having everything. In games where 1-9 holds your weapons, a rocket right below someone might not kill them. In Halo, a rocket within a few feet of someone will kill them, even if they’re at full health (shields). A Sniper headshot kills instantly. (Or, one body shot takes down their shields. After that, another body shot will kill them.)
    The point is, you can have one-hit kill weapons (which is nice, so you might not take any damage if you’re careful), but you can only carry one other weapon. So you have Rockets, and then you might want a Shotgun. That means anyone at long range can kill you easily, but in close quarters, you trump them. Alternatively, you could carry a DMR or BR (scoped rifles) and have good medium range. So someone at really long ranges or really close ranges will kill you.
    And, of course, there’s grenades and melee. The idea is to choose the right weapon for the job you intend to do. Get a good team, and have everyone pick a different range weapon, and you’ll do well. If you’re going solo… Play in a free-for-all playlist.
    It’s balanced. That’s why lots of people play it.
    EDIT: Apologies if this is flamebait. Just saying that I think Halo is a good game where you CAN go tactical.

    1. Raygereio says:

      I don’t know. I can’t really call being able to choose the appropriate weapons loadout for whatever you’re going to do “tactical”.
      Weapon loadouts would fall under logistics; while that certainly does influence tactics, logistics aren’t tactics in and of themselves.
      This was also not meant as flamebait: just saying it doesn’t fit within the term “tactical” for me.

      Also for some reason I’m suddenly reminded of one review for Alpha Protocol that – amongst other silly reasons just because they didn’t get a paycheck from Sega – blasted the poor game for allowing you to “just” have two weapons at the time. I don’t have a real point with that. Just felt like saying it.

    2. Gndwyn says:

      noahpocalypse, what you’re saying makes most sense in the context of multiplayer. I think a lot of the people who complain about 2-gun limits are talking about how it makes the single-player campaign less interesting.

      1. Adam P says:

        Halo campaign is an interesting beast. You’ll have a rocket launcher that’s full on ammo and an automatic rifle. You come across a sniper rifle. Do you drop the rockets or the automatic, or do you just pass over the sniper rifle? It’s interesting because this choice is presented to you, but you have no way of knowing which weapons are going to be useful in the next 5 minutes or even the next 30 (if the rest of the mission lasts that long). And you can’t stop, realize you made the wrong choice, and go back to pick up the weapon you dropped because a door closed behind you at some point.

        It’s interesting because there is a lot of depth to it, but that doesn’t mean it’s a good system. And every modern shooter adopted this flawed system! Bungie at least had the sense to try and put weapons that you would want in your path so that you would have an optimal weapon for an upcoming scenario. From what I’ve seen, other developers just copy the system “because Halo did it” without thinking about the why or the implications of doing it that way.

        Another interesting facet to the two-weapon limitation in campaign is in co-op. Now you and your buddy can carry four weapons total. You can specialize and be ready for any conceivable scenario, with maybe your buddy only carrying the rockets and the sniper rifle, while you carry an assault rifle and a shotgun. Or you can go halfsies on the munitions. So at least the limitation gets you to share.

        Honestly? I think the only reason Halo is limited to two weapons is because of how you switch weapons. Press Y to toggle between primary and secondary weapons. Add any more to that list and you’re going to either have to cycle through all of them until you get to the weapon you want, or some complex menu (or button combination) is going to be needed to make selection quicker. And even though the limitation sucks, having to wrestle with a menu several times a minute would be even worse.

        1. Mephane says:

          It's interesting because this choice is presented to you, but you have no way of knowing which weapons are going to be useful in the next 5 minutes or even the next 30 (if the rest of the mission lasts that long).

          This also might depend on the nature of the game. Just Cause 2, for example, has a similar weapon limit – two small guns, like pistols or SMGs, one holstered on either side, and one big gun strapped on your back – and I think in that game it worked perfectly. However, it is open-world and somewhat sandboxy in nature, so replacing that rocket launcher with a sniper rifle does not have the same implications as in a rather linear FPS. In JC2, you can always visit the black market and buy a different weapon there.*

          *Bought weapons are delivered by a helicopter, which drops a box containing the weapon of choice. You can then pick up the weapon in the box, and your previous gun in that slot will be put on the ground next to the box, where it can be picked up again as long as you don’t move too far away (at which point the game despawns it). So it is totally viable to order a sniper rifle to take out a handful of tough guys and then pick up your rocket launcher again.

          1. noahpocalypse says:

            I see what you’re saying, and you have a point. I’m just saying that the nature of the weapons in Halo, what with several instant-kill weapons (Sniper headshot, Sword slash, Gravity Hammer, Rockets, point-blank Shotgun), make it so you would be UNBELIEVABLY overpowered if you could carry even three weapons. (FYI, in Halo 3 you could dual-wield and have another weapon on your back. You just couldn’t holster your second single-hand weapon.) And the maps are generally designed so that you can go and do CQC with the snipers. You might need a precision weapon to kill one or two of them, but the nature of the game, what with shields allowing you to take a punch (or several), allows you to choose the way you engage the enemy.
            Take TF2 for example. (First, let it be known that my main computer is a 1.6 GHz netbook, so I can’t play many PC FPSs. So I don’t know too much about all the cool updates for TF2. But I have it for the Xbox, and what I do know is…) You can only carry certain weapons, right? You can’t pick up your fallen foes’ loadout. Because each weapon group has its own specialties. It wouldn’t be balanced if a Medic could start building teleporters into the enemy’s base after their two Heavies and Scouts (all armed with rockets) clear it out, and right after he’s done, follow and heal them. As far as multiplayer goes, this is (in a sense) limited, isn’t it? (I’ll readily admit it doesn’t detract from the fun. I enjoy a good game of TF2.)
            Halo’s actually (in the same sense) less limiting because once you kill a Sniper or someone with a Shotgun you can take those weapons. (It also lets you do cool gametypes like Sniper Shottys or Swords-only, or Grifball with Hammers and Swords.) (Not out to prove Halo’s superiority, that last part just popped into my head.)
            Many FPSs for the 360 _force_ you to play the game a certain way. You can’t snipe in the dark hallways of a prison very well in those games. You might be shooting at enemies far away from the playable areas of the map. In Halo, with the fixed reticle, you can snipe indoors. (If you practice.) In Halo, you might face a few enemies you can’t melee, but they’re never very powerful. One headshot with a semiauto or burst-fire weapon is an instant kill. (So Halo, within the parameters of its sci-fi, is technically more realistic than CoD or something where you’re shot again and again.) It offers more freedom (not in landscape size, but in how you play) than any other FPS I know of for the 360 or PS3. (Fallout excluded; that’s more RPG than shooter, even though the weapons have great variety. It can be a melee-fest, if you want. But so can Halo. =D)

            Point is, Halo is unique because you play how you want to play. Picking one method may limit another temporarily, but that’s just for game balance.
            (How did I do? I hope I don’t sound overly aggressive, just… Passionate. =P)

  14. Vect says:

    I still feel quite ashamed that most of the stuff he says flies way over my head. I’m not exactly a programmer, and the few times I tried my hand at it I quickly learned that I simply do not have the patience for it. I simply cannot code worth a damn, and I usually have no interest in the highly technical aspects of programming a game.

    Great, now I feel like the target audience of Mass Effect according to EA.

    1. Raygereio says:

      Don’t feel bad. I have something resembling a basic understanding of the subjects Carmack’s talking about and most of the speech is also beyond me.
      That doesn’t mean we’re stupid; we just happen to not know that much about those topics.

    2. Irridium says:

      Don’t worry. Not understanding Carmack when he talks about code is the same as not understanding Stephen Hawking when he’s talking about science-stuff. You’re not stupid; they’re just worlds smarter than you and me.

      Hm… that sounded worse than I intended… Point is, they’re super-duper smart, we’re normal.

      1. Ragnar says:

        It’s not a shame to not understand what they talk about. What is a shame is if you just give up trying to understand at the first sign of something that might be more complex than you currently understand. Not directed at anybody specifically here, but I get quite tired of people just exclaiming “I don’t understand” instead of actually trying to understand.

    3. Aufero says:

      I used to think of myself as a programmer, but since I last coded for a living in 1992 I’m in pretty much the same boat. Both that talk and the commentary are fascinating and informative, but I suspect it would take me a year to catch up enough to understand everything Carmack had to say on the subject. (And he’s just covering what, to him, are the basics.)

  15. TheRocketeer says:

    I know this sounds cynical, but seeing this really makes me think that the famously-negative tone of what I’ll call ‘fan discourse’ is never going to improve.

    In order to have an intelligent critique of a game and why it is the way that it is, the kind of nested problems, domino effects, and blasphemous techno-sorcery that John discusses in his keynote would have to be understood and acknowledged by anyone wishing to involve themselves in the discussion in some meaningful way.

    But even the most rudimentary concepts of modern games craftsmanship are far beyond the grasp of any layman. It seems like even engineers and programmers find difficulty in wrapping their heads around it all sometimes. And that threshold is rising higher and higher with every new AAA-title that comes out.

    And that’s just the technological aspect- just one facet of many, and every one of them indispensable to games criticism. When you dislike a game, you can either use what limited technical and critical knowledge you possess of the game, its development and the concepts behind them to parse where the programming difficulties were, where the design could have been stronger, where the visual and sound direction fell flat, where the thematic strengths and weaknesses spoke to you or disenchanted you… or you could say that the game sucks, its designers are greedy whores, and anyone that likes it is a fag.

    All media have this problem, of course. But the sheer complexity of games as a medium and the production thereof mean that it is magnified many fold. Literature isn’t getting more complex. Cinema isn’t getting more complex. But gaming started out an arcane trade, and has become steadily more and more incomprehensible to its consumers as the years have passed. And it’s going to keep getting worse for the foreseeable future.

    The quality of an art’s criticism and dialogue is founded upon a comprehensive understanding and appreciation of that art. Such an understanding of games is already justifiably rare, and the maturing of the medium is dependent upon a steady development of its artistic and scientific components that will only broaden this gulf. Maybe it sounds cynical to say it. But maybe that’s just the way it is.

    Of course, I’d be overjoyed if the future proved me wrong.

  16. Chris says:

    “‘A thousand characters should be enough.’ Sigh. C++ takes a lot of flack for this, because this is a really common problem in the language.”

    I think you mean, C should be taking a lot of flack for this. If OpenGL gave you a std::string or similar this wouldn’t have been a problem. Even if Carmack had been using C++ and just wrapped his call to glGetString in std::string it wouldn’t have been a problem. If he wanted to stick to C (which he obviously would, because he is horribly misguided) he could have used bstrings from The Better String Library. Alternatively, glGetString could have taken a char* and length parameter and filled out the string for you, returning a failure code if you didn’t provide enough space. It was a very, very avoidable mistake. Seriously, don’t use raw strings, there is *no* excuse for not using bstring or std::string.

    1. Shamus says:

      “Seriously, don't use raw strings, there is *no* excuse for not using bstring or std::string.”

      Sigh. And so the absolutist arrives. Like the man said, this is the subject of holy wars.

      Did bstring exist in 1995? Was it available for use in commercial code? Was it mature, stable, and widely known enough that it would be reasonable to expect him to know about it, and be able to drop it into commercial software and expect it to work?

      “Everyone” uses C++ now, but the migration was still underway in the 90’s. There’s a migration cost associated with the move, and the benefits weren’t as clear then. The costs are more than just converting your old code and adding type checking. There’s a finite number of hours of work, and properly exploring C++ to take advantage of the language would have subtracted time from one of the other projects.

      I don’t object to the assertion that he made a serious mistake. (He said as much himself.) But the idea that “there is *no* excuse” for not doing things in 1995 the way they are done in 2011 is just silly.

      1. Simon Buchan says:

        Quake was C. Quake 3 was the first C++ id game, so “C should be taking a lot of flack for this” is perfectly accurate. C++ didn’t exist as a real commercial language at the time: compilers were terrible, for one thing, and the majority of communal knowledge about how to use C++ in order to get its benefits without shooting yourself has only arisen in the last 5-10 years – Carmack was the exact opposite of “horribly misguided” for the mid-’90s. However, I think it was in the Q&A for this talk that he mentioned he wished someone had shown him a C++ book that talked about how it was intended to be used, rather than him judging it by looking at the code written at the time – presumably he means patterns like RAII and smart pointers.

      2. Chris says:

        Obviously he couldn’t have used the STL if it didn’t exist or didn’t have wide compiler support. And I am not objecting to him using C in those days, but I’m betting the problematic code looks like this:

            char szExtensions[1024];
            strcpy(szExtensions, (const char*)glGetString(GL_EXTENSIONS));

        At that point he could have used something like this:

            struct String {
                char*  szValue;
                size_t len;
            };
            void StringAssign(struct String* pString, const char* szFrom); /* if len isn't big enough: free() szValue, malloc a new szValue, strcpy, etc. */
            void StringFree(struct String* pString);                       /* free() szValue */

        I assume bstring does something along these lines underneath.

  17. thebigJ_A says:

    Turns out it’ll take 22GB to install to the 360 hard drive. And here I’ve been existing with the original 20GB HD all these years.

    See, with this and Skyrim, I’m finally thinking about upgrading the thing, but the drives are still stupid expensive! Yeah, there’s cheaper third-party ones, but who knows how good they are. They could have a slower read speed for all I know, and I do know they don’t play original xbox games (not really a problem for me, the only orig. xbox game I still own is Morrowind, and that’s best played on PC).

    Then again, I am also debating getting a proper PC that can play these games, but that’s even MORE expensive…

    Man, I just don’t know what to do.

    1. Simon Buchan says:

      Then you’ll be happy to know Carmack recently announced on twitter that they figured out installing one disk at a time!

  18. Winter says:

    What makes me sad is that, although 60 fps is pretty serious, it’s actually not that high a number. My big CRT monitor was able to hit 115 Hz, which is almost twice the refresh rate of modern LCDs.

    (Of course, LCDs can actually hit 120 Hz pretty easily, but there are some tradeoffs and… the real problem… due to complete bullsh– er, some intellectual property issues, everyone uses HDMI, and that can’t support 120 Hz because it doesn’t have the bandwidth. Due to HDTVs being the holy grail of monitor manufacturers (I guess if you can produce an inferior product and sell it for five times as much then you’ll probably want to do that), there’s a strong push to move to and stay on HDMI.)

    Anyway, my extended aside aside, games are focusing on 30 Hz for the sake of providing prettier graphics, but this provides a much worse experience overall. Sure, it’s fine in single player, if you’re aiming at a more “cinematic” feel, but as Carmack explains, actually playing the game is a much worse experience. Running 60 Hz is, similarly, worse than 120 Hz. (Although the difference is much smaller, even though it’s still a doubling.) The idea that 60 Hz is some kind of unattainable dream makes me, as a “competitive gamer”, vaguely angry. That said, I hugely respect Carmack for pushing 60 Hz. Achieving that, while actually looking better than modern FPSes, is practically a miracle. (Yes, Rage is way better in a lot of ways. No, the textures aren’t going to be as detailed, but it has big open spaces (which a lot of other modern FPSes do not) and things like that.) Carmack is really amazing…
