My analysis and annotation of Quakecon 2013 keynote continues. As before, I want to caution you that I’m sometimes out of my depth as much as anyone. My graphics knowledge is years out of date by now, and even when it was springtime fresh I never got as close to the hardware as Carmack does. I’ll probably make small errors or omissions in my notes.
Times are approximate.
11:30 “I think the Kinect has some fundamental limitations.”
When he mentions “latency issues” on the Kinect he’s talking about the time between the user taking some action and the point where the game knows about it. The Kinect has to analyze incoming images, identify the human being(s), extrapolate what position their body is in, and compare that to the last several frames to understand how the user is moving. That sort of image processing takes time. The Kinect 2.0 supposedly will have a 60ms latency, while the original Kinect had a latency of 90ms. For reference, I think your typical bargain wireless mouse has a latency of about 20ms or so, meaning that even the new and improved Kinect is still three times slower than the slowest mice.
But this strikes me as being a bit beside the point. The input lag is pretty bad in a technical sense, but it’s not the real problem in my mind. The gesture itself is going to be the real slowdown. You can’t just wiggle a finger and have the Kinect understand what you want. You need to make broad, obvious body motions. Those take a long time to perform. Compare the time it takes to wave your arm over your head to the time it takes to make a little micro-movement with your fingertips on a mouse. We can haggle over the 60ms input lag of the Kinect, but the real problem is probably on the scale of 200ms to 500ms. And that’s assuming the sensor registers every gesture perfectly, which it doesn’t.
This doesn’t mean the Kinect is worthless, but it does mean you’re limited as to what sorts of games you can use it for. It’s a major contributor to the Xbox One price tag, and it’s just not all that useful as a generalized gaming peripheral. It’s like having every console ship with a Guitar Hero instrument. Nice, if you’re into it. But not everyone is into it even though everyone pays for it. (And those people might just get the $100 cheaper PS4.)
23:00 “It has a lot of the messiness of Linux […] but there’s also some of the magic of it.”
He’s talking about developing for Android phones, and how the system has Linux underneath it. As always, Linux is a double-edged sword. In this specific case, he’s talking about having the source code handy when things go wrong.
How it works is this:
There are many layers of software between the game you’re writing and the actual hardware that runs it. If you want input, you ask the operating system, the operating system asks the device driver, and the driver gives you the state of the keyboard, the mouse, the graphics card, the sound system, or whatever else you need. And when I say “operating system”, keep in mind that the OS is probably a few levels deep, all by itself.
So when something crashes or fails to work as documented, advertised, or expected, then hopefully the problem is there in your code. But on some occasions – particularly on young and fast-changing platforms like mobiles – the problem is on one of those layers below your program. If you’re on a proprietary operating system like Windows, then you’re out of luck. You can’t “see” those lower levels. They’re just blocks of machine code talking to other blocks of machine code. Maybe there’s a bug in your code. Maybe the layer below you is working as designed, but the documentation is wrong. Maybe the layer below you has a bug in it that nobody’s run into before. That’s normally extremely unlikely if you’re doing something commonplace. But if you’re doing something outlandish – like cutting-edge game development that pushes the device to its limits – then you may run into problems and situations that the designers never anticipated.
But on Linux, you’ve got the source code. When a problem happens you can “step into” someone else’s code. “Step into” here being programmer talk for when your development tools show you the exact line of code that’s being run right now, and allowing you to run the program a line at a time. If something happens in one of those lower layers, then you can see the code that goes with them and understand what’s really going on. You can see the difference between a bug on your end, bad documentation, or a bug on their end.
25:00 “You can have a four millisecond difference in when you wind up waking up the next time.”
I’ve never done mobile development so I’m a little out of my area of knowledge here, but what I assume he’s talking about is calling Sleep() or the platform equivalent. If you’re developing a videogame designed to run at 30 frames per second, then you have 33 milliseconds per frame to work with. That means 33ms to process user input, make sound effects, update the state of the game, and draw the entire scene. If you take more than 33ms then you’ll have dropped frames, which makes the game feel stutter-y and uneven. (Usually only a concern with fast-paced games.)
But if you happen to take less than 33ms, then what do you do with the leftover time? If you finish everything and you still have 10ms left over, you don’t want to begin a new frame. This can lead to uneven framerates in the other direction, and can also needlessly devour CPU cycles drawing frames that the user will never see. (Which would also be a waste of battery life on a handheld, but I don’t know if it’s enough to matter.) So what you do is call Sleep(n), where n is the number of milliseconds you want your program to be dormant. You’re telling the operating system that your program should stop running, and that it should start it again in n milliseconds.
The problem he’s having is that sometimes the OS wakes the program up late. You tell the OS to wake you up in 3ms, and it doesn’t actually get around to resuming the game until 7ms later, making you 4ms late in starting the next frame. If you’re trying to run a game at 30fps, that’s really annoying. If you’re hardcore and want to run at 60fps, that’s downright scary. That’s like setting your alarm to go off at 6am, knowing that it could go off anywhere between 6am and noon.
A few seconds later Carmack mentions that this problem is probably not going to be solved by an intrepid programmer crawling down into the guts of the operating system and finding out why the OS is so sloppy about this. The problem will likely be solved by hardware improvements that just absorb the inefficiencies causing this.
29:00 “We are fundamentally creativity-bound.”
To be clear, by “creativity bound” I’m sure he’s saying “we are bound by creativity” and not “we are bound FOR creativity”. The intended meaning might be missed by non-programmers because it’s kind of a programmer thing to talk in terms of being “CPU bound”, “pipeline bound”, or “throughput bound” when describing which part of a system is limiting the performance of the whole.
Here he’s saying that further visual improvements will be driven more by what artists can do than by how many polygons we can draw. I also want to point out that I said the same thing before, and it feels pretty good to have my assertion supported by Carmack.