C++ is the main language of game development. This is changing slowly as indies embrace other languages, but in the AAA space C++ is still overwhelmingly dominant. C++ is descended from – and is very similar to – the language C. First created in 1972, C is just one year younger than I am. It was devised for the world of the 1970s. It was targeted at the hardware of the 1970s, and was originally intended for writing operating systems.
This seems crazy, doesn’t it? Writing operating systems for Nixon-era mainframes is so vastly different from building AAA games in 2019 that it’s like we’re using coal-fired steam engines to go to the moon. Sure, the steam engine has been modernized a bit, but there are still conventions built into the language that don’t make a lot of sense in the world of 2019. The fact remains that somewhere underneath all those rocket engines and silver wings is a chugging steam engine.
C++ certainly has language features not available in C. C++ has classes, inheritance, operator overloading, and a bunch of other slick ways of expressing complex solutions in code. Those are nice, but none of those things uniquely address challenges faced in games programming. We could, in an alternate universe, use a different sort of language with a different set of features.
It’s not like this industry is incapable of evolution! Studios have changed game engines, and game engines have changed what graphics API they favor. (On the PC side, this boils down to DirectX vs. OpenGL, with third-party candidate Vulkan landing a few recent wins.) Our tools are different, the target hardware is different, the operating systems are different, and the performance challenges have changed numerous times. Rendering technology has gone through at least two major revolutions. First there was the jump from software rendering to using dedicated graphics hardware, and then another jump when we added the ability to program that graphics hardware using shaders. Over the last 30 years we’ve changed every single thing about game development except the language!
So Why Are We Still Using This Language?
Some people claim that we’re stuck with this language due to industry inertia. We use this language because we’ve always used this language. We use it because that’s what the programmers study, and programmers study it because that’s what everyone uses. Four years ago I made the case that the dominance of C++ is due to the following factors:
- Lots of libraries. There are tons and tons of C++ toolkits, libraries, and code snippets floating around in the wild. Do you need a sound library? Font loading? Access to rendering hardware? Support for gaming controllers? You don’t need to write that code yourself, because there’s a really good chance that someone else has already solved the problem and provided the source code for free. Of course, adding their code to your project is often a lot harder than it ought to be, but spending six hours pulling out your hair playing “dependency scavenger hunt” is faster than writing everything from scratch, even if it is a dumb miserable way to spend an evening.
- Lots of programmers. Since C++ is the big important language, everyone learns it. Which makes it easy to hire people to work on your project.
- Lots of help. Yes, answers to forum questions often take the form of abusive condescension and nerd peacocking, but at least a C++ programmer can get their question answered after their tag-team humiliation. If you’re using one of the more obscure languages, then you might not get any answer at all. (You’ll still get mocked, though. Mostly by jackasses asking, “Why didn’t you use C?”)
- No dominant alternative. It would be one thing if there was another language out there to play Pepsi to C++’s Coke, or Apple to the C++ Windows. But there’s no clear contender. Java is good for some tasks, Python is good for others, but none of the challengers works as a broad general-purpose language. And that’s fine. There’s a lot of value in specialization. But that focus helps drive the C++ feedback loop of ubiquity.
I’m still confident that’s all true, but after four years I’d like to argue with my past self and suggest that this industry inertia can’t be the full reason for why C++ is so deeply entrenched.
The Lie of Simplicity
PC hardware is usually presented as a processor and a pile of memory. When a program is run, the processor makes changes to the contents of the memory, and you get some sort of output. On this site, I describe programs this way all the time. Sadly, this is a gross over-simplification. To understand why C++ is still dominant, we need to look at how the hardware is really constructed and what it’s really doing.
What’s actually going on inside of that humming box is that you’ve got a whole bunch of processors all bundled together in a single CPU housing. We call these separate processors “cores”. Those cores don’t make changes to memory directly. Instead, blocks of memory must be copied to a smaller pool of memory called the L2 cache. From there it’s copied to an even smaller pool of memory called the L1 cache. This L1 cache is actually inside that CPU housing with the cores. This is the only memory that the cores can manipulate directly. If the processor makes some changes to memory, then the altered block is copied back out through the layers and is stored in main memory.
Let’s say you’re a core. The L1 cache is your tiny workbench right in front of you. You can examine bits of memory on the bench. You can compare them, perform arithmetic, and you can make changes to the contents of the memory. Sometimes you need a chunk of memory that you don’t see in front of you. When this happens, then hopefully what you need is stored in the little shed just outside, which is your L2 cache. If the required item isn’t stored there, then you need to jump in your forklift and trundle all the way to the other side of the campus. You need to drive all the way to the particular warehouse that has the stuff you need. Go inside, find the pallet that holds the data you’re looking for, and drive it all the way back to your shed. Then take the items you need off the pallet and put them on your workbench so you can get back to work.
This is just the tip of the iceberg. There’s also some parallelism that you can take advantage of if you understand the hardware. A single core can handle multiple operations at the same time, provided you structure your operations properly. If you have the variables A and B and you need to modify them independently of each other, then it’s much faster to order your code so that you change A, then B, then A again, then B again. If you simply modify A twice followed by B twice, then you’ll miss out on some of the potential performance gains.
Disclaimer: I’ve never done any real programming at this level. Above is how it’s been described to me by people with more knowledge of programming close to the metal. Even the above is a pretty big simplification of what’s going on, but we’re basically at the edge of my knowledge at this point and I hesitate to add more.
Of course, maybe the compiler will help you out and re-order your operations. Maybe it won’t. Do you know how to make sure it does the right thing? Do you know how to check? Do you know what kind of performance gains you’re chasing and if it’s worth your time?
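To make that A/B interleaving concrete, here’s a hedged sketch of my own (not taken from any real engine code): summing an array with one accumulator creates a single dependency chain where every addition waits on the one before it, while two accumulators give the core two independent chains it can keep in flight at once.

```cpp
#include <cstddef>

// Single dependency chain: each addition must wait for the previous
// result, so the core's multiple execution units sit partly idle.
double sum_serial(const double* data, std::size_t n) {
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i];
    return total;
}

// Two independent chains (A, B, A, B, ...): neither addition depends
// on the other's result, so the core can work on both at once.
double sum_interleaved(const double* data, std::size_t n) {
    double a = 0.0, b = 0.0;
    std::size_t i = 0;
    for (; i + 1 < n; i += 2) {
        a += data[i];     // chain A
        b += data[i + 1]; // chain B
    }
    if (i < n) a += data[i]; // leftover element when n is odd
    return a + b;
}
```

Both functions return the same total; whether the second is actually faster depends on the compiler and the hardware, which is exactly the “do you know how to check?” problem.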
This thing where you need to drive to the warehouse is called a “cache miss”, and it can have an immense impact on performance. If you want more detail, this article has some great information on when you’ll run into a cache miss and should be approachable to non-coders. I’ve never been able to find any hard numbers on the overall cost of a cache miss, but developer Mike Acton (formerly of Insomniac Games, now at Unity) throws around the figure of “200 processor cycles”. That’s on the PlayStation 4 hardware, but I’m willing to bet that’s in the same ballpark as the rival consoles and the PC. 200 processor cycles is crazy expensive, and means a cache miss is one of the most expensive things that can happen to your program.
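Here’s a hypothetical illustration of how a cache miss hides in plain sight (the grid size and names are mine, not from any benchmark): these two functions compute the same sum over a big 2D grid, but one walks memory in the order it’s laid out and the other strides across it.

```cpp
#include <vector>
#include <cstddef>

// A large 2D grid stored row-by-row in one contiguous block.
// Hypothetical sketch: the size is illustrative, not a measured benchmark.
constexpr std::size_t N = 1024;

// Cache-friendly: visits memory in layout order, so every cache line
// fetched from main memory is fully used before moving on.
double sum_row_major(const std::vector<double>& grid) {
    double total = 0.0;
    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            total += grid[row * N + col];
    return total;
}

// Cache-hostile: strides N doubles between consecutive reads, so on a
// big enough grid nearly every access is a "drive to the warehouse".
double sum_col_major(const std::vector<double>& grid) {
    double total = 0.0;
    for (std::size_t col = 0; col < N; ++col)
        for (std::size_t row = 0; row < N; ++row)
            total += grid[row * N + col];
    return total;
}
```

Nothing in the source marks the second function as the slow one; the difference only exists once you know how the grid is laid out and how cache lines are fetched.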
And it’s completely invisible in the code!
Can’t the Compiler do it for Me?
I’ll be honest, I don’t enjoy messing with this stuff. I like thinking about the hardware as a simplistic CPU and a magical pool of memory. Worrying about the size of the L1 cache and the fetch timing is when programming stops being fun and starts feeling like accounting. It’s hard and annoying and adds an unwelcome layer of complexity to code. I tend to think of this business with cache limits as “intrusive” and I’d rather let the compiler handle it for me. In fact, during my Good Robot series you can see me advocating a linked list without giving any thought to how every single entry is likely to trigger a cache miss. (To be fair, at the time I was being distracted by something even slower.) That’s silly, and blunders like that would get me bounced out the door of a serious AAA studio’s engine team. (Unless it was Bethesda Softworks, where they’d probably put me in charge of engine optimization for Gamebryo.)
I’m not the only one averse to thinking about the actual physical limitations of the hardware. If you poke around you’ll see coders being penny-wise and pound-foolish with processor cycles. You’ll see advice like:
- If you don’t care about precision, then you could cast this float to an int before this operation because that will be faster.
- Woah there, buddy! That arctangent operation looks mighty expensive. That square root looks pretty scary too.
- Rather than doing this comparison 6 times in a row, you should do it once and store the result in a bool. It’ll be faster!
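That last suggestion might look something like this in practice (a hypothetical sketch; the function and names are mine, not from any real codebase):

```cpp
#include <string>

// Instead of re-evaluating the same string comparison on every branch,
// do it once and cache the result in a bool. This is exactly the kind
// of penny-wise micro-optimization quoted above: cheap to write, but
// rarely where the real time goes.
int classify(const std::string& mode, int value) {
    const bool is_debug = (mode == "debug"); // compare once, reuse below
    int score = 0;
    if (is_debug) score += 1;
    if (is_debug && value > 10) score += 2;
    if (!is_debug) score += 4;
    return score;
}
```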
In the right context, all of these might be reasonable pieces of advice. But often coders will obsess over this stuff when their real performance problems are coming from their failure to manage their memory. I am tremendously guilty in this area, and I am not an outlier by any measure.
Getting Back to C++
I think stuff like this is why C++ is so dominant. Newer languages act like they want to protect you from having to think about the hardware. Don’t use pointers, they’re dangerous. Don’t worry about the layout of data in memory or how big it might be. Just trust Friend Compiler to handle it for you. Don’t worry about the cost of allocating memory or memory fragmentation.
As someone who hates worrying about the hardware, I really appreciate this. In the overwhelming majority of use cases, the programmer should not need to waste their time obsessing over minuscule little 64 kilobyte chunks of memory like it’s 1988. Computers have tons of power these days and programmer hours are not cheap. Hiding the cache behind layers of abstraction makes economic sense.
It’s great to be shielded from all of that terrifying complexity, unless it happens to be your job to worry about hardware. Most areas of the code don’t need to think about optimizing cache usage, but if you do need to worry about it, then you really, really need to worry about it. If I’ve got 5 space marine objects and 6 space bugs in the scene and those are the only active objects in the game, then I do things the easy way. But if I’ve got ten thousand particles, two hundred bullets, 2,048 map zones, five hundred physics objects, and a hundred enemies in the scene at the same time, then I really need to think about how the data is being processed. If I’ve got enormous objects in memory – like texture maps or large collections of polygons – then I need to think about how often that data is being manipulated, copied, changed, and compared. When thousands of particles are flying around the scene or I’m doing physics collisions between a lot of different objects, doing things the Right Way™ in memory can make the difference between running the game gracefully and completely tanking the framerate.
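For the ten-thousand-particle case, the Right Way™ usually means laying the data out for the work being done. Here’s a hedged sketch of the struct-of-arrays idea (the field names and update function are my own illustration, not a real engine’s):

```cpp
#include <vector>
#include <cstddef>

// Array-of-structs: each particle's fields sit together in memory, so
// updating only positions still drags colour, lifetime, etc. through
// the cache along for the ride.
struct ParticleAoS {
    float x, y, z;
    float vx, vy, vz;
    float r, g, b, a;
    float lifetime;
};

// Struct-of-arrays: each field is its own tightly packed array, so a
// position update touches only position and velocity memory.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    // colour/lifetime arrays omitted: the update below never loads them
};

void update_positions(ParticlesSoA& p, float dt) {
    const std::size_t n = p.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}
```

The AoS layout isn’t wrong; it just pays to fetch fields it never touches when all you’re updating is positions, and with ten thousand particles per frame that overhead adds up.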
I think this is why a lot of the newer languages haven’t gained much traction in the deep end of AAA gamedev. They make life easier for the 90% of the job where you’re doing straightforward things that aren’t serious performance concerns, but they leave you helpless when you come up against that last 10% of the job where you need direct control over where and how things are placed in memory.
A few of the upstart languages do have these features. In particular, D and Rust both seem to have a lot of supporters who claim the languages are just fine for high-end gamedev. Other people claim they don’t offer enough, or in the right way. I’m not nearly qualified to weigh in on that argument. I’ve read about both languages, but trying to learn a language by reading about it is like learning to drive by watching Top Gear. The learning is in the doing, and I haven’t done enough with these languages to offer any meaningful analysis.
Also, Rust has been “nearly ready” for game development for years now. I don’t know what the holdup is, but I suspect the problem isn’t that Rust is just too suitable for gamedev.
Still, the point remains that any language intended to surpass C++ in the realm of games is going to need to match or exceed C++ in its ability to optimize very small but important things.