Programming Vexations Part 9: The Problem With Engines

By Shamus Posted Thursday Nov 7, 2019

Filed under: Programming 77 comments

In the previous entry I talked about the lack of game-specific types and features in C++, and how this leads to library proliferation, compatibility problems, and a massive duplication of effort. The idea was that a language designed for games ought to contain types that are common to all games. Several people argued in the comments that you shouldn’t add these sorts of things to the language itself, but rather provide them through the standard library.

This leads into a side argument over whether or not we should consider the “standard library” to be part of the language, which is one of those questions like, “Is the bun part of the hot dog?” where everyone thinks the answer is obvious and is then horrified to discover another group of people who think the opposite answer is the obviously correct one. So then they take turns hitting each other in the face with the dictionary.

Welcome to the internet, I guess.

But since we’re here, I might as well sort this out for people who work in sensible careers rather than becoming programmers.

If you’re a programmer, then you might think of a language this way:

At the center is the language, its syntax, functionality, and the basic types it supports. Then we have the standard library. It is, essentially, a bunch of code written in that language. The standard library is specifically designed to prevent the “everyone needs to make their own” problem I described last week.

For example, the C language doesn’t have a built-in way to print strings of text to the screen. It does provide a way to output a single character at a time. You could write a bunch of code to take a string and shove it out to the console one character at a time, but since outputting strings to the console was so common in the early days, this idea was implemented in the standard library. You can see it in the classic Hello World example code:

#include <stdio.h>

int main()
{
   printf("Hello, World!");
   return 0;
}

That first line tells the compiler to bring in “stdio.h”. That’s short for “Standard Input-Output”[1]. We then use that on line 5 when we call “printf” to output a string.
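Just to make the comparison concrete, here’s roughly what the do-it-yourself version would look like, shoving the string out one character at a time. (This is a sketch; print_string is an invented name, not anything standard.)

#include <stdio.h>

/* Hand-rolled string output: walk the string and emit one
   character at a time with putchar(). */
void print_string(const char *text)
{
   while (*text != '\0')
      putchar(*text++);
}

int main()
{
   print_string("Hello, World!");
   return 0;
}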

I should point out that for a developer, the line between “core language” and “standard library” is more academic than practical. The language ships with the standard library, basically everyone uses it, and so rank and file programmers just think of it as “part of the language”.

From here the line gets blurry. Out beyond the standard library we have “common libraries”. In the C++ world this would be something like the Boost library. This stuff is a little more sophisticated and not quite as universal as the standard library, but it still covers a lot of common things, with the aim of giving the language ecosystem one good implementation of a feature rather than having a thousand half-baked incompatible ones competing for mindshare as with my VectorBox vs. DasVektor example from last time.

As you move outward into the realm of “everything else”, libraries get more specific to particular domains, and less common in their usage.

I should stress that all of this is how a common programmer would view things. From the standpoint of someone developing and maintaining the language, the world probably looks more like this:

They don’t care what libraries are common and they don’t want their beautiful language getting blamed for the shortcomings of all those terrible libraries floating around in the wild.

All of this is to agree with people in the comments who said that vectors, matrices, and other game-specific types ought to go in the standard library and not in the core language. I agree, I just didn’t think it was worth making that distinction. And now that recklessness on my part has led to this over-explanation.
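For the record, the thing whose placement we’re arguing over is pretty mundane. Here’s a bare-bones sketch of the kind of vector type that ought to exist once, somewhere standard. This is an illustration, not any particular library’s design:

// A minimal 3D vector of the sort every game ends up needing.
struct Vec3 {
   float x, y, z;

   Vec3 operator+(const Vec3 &o) const { return {x + o.x, y + o.y, z + o.z}; }
   Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
   Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }

   // Dot product: the workhorse of lighting, projection, and angle tests.
   float Dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
};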

I ended the last entry with a promise that we’d talk about the drawbacks of using a game engine to turn C++ into a language for game development. So let’s do that now…

Start () Your Engines

I should note that the term “engine” doesn’t have a formal definition. There’s no structural difference between the external libraries we talked about last week and a proper game engine. If a library is really large and is focused on a particular problem, then at some point people will start calling it an engine rather than a library.

Last week I proposed starting with raw C++ and then trying to construct a game using libraries. One for window management, another for networking, then sound, image manipulation, rendering, physics, and so on. Essentially we’re trying to build a game engine piecemeal. This is a huge pain in the ass because all those disparate libraries were made in isolation and weren’t designed to inter-operate. They’ll have redundant code and lots of annoying little incompatibilities.
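To give you a taste of that, here’s the sort of adapter code that piles up. Both library names here are invented for illustration; every real pairing of libraries needs its own version of this glue:

// Two stand-in libraries (both hypothetical) that each define
// their own incompatible vector type.
namespace physlib   { struct Vector { float x, y, z; }; }
namespace renderlib { struct Vec3   { float v[3]; };    }

// The glue: copy field by field, for every object, every frame.
renderlib::Vec3 ToRender(const physlib::Vector &p)
{
   return renderlib::Vec3{ { p.x, p.y, p.z } };
}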

Finding good libraries is hard, integrating them with your project is harder, and getting them to work together is a long and thankless task. This is how I operated on my projects over the years. I got away with it because my projects were always small in scope and not intended for commercial release. Even at that limited scale, getting all the parts working together was often a headache. You end up with this Frankenstein project that feels like it’s held together with duct tape and positive thinking.

Rather than building an engine out of spare parts like an episode of Scrapheap Challenge, why don’t we just use a game engine?

Yeah, Why DON’T We Just Use a Game Engine?


Link (YouTube)

Jon Blow isn’t a fan of using commercial game engines. You can hear him discuss the problem in this talk. As someone who does a lot of work with procedural content, I have a lot of the same concerns. I often find myself fighting against traditional game engines because they’re usually built around a few basic assumptions. Those assumptions will funnel developers into certain genres. That’s great if you’re using an existing design for your game, but it can be restrictive if you’re doing something experimental or unconventional.

I feel like we’re due for another Terrible Car Analogy™: When it comes to mechanical engines, there’s no such thing as an “all-purpose engine”. The engine to drive a lawnmower is fundamentally different from the engine to move a car, which is fundamentally different from an engine for an airplane, which is different from an engine you’d use to generate electricity. You can’t just buy a “generic engine” and then adapt it to one of the tasks above. At least, not if you expect it to do a good job.

The same is true for game engines. Good luck trying to make Minecraft or No Man’s Sky with the Unreal Engine. Good luck trying to make DOOM 2016 or The Last of Us with Unity. I’m sure it’s possible, but it’s not the best tool for the job and you’ll end up fighting the engine the whole way.

Maybe one engine is good for making shooters. The entire engine is built around the idea of loading a premade level built by a designer. You can’t even gain access to any rendering features until you load a level. That’s great if you’re making Shoot Guy V: The Shootening, but it’s a dealbreaker if you’re trying to make a dungeon crawler with procedurally generated maps. Another engine sort of allows for procedurally generated spaces, but the engine assumes you’re doing photorealism and the entire shader path is designed around that idea. There’s no way to get it to do line-art style graphics without a lot of really ugly hacks, when it would be easy to set up this sort of thing outside of the engine.

Shot from Unreal Engine 4. (2015) It's amazing how close we can get to photorealism when nothing needs to be animated and there aren't any mammals in the shot.

This is what life is like in C++. You can use an existing engine with all the big problems solved for you, but it also shoves you into a cookie-cutter design paradigm. Or you go all the way down to the metal and implement your own solution for physics, audio, particles, AI, animation, rendering, input, pathing, collision, culling, camera, LoS checking, lighting, HUD, and networking. If you try to import libraries for all of those things then you’ll end up in a nightmare because the language itself is lacking Utility Belt features that all of these libraries will need.

I’m a medium-sized fan of Unity[2], but I maintain there are downsides to using engines that we should be willing to think about.

While engines are cheaper than they’ve ever been, they can still be pretty expensive. The versioning system that engines use makes it difficult to revisit a project decades later and get the game running on the new generation of machines. You probably won’t be able to take your source code that uses Unity 2.0 and build it using whatever version of Unity exists in 2035[3]. A lot can happen in 20 years. What if Unity gets bought out or goes under? What if the docs vanish? What if legacy versions are removed?

The Fragility of Legacy Code

Why would anyone bother with an old movie when you can just watch something newer with better special effects?

Some people roll their eyes at these concerns, as if playing 20 year old games is an outlandish idea. I think this hobby suffers from an unfortunate lack of respect for its own history. Movies and television had the same problem. Nobody realized how much those early creations would matter and they weren’t always careful about preserving things. As a result, a lot of really historically important work is just gone. TV and film eventually learned their lesson, but video games aren’t quite there yet.

The Matrix turned 20 this year. Imagine if The Matrix was somehow “too old to watch” on modern hardware!

Maybe your particular version of Unity will still be available in 20 years and maybe your license will allow you to do a re-release and maybe it will build properly using the IDE of 2039, but that’s a lot of uncertainty for something as valuable as a commercial game.

Like Blow, I have this sense that the industry has spent the last 10-15 years building a generation of games that are going to be hard to preserve. You can’t just look back at the 90s and say, “GOG got Duke Nukem 3D running on modern machines, so I’m sure we’ll be able to get Doom 2016 running on the machines of 2036.” Our tool chains keep getting more complicated, and there’s a real possibility that our projects will end up tied to a particular setup of hardware, operating system, and drivers that won’t exist in 20 years. Just because we can emulate a Nintendo 64 doesn’t mean we’ll be able to emulate a Windows 10 machine or a PlayStation 4. The newer machines are astronomically more complex, and Gordon Moore isn’t as helpful as he used to be.

You can build a Ford Model T in your garage using replica parts, but you can’t build a replica 2005 Ford Fusion[4]. Machines are getting more complicated, and at some point that complication becomes too much of an obstacle to hobbyist projects.

This isn’t to say that using a game engine is a bad thing or that you shouldn’t do it. Using a game engine can save you a ton of work. A lot of indie projects would simply not exist without the likes of Unreal and Unity doing the heavy lifting. I’m not suggesting that games would be better if we went back to the bad old days of everyone making their own engine completely from scratch. I’m just saying that using a commercial game engine introduces new points of potential failure that wouldn’t otherwise exist.

I think it’s worth experimenting to see if we can find a compromise between the bare-bones nature of C++ and the monolithic infrastructure of a modern engine. Maybe there’s a sweet spot between these two. If the language can offer you the first half of the engine (data types, standardized ways of talking to the hardware) and allow you to make the other half of the engine (managing data and sending it to the hardware) then maybe that will give us most of the time-saving power of an engine while also offering the relative safety and portability of a homebrew system.
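I can only guess at what that would look like in practice. Purely as a sketch, with every name invented (no such standard header exists), the language-provided half might feel something like this:

#include <game>   // hypothetical: a standard header a game-focused language might ship

int main()
{
   // The language/library half: data types and standardized hardware access.
   game::Window window("Dungeon Crawler", 1920, 1080);
   game::Gpu &gpu = window.AcquireGpu();

   // The developer's half: deciding what the data means and when to draw it.
   while (window.Open()) {
      gpu.Clear(game::Color::Black);
      // ...your engine-shaped code goes here...
      window.Present();
   }
   return 0;
}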

I honestly don’t know. All I can say is that a hybrid system like this would suit my workflow, and I’m probably not the only one.

A Language Specifically for Games

But probably not THIS language.

A language designed for games ought to provide elements common to all games, but leave everything else to the designer. The situation we’re in with C++ is that game engines are too monolithic / restrictive, while the language itself is too barebones. What we need is a tool designed for the job at hand, and one that will give us expressive[5] code without sacrificing performance. That’s hard to achieve in a generic language, but becomes much more doable if the language designer knows you’re going to be solving videogame problems and not (say) writing operating systems or device drivers.

Last week I complained about the problems of implementing Utility Belt features like vectors, matrices, bounding boxes, and other game-specific data types. Keep in mind that the Utility Belt isn’t the only thing you need. That was just the most obvious example out of many. It’s not clear to me where you should draw the line between features provided by the language, and things the developer should be expected to do for themselves. Creating the window used for gameplay? Talking to the graphics hardware? Loading image files? Audio? You could make a good case for or against providing any of these things as standard libraries to accompany a language.

As an ideal to shoot for, I suppose I’d suggest including as many things as possible without trapping the developer in any particular design. I realize that’s pretty vague, but I think it works pretty well as a guiding principle. In any case, I feel comfortable saying that we could easily be doing better than we are right now with C++. If we jump to a new language that also doesn’t include Utility Belt stuff, then we’ll be dragging a lot of our C++ restrictions and headaches with us.

 

Footnotes:

[1] Fun fact: For my first year or so in C, I thought it was called “studio.h”. I was slightly confused when I finally typed it in manually rather than copy-pasting it from my previous project, and discovered that it was in fact NOT “studio”. If I had a time machine, I’d use it to send 1990 Shamus a proper book on the language so he didn’t have to muddle through so much of the language in the dark.

[2] Notwithstanding the appalling documentation.

[3] And today’s version might not compile for a dozen different reasons.

[4] Boom! Two Terrible Car Analogies™ in one entry!

[5] Being able to express very complex ideas in a small section of readable code.




77 thoughts on “Programming Vexations Part 9: The Problem With Engines”

  1. ElementalAlchemist says:

    you can’t build a replica 2005 Ford Fusion

    You certainly could, if properly motivated and proficient in the necessary skills. What you couldn’t do is just easily buy all the parts off-the-shelf. But I guess the analogy was Terrible™ after all.

    1. Daimbert says:

      Well, you couldn’t do it for the Model T either (and there would be far more parts available off-the-shelf for the Fusion than for the Model T). But because for the Fusion there are a lot more parts and things are far more complicated it’d be a lot harder to gather up and manufacture all the parts for the Fusion than it would be for the Model T.

      1. The Rocketeer says:

        Model T’s are the easiest, most accessible classic cars to buy and reconstruct from scratch and for cheap. The cars were amazingly ubiquitous for an incredible stretch, and even today you can buy good quality Model T’s for cheap, and there are just heaps of replacement parts out there. Maybe the only easier classic cars to reconstruct are busted ass old Volkswagens.

  2. Mephane says:

    Like Blow, I have this sense that the industry has spent the last 10-15 years building a generation of games that are going to be hard to preserve.

In this regard, there are two elephants in the room and a third is about to enter, each of them an even bigger problem: DRM, “live service” game design (which often ties into but is not the same as DRM), and soon the first streaming-only game releases.

    1. Thomas says:

      The live service thing has been an issue for a while with MMOs and multiplayer games. YouTubers trying to revisit old games have already had problems trying to recreate multiplayer experiences.

      Especially if they used GameSpy for their servers

    2. ElementalAlchemist says:

      soon the first streaming-only game releases

      Publishers must be licking their lips at the prospect of that. After decades of trying to kill off used game sales and piracy, and pushing the idea of licensing rather than outright ownership, their utopia is finally in sight.

      1. Gabriel says:

        I just want to see how they blame lack of sales on pirates.

      2. The Puzzler says:

        MMOs, Free To Play, Steam… Games that can’t be bought second hand without paying the original publisher are already a large chunk of the market. One more kind probably won’t make much of a difference.

        1. Mokap says:

          Steam games are able to be pirated (if they lack other DRM, and even, that’s cracked often) so not being able to buy them if steam suddenly dies tomorrow isn’t really a problem assuming you have the right hardware. Of course, pirating a game is bad normally, but if you’re physically unable to buy it I don’t see an issue with it.

  3. EwgB says:

    There’s no structural difference between the external libraries we talked about last week and a proper game engine. If a library is really large and is focused on a particular problem, then at some point people will start calling it an engine rather than a library.

I would disagree with that characterization. An engine (or more broadly a framework) is more than just a large library. The fundamental difference is the management of control flow. With a library it is controlled by your application code. You manage the game loop, calling the library when needed: call the sound library to play sound, the graphics library to render stuff, the image library to load textures, etc. In an engine/framework the engine controls the execution flow, calling your code when it deems necessary instead of the other way round. So the game loop runs in the engine, and every time something happens that needs your code, it is called by the engine. For example in Unity, you create an object in the game, like an enemy, and tell the engine “Hey, call this function on creation, and this one every tick, and this one when it hits another object”. There are of course ambiguous cases, but that is the general idea.
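To make the contrast concrete, here is a minimal self-contained sketch of the two styles. All the names are invented; this is not any real engine’s API:

#include <vector>

// Library style: your code owns the loop and calls out to libraries.
namespace lib {
   void poll_input() {}
   void step_physics(float) {}
   void draw_frame() {}
}

// Engine style: the engine owns the loop and calls your code back.
struct GameObject {
   virtual ~GameObject() = default;
   virtual void on_create() {}
   virtual void on_tick(float) {}
};

struct Engine {
   std::vector<GameObject*> objects;
   void run(int frames) {
      for (GameObject *o : objects) o->on_create();   // the engine decides when
      for (int i = 0; i < frames; ++i)
         for (GameObject *o : objects) o->on_tick(1.0f / 60.0f);
   }
};

struct Enemy : GameObject {
   void on_tick(float) override { /* your gameplay code, invoked by the engine */ }
};

int main()
{
   // Library style: you are in charge of the flow.
   for (int i = 0; i < 3; ++i) {
      lib::poll_input();
      lib::step_physics(1.0f / 60.0f);
      lib::draw_frame();
   }

   // Engine style: hand the engine an object and let it drive.
   Engine engine;
   Enemy enemy;
   engine.objects.push_back(&enemy);
   engine.run(3);
   return 0;
}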

    1. Daimbert says:

      Which kinda means that with an engine YOUR CODE is the library that the engine is trying to integrate with.

      1. Leeward says:

        “Hey, I wrote this library to make Unity into a game with rockets and little green men.”

        Maybe we need to talk about mods. KSP is a Unity mod. MechJeb is a KSP mod. KOS scripts are mods for KOS, which is a mod for KSP, which is a mod for Unity. How deep does it go?

        1. PeteTimesSix says:

          I can do one layer deeper: At one point I wrote a mod to draw your current vessel (VesselViewer) for a mod that added a bunch of functionality to the IVA view (RasterPropMonitor) of KSP that runs on Unity. It allowed for adding in additional modes as mods (no-one ever did to my knowledge, but to be fair VV was a hilariously unoptimised mess).

          Then I wrote a mod for my mod (an IVA part selection and interaction interface), which brings us to five layers deep.

          Can we go deeper?

          1. Leeward says:

            I think this is where we need to stop and ask ourselves: Should we go deeper?

            1. DGM says:

If you haven’t unleashed a Balrog yet, you haven’t gone deep enough.

    2. Paul Spooner says:

That’s a fascinating insight which brings to mind something else Shamus has complained about in the past. I think it was something like “software hubris”, where the way a library is written assumes that you need the library’s permission to do things. Instead of being a useful tool that you can work with on whatever size project you want, the engine is more like a workshop that you have to work inside, and is designed for a general range of projects. As far as the terrible car analogy goes, a game “engine” is more like a garage or a hangar. You can’t fit an airplane in a garage, and even if you could there aren’t any cranes, and the tools and spare parts are all wrong. Likewise, you might be able to work on your car in an airplane hangar, but you’ll waste a lot of time walking everywhere, the spare parts are still all wrong, and there’s no auto-lift.

      1. EwgB says:

Not an original observation of mine; that’s what was taught in my CS education in first semester. And it’s also what Wikipedia says, as I just looked up:

        Frameworks have key distinguishing features that separate them from normal libraries:
        * inversion of control: In a framework, unlike in libraries or in standard user applications, the overall program’s flow of control is not dictated by the caller, but by the framework.[1]
        * extensibility: A user can extend the framework – usually by selective overriding – or programmers can add specialized user code to provide specific functionality.
        * non-modifiable framework code: The framework code, in general, is not supposed to be modified, while accepting user-implemented extensions. In other words, users can extend the framework, but cannot modify its code.

Which also doesn’t mean frameworks are worse or better than libraries; they’re just a different type of tool for a different job.

  4. Gargamel Le Noir says:

    Typolice : Everything Ellse (in both graphics)

  5. Robert Conley says:

My experience with computer programming languages spans from the mainframes, workstations, and 8-bit PCs of the early 80s to the present. My view is that from a logical standpoint any category like core, standard library, common library is an arbitrary distinction. One made by the people who develop and later use the language. The standard library is the standard library because a lot of people needed useful things done outside of the initial K&R specification of C. It continued in C++ because of C, but was cleaned up and formalized as a “thing” that is an important element of C++, while in C the standard library was just something found in most implementations.

Moving on, I had the pleasure (or not) of coding Cobol-68, older Fortran, and other older languages. Most of these were rigid in the layout and structure of the program. Data had to go here, procedural code went there, and any I/O went here. The benefit of 2nd (or 3rd) wave languages (like C, C++, Basic, Pascal) is that they were much more flexible about where things could go, which made for cleaner code. It also made things like external libraries a lot easier to deal with (standard or not).

However, behind the scenes it all still boiled down to the same assembly that Cobol, Fortran, and the other early languages used. What improved is the abstraction and more flexible algorithms for compilers. A point that was driven home when I had to design and document a 4-bit CPU for my semester final for my Computer Architecture class. In CPU design, assembly is just an abstraction of microcode which is baked into the CPU and manipulates the circuit.

So what does this have to do with the issue of programming languages and gaming? It seems to me that the first games of the 80s and early 90s were the “assembly” phase, with programming languages (and sometimes actual assembly) being used directly.

Now we are in the “early programming phase”, where the “assembly language” of using programming languages has been replaced by gaming engines. And not at the beginning of that phase, but closer to the end. Gaming engines have somewhat rigid structures like many programming languages of circa 1970. And programmers chafe at this, either reverting back to assembly, or hopefully writing a better engine (or gaming programming language).

It’s not like the pieces aren’t accessible. For example, OpenGL. What would it take to create a compiler that spits out binary code, much of which is translated into OpenGL calls? The language this compiler accepts would abstract OpenGL the same way C or C++ abstracts assembly.

    Maybe it would be a good thing to read up on the history of C’s development and then apply that same methodology to abstracting graphic and game programming.

    1. Echo Tango says:

      binary code much of which is translated into OpenGL

      I think you’ve mixed up something here either in the explanation, or understanding how languages and compilers work. OpenGL is a higher-level API that then gets translated to machine language for particular graphics hardware. You wouldn’t take a different high-level language, turn it into machine code (binary – the lowest level), and then go back up to OpenGL (high-level). I mean, someone could do that, but it wouldn’t help at all.

      1. Robert Conley says:

OpenGL is a library that is linked in. The calls to OpenGL functions are translated into a series of assembly instructions.

What one can do is come up with an easier or more useful syntax that is used by the compiler in a preprocessor step. This step translates the syntax into a series of OpenGL calls, which is then compiled normally and linked to the library.
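To sketch the idea: imagine the preprocessor expanding one line of invented game-language syntax, say draw sprite at (x, y);, into real OpenGL calls. The expansion below is written by hand in old fixed-function style for brevity, and assumes texture and matrix setup happen elsewhere:

#include <GL/gl.h>

// Stand-in for what the preprocessor would generate.
void draw_sprite(float x, float y)
{
   glPushMatrix();
   glTranslatef(x, y, 0.0f);
   glBegin(GL_QUADS);
   glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
   glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
   glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f);
   glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
   glEnd();
   glPopMatrix();
}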

        1. Echo Tango says:

          We have things like that – higher-level libraries, frameworks, and game engines. Under the hood, Unity, the Doom engine, etc, are all making use of OpenGL, DirectX, etc, so that programmers don’t need to use OpenGL / DirectX to put triangles and pixels onto the screen, but place characters, animations, cameras…

          1. Robert Conley says:

But they are not good enough, hence Shamus’ complaints. And they are just that: libraries bolted on top of a pre-existing programming language. My point is that you make up your game programming language, then write a preprocessor, compiler, and linker that take that code and spit out a binary linked to a proven graphics library.

  6. Thomas Adamson says:

Terrible car analogy is right. Assuming piston engines, they’re all basically variations of a four-stroke engine.

A lawn mower is a single-cylinder four-stroke (straight configuration). A car is typically a 6-cylinder V-configuration four-stroke (a V6 engine), but more powerful V8 or V12 engines are used in muscle cars and supercars, and aircraft V12 engines are fundamentally similar.

    There are radial engines (especially for aircraft) but they still involve the principle of piston cylinders.

    Diesel engines aren’t that different, except they use mechanical compression as ignition instead of a sparkplug.

The Wankel rotary engine is fundamentally different. As are gas-driven turbines. But then you’re comparing things that are about as alike to piston motors as a steam engine is to a waterwheel.

    1. Leeward says:

The last lawnmower I used had a 2-stroke engine. Aircraft and generators mostly use turbines, which are not variations on the piston theme. Some aircraft use ramjets, which are even more different.

      A car can be built that runs with a 40HP boat motor, but it would be terrible. There’s all that extra water sealing expense for nothing, and it’s got this funky shape that makes it hard to fit in the hood.

      I think the car analogy is not as terrible as you claim. Sure, they’re all the same class of thing, and they all operate on the same fundamental principles, but you don’t really want to use one unmodified to do the job of another. There are bits that were built for purpose and while they could technically be substituted for each other, no engineer in their right mind would.

      1. Kylroy says:

        And more to the point, I believe all these engines are purpose-built by aircraft, automotive, mower etc. companies. It’s not like there’s a “piston engine factory” that makes basic engines that are later customized to specific purposes.

        1. Paul Spooner says:

          Right! Boeing buys their engines from a turbine engine company which also makes turbines for power generators. But they don’t just use power generator turbines. Likewise, many trucks have diesel engines built for them by companies that also build diesel engines for cargo ships, but they don’t just buy a generic “diesel engine” and then try to fit it under the hood. Engines for mechanical designs are nearly always purpose-built, which is basically what Jon is suggesting we do with games.

  7. Leeward says:

    …for a developer, the line between “core language” and “standard library” is more academic than practical.

    I’m sorry for my part in this digression. I really didn’t mean to drag the actual articles into my pedantry.

    That said, here’s more of it: The vast majority of computers in the world are embedded. Think about the number of electronic devices you own. Sure, there’s the laptop and the phone, desktop, and maybe a gaming console or 6. But then there’s the microwave, the oven, the refrigerator, your power meter, your car, that picture frame, your wifi router, wireless phone charger, TV remote…you get the idea.

    Embedded devices draw the line between language and standard library in a very practical way. If I only have 1K of RAM in my light switch controller, there’s no way I’m loading in libc.

    Anyway, I know that you meant games programmers and college students and web developers. I really didn’t mean to divert the article; just add a slightly broader perspective to it.

    1. Paul Spooner says:

      I think it may be more relevant than you give credit. If you’re pushing the cutting edge of game programming, you don’t want to be loading libraries designed for web servers for your multiplayer network code, or video editing libraries for your display code. If you are pushing hard enough, everything eventually becomes a performance-impacting concern.

      1. Leeward says:

        That’s fair, though I think people working on cutting-edge games tend to think more about run-time performance than RAM consumption. Sure, cache lines might matter, but loading a library into RAM at bootup even though you’re only going to use one function from it doesn’t cost much when you have gigabytes of RAM. On most systems I work on, it’s the opposite. I’d rather consume 100 times as many clock cycles if it can save me a few words of the rare kind of memory.

        Still, I can imagine a game where a program’s load time matters and where dynamic linking isn’t a thing.

    2. DerJungerLudendorff says:

      Good point.
      But that also kinda reinforces the difference between standard libraries and other libraries: You immediately jumped to using libc, because it is well known and reliable. So that is what we use, unless we have strong reasons not to (like very limited resources).
Only after the standard library fails do we look for alternatives.

    3. Richard says:

      I still disagree with you.
      I guess you’re arguing that only reserved keywords are part of the ‘language’. Personally, I’d say that’s simply the syntax, not the language itself.
      Syntax is very important, but if you don’t use any of the common words and phrases, you’re not really speaking English.

      If I don’t have the possibility of using std::vector, or std::unique_ptr, then I’m not using C++11/14/17 (20).
      I’m using an older language – perhaps a related one, but not ‘modern’ C++.

      Yes, I can re-implement my own version of (eg) std::vector or std::unique_ptr.

Many people implemented their own version of “uint32_t” as well – would you argue integers of a particular size are not part of the language? If not, where is the line?

      C++ was originally created as a set of preprocessor macros to feed into a C compiler.
      These days, a lot of ‘new’ languages are effectively the same idea, yet feeding a C++ toolchain (often LLVM/clang)

      To join the digression:
libc is a particular implementation of the standard library. It’s neither the best nor the worst, and certainly not the only one.

      When I’m working on an embedded system, I don’t use any of the monolithic implementations of the standard library intended for use on desktop systems.
I’ll use a toolchain and standard library that’s specifically designed for use in embedded systems, one that ruthlessly strips out all the features I didn’t (want to) use.
      Some of these libraries/toolchains allow me to state that language features X, Y and Z are too expensive and should be disabled, tell it the types of speed/space tradeoff it should use for specific language features, and even that sacrificing “correctness” is ok sometimes.
      – Maybe floating point can have really large errors in some circumstances, std::vector has a smaller-than-usual size limit, and all exceptions should instantly terminate.
      (So the compiler doesn’t need to generate anything to unwind a stack)

      1. Leeward says:

        It’s good to know we still disagree. It may be that my view is more C-centric, where yours is more C++-based. The STL is just a bunch of headers with no runtime cost other than the environment. The new operator is an actual language feature, not a function in a ubiquitous library.

        Incidentally, libc isn’t a particular implementation. I think maybe you misread it as glibc or something. If I had to pick one to have been talking about, it would probably be newlib, but I don’t.

        I’m not going to defend my position, but I will clarify it a bit: it’s the syntax and the grammar. All the bits of C that you can use without including a header from the standard library.

        C++ may need its definition of language expanded to include the STL, but probably not since it’s possible to write the STL in C++.

        Then again, I am one of those people who like to experiment with languages and I may own the GitHub repository for one where I’m not its only user. So maybe the characterization is accurate.

        1. Richard says:

          Ah, yes. Sorry, you meant “libc” as in “the C standard library”.

          From your viewpoint, that means “malloc”, “free” (and friends) are not part of C.

The reason I think that’s incorrect is that it denies the existence of any of the C standards – C99 and C11 didn’t change the syntax and grammar, they expanded the C standard library (and deprecated/removed some parts of it).

          1. Leeward says:

            That’s not true. C99 added complex and boolean types, variable length arrays, variadic macros, designated initializers, and // comments.

            C11 added static assert, _Generic, _Noreturn, and anonymous structs and unions.

            Those standards did make changes to the standard library too, but they certainly weren’t exclusively changes to it.

            No, I don’t consider malloc to be necessary for writing C. The entire Linux kernel gets by without it.

            Incidentally, it looks like I’m including things like stdbool.h and stdint.h in the language. They add things that look like new keywords but in a way that doesn’t break old code. They don’t take up any space at runtime, even if you use them.

    4. Blake says:

      “Embedded devices draw the line between language and standard library in a very practical way. If I only have 1K of RAM in my light switch controller, there’s no way I’m loading in libc.”

      That assumes that using part of the standard library means linking in all of it, which simply doesn’t have to be true.
For example, using C++ standard library features such as ‘tuple’ or ‘optional’ shouldn’t require any extra code to be linked in, and they implement features that could definitely have been part of the language instead.
And don’t even get me started on std::forward, which really really should have been a language keyword and not a library solution – both for debug performance as well as everyone’s sanity.

Having said all that, I haven’t coded any light switches; the smallest I’ve gone is the ESP32, which has like half a meg of RAM to play with, so having a 100KB binary isn’t such a big deal.
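For anyone who hasn’t run into this, here’s a minimal sketch of the library dance std::forward requires today (make_boxed is an invented name):

#include <utility>

// Perfect forwarding via the library: a template, rvalue references,
// and a std::forward call per argument. In unoptimized debug builds,
// each std::forward is a real function call rather than something the
// compiler handles directly like a keyword.
template <typename T, typename... Args>
T make_boxed(Args&&... args)
{
   return T(std::forward<Args>(args)...);
}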

  8. Ninety-Three says:

    Nobody realized how much those early creations would matter and they weren’t always careful about preserving things.

I don’t think the archivist mentality is comparable between video and games. We’ve permanently lost some episodes of old TV shows like Doctor Who because the station taped over the recordings: that’s a problem that could’ve been avoided by buying a couple hundred dollars of dedicated “permanent record” tapes and locking them in a box somewhere. If there’s some fundamental shift in architecture or rendering pipelines, making DOOM 2016 work on the computers of 2035 is going to be hard. It’s going to take hundreds if not thousands of hours of work from skilled professionals, and it’s not clear who’s going to pay for that.

    If people thought there was money to be made doing this, the archivists wouldn’t be worried. The reason GOG 2035 isn’t expected to buy the rights to DOOM 2016 and fix it up to run on future computers is that we think they’re not going to sell many copies of it, so paying enough programmer salaries to update the game would be a money-losing venture. Even if Id Software of 2016 could somehow anticipate the technology of the future and build their games in some custom engine designed to age well, that’s still asking them to expend a lot of effort for what is basically a vanity project: they’re not going to sell many copies of DOOM in 2035 either.

    It’ll be a shame if I can’t play Dark Souls thirty years later the way I can still play Lemmings, but what are you going to do? I mean that literally: if you were King of Videogame Archival what would you do about this? The problem exists because of chipsets and rendering pipelines and so on that are shaped by forces with accounting in the billions of dollars, this isn’t like erasing old movies where you could solve the problem by simply telling people of the past “hey, we’re gonna want that later”. The idea that modern games are going to break in a few decades is the result of a known tradeoff people are making: we could build a future-proofed version of Unity, but it would take effort and money, and the market would rather get cheaper videogames today than harder-to-build videogames that last. That’s not a mistake people are making, it’s a preference. If the King of Videogames told me that funding a future-proofing venture would require a $20 tax on all videogames, I’d tell him no. Playing Dark Souls in 2035 would be nice, but there’s some pricetag at which we have to admit the money would be better spent on other projects.

    1. Leeward says:

      I think the thing to do is to build good emulators. If computers of 2035 are still classical Von Neumann machines, odds are good that there will be a chain of emulators that will let you run today’s video games on that hardware. My modern 64-bit computer can still run 8086 code. It’s plausible that we won’t, but if we’re running Windows 95 games now, we’ll probably be running Windows 10 games in 20 years.

Apple is a counterexample though. Their computers all ran on PPC, and now they run on x86-64. Rosetta support was dropped with the 2011 release of the OS, and Rosetta was never as featureful as their 68k emulator for PPC. There’s an argument to be made that at some point Intel or AMD (or D-Wave) will come out with something that’s better than Itanium by enough to make a transition compelling and we’ll finally ditch the x86 instruction set and all the software that’s been written for it.

      I doubt it, though. Apple’s market share in 2005 was tiny. Their users complained, but they were mostly brand loyalists. If Apple did a big architecture shift today, they’d lose users in much larger numbers.

      1. Ninety-Three says:

        In theory you should be able to fix everything with emulation, but in practice I have never seen an emulator that actually performed identically to the hardware it was pretending to be. My Nintendo 64 emulator still has major graphical glitches on some games, and everyone acknowledges that making modern stuff work is going to be harder than a decades-old console. I’m sure there will be Windows XP emulators in 2035, but I’d be shocked if they worked well enough to give us all the old games, and there’s a decent chance we won’t even get most games.

In general, graphics is the big area I expect to break. A program whose only output is strings to the command line could easily be working a hundred years from now, but it seems like a nightmare to perfectly emulate all the fiddly bullshit in the graphics pipeline of modern 3D games. A lot of that code ends up designed for efficiency, so it relies on detecting what specific graphics card and drivers you have, then deploying some finicky optimizations that would have undefined behaviour on any other environment. I don’t envy the future people tasked with emulating that increasingly complex stack.

        1. Richard says:

          Emulators are a very tough one.

          There’s a region in computing history where it’s currently impossible to run the software on any modern hardware.
          In x86, that’s around the Windows 95/98 era.

          Older than that is “good to very good” (6502 can be run in a browser!).

          The reason is that games ran in ‘real mode’ back then – directly driving the physical hardware.
          There wasn’t an operating system acting as a middleman, and so the obvious compatibility layer approach of “pretend to be the operating system” simply cannot work.

          The only way to emulate those is to simulate the actual hardware they ran on – complete with all its data races, strange timing foibles and odd, undocumented features.

          But we can’t do it very fast.
A modern, top-flight PC can run DOSBox roughly as fast as a Pentium III at 500MHz.
          (That might even be the best we will ever do, because those machines were single-core)

          And despite x86 being pretty well specified and having millions of examples for analysis – and in some cases, even the actual CPU die masks – DOSBox still isn’t a fully accurate emulation. It’s about 95-98% accurate.

          1. John says:

            There’s a region in computing history where it’s currently impossible to run the software on any modern hardware. In x86, that’s around the Windows 95/98 era.

            What exactly are you claiming here, that we can’t emulate a PC running Windows 95 or 98 on modern hardware or that we can’t run Windows 95 or 98 software on modern hardware? I don’t know much about the former, but I myself did the latter just last month with both Windows 10 and Linux/Wine. It didn’t quite run perfectly, I admit, but it ran pretty well all things considered.

        2. tmtvl says:

Well, RPCS3 exists, so I think everything can be… “emulated”. Thing is, emulation isn’t as important as compatibility. Putting in something that translates system calls can make stuff work well enough.

    2. parkenf says:

The analogy is better than you think. Doctor Who wasn’t erased out of carelessness. Doctor Who was erased because tape and film were really expensive, and had to be re-used. The existing “art” (Doctor Who) was considered not worth the expense of preserving. That’s precisely what Shamus is saying, and what you’re saying now – “it’s not clear who’s going to pay for that” – who was going to pay for new tapes for the BBC? It wasn’t clear then why those programmes needed preserving, so now that we know better, why are we making the same mistakes again?

  9. Syal says:

    which is one of those questions like, “Is the bun part of the hot dog?”

    Or “Do pigs have butts”, and how much buttcheek is required to “have a butt”.

    1. Zaxares says:

      Well, of COURSE the bun is part of the hot dog. If you didn’t have the bun, you just have a sausage, not a hot dog. :P

  10. Simplex says:

    “Maybe one engine (link to Frostbite) is good for making shooters.”

    Based on multiple issues with Battlefield V, even this is doubtful.

    1. shoeboxjeddy says:

Frostbite isn’t at fault for BFV, DICE is. Well… DICE made Frostbite, so I guess in a way it’s correct to say Frostbite is at fault, but targeting the root cause makes more sense. People forget (somehow?), but the Battlefield series has a LONG history of ridiculously buggy games. Like, there was a glitch in Battlefield 2’s DLC expansion that crashed the entire server 100% of the time if you used a certain weapon on a certain vehicle. The games are also extremely ambitious and rather unique in scope compared to competing products, which is why the bumbling is often accepted as part of the deal.

  11. Olivier FAURE says:

    For example, the C language doesn’t have a built-in way to print strings of text to the screen. It does provide a way to output a single character at a time. You could write a bunch of code to take a string and shove it out to the console one character at a time, but since outputting strings to the console was so common in the early days, this idea was implemented in the standard library. You can see it in the classic Hello World example code:

I know this doesn’t matter and this is a single paragraph giving context to the whole article, but there are more inaccuracies in this quote than in the entire “Vexations” series so far.

For the curious: the core C language doesn’t have any way to print characters at all (because “print characters on a standard terminal-like output” isn’t a basic operation in the same way “add two numbers and store the sum” is). Different operating systems each have their own system libraries (and, on a core level, something called “system calls”) with mostly-similar ways to print strings. Besides Windows, most systems implement something called the POSIX standard, which is a list of standard system and utility functions derived from Unix.

    To spare you from having to write different code for every platform, libc (aka the C standard library) includes a list of basic system functions lifted from POSIX (eg, where POSIX has “read”, libc has “fread”); however, libc only includes features that can be implemented in every mainstream OS out there, so it is considerably more restricted than either Windows or POSIX libraries.

    It also includes utility functions like printf that aren’t based on OS features, but provide a wrapper around IO functions (which are themselves wrappers around OS system functions, which are themselves wrappers around system calls).

    One interesting thing to note is that libc isn’t fundamentally a piece of C code. Rather, it’s a list of functions that operating systems must provide to be able to run C programs (and, realistically, programs in any existing language ever made). Linux even has multiple competing libc implementations, notably glibc and musl.
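That layering is easy to see on a POSIX system, where each of these lines reaches the same destination through a different number of wrappers:

#include <stdio.h>    /* libc: portable, buffered wrappers */
#include <unistd.h>   /* POSIX: thin wrapper over the write system call */

int main(void)
{
   write(STDOUT_FILENO, "via POSIX write()\n", 18);  /* nearly a raw system call */
   fwrite("via libc fwrite()\n", 1, 18, stdout);     /* buffered libc layer */
   printf("via printf()\n");                         /* formatting utility on top */
   return 0;
}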

  12. The Puzzler says:

    Re: “Creating the window used for gameplay? Talking to the graphics hardware? Loading image files? Audio?”
    It seems to me that most of these are going to need platform-specific implementations. Is that the sort of thing we can have in a standard library?

    1. Richard says:

In reality, every language requires platform-specific implementations.
      Even “load a number from memory, add one and store the result back” is actually platform-specific!
      (And often context-specific. What type of memory, what’s the next thing to happen to that number…)

So what we really mean is a common API that you can use, and all ‘compilers’ for that language can turn that into whatever sets of bytes-on-disk cause the API to be invoked with the proper arguments.

      Until Apple threw their toys out the pram, “talking to the graphics hardware” did have a single API across all platforms.
Sadly, when they decided to deprecate OpenGL, they refused to join Vulkan and instead decided to invent their own, incompatible API.

      1. Echo Tango says:

        Dangit, Apple, we had some standards to use. You did so well joining everyone with USB-C…

  13. Chad Miller says:

    Re:

    I should stress that all of this is how a common programmer would view things. From the standpoint of someone developing and maintaining the language, the world probably looks more like this:

I wonder if this is language/community dependent. I get the exact opposite impression from, say, Python. Python has features that exist solely to make life easier for the maintainers of some of the most popular libraries (e.g. the Ellipsis constant … exists largely for NumPy’s sake) and some tools have been promoted to the standard distribution due to ubiquity (e.g. pip and virtualenv). The language has long been advertised as “batteries included” for this exact reason.

    1. psychicprogrammer says:

      And a lot of C++ features are being made to make library development easier and better.

  14. krellen says:

    Imagine if The Matrix was somehow “too old to watch” on modern hardware!

    My now 4-year-old computer cannot read the Matrix DVDs I own. I rewatched it a few months ago and was fortunate to still have my old TV and DVD/VCR player.

    1. Richard says:

      Sadly this is true of many actual Bluray players too.

      DRM truly is the gift that keeps on taking.

      1. Echo Tango says:

        Good thing only honest customers are thwarted, and the people pirating this stuff can keep playing the same old video files, even when more modern codecs are used on newer films…

    2. Echo Tango says:

      See also actual film movies, or VHS, etc. This is continually a problem for archival libraries/museums/groups.

      1. Philadelphus says:

        We should go back to inscribing our code on clay tablets, those things have a proven usable life of thousands of years!

        1. John says:

Yeah, but first you have to store the clay tablets in the great library of Ashurbanipal in Nineveh, then you have to burn down the city as you overthrow the Neo-Assyrian Empire, and then finally you have to leave the ruins undisturbed for a few thousand years. It’s a proven strategy, sure, but not a very practical one. I personally favor inscribing the code on rocky cliff faces in mountain passes. It’s more accessible that way. It also has a certain “look upon my works, ye mighty” quality that I appreciate.

          1. Syal says:

            If you don’t have a mountain pass, you can use spray paint on any building wall.

        2. Thomas says:

          You joke, but people have been developing a laser engraving technique to store data in silicon tablets so that it will last centuries without needing replacement

    3. The Puzzler says:

      I no longer own anything that can handle any form of physical media. Tapes, CDs, DVDs, all useless to me.

      But that doesn’t mean I couldn’t watch The Matrix pretty easily if I wanted using my available hardware.

      On the other hand, if I wanted to show someone the game I wrote for the Amiga, I’d find it almost impossible.

    4. Liam says:

      I can download and play a high definition copy of the matrix faster than I can find the bluray on my shelf, wait for the playstation to boot, wait for the playstation firmware update to complete, wait for the playstation bluray player firmware update to complete, wait for the disc to load and get through the seemingly endless piracy warnings etc.

      This is actually based on experience; My sister gave us a DVD of some kid’s show for my son to watch at Christmas. By the time I got the thing to actually play onscreen, my son had already watched it on youtube and fallen asleep.

  15. maxoverdrive says:

Godot engine: FOSS, none of the ‘my engine license is going away’ or ‘can’t make it work because no source code in 2036’ concerns!

    Also, being able to dig into the internals is SUPER helpful when you want to know how things _actually_ work….

    1. SeekerOfThePath says:

      I would be very interested if Shamus could make an article comparing development of Good Robot the way he and the team did it versus what he would imagine it would be like with Godot.

I discovered Godot a few days ago thanks to an article on Slashdot. I’ve been going through tutorials in the evenings, and I like it so far, even the homebrewed GDScript language.

When I compare it to other engines… They say Game Maker Studio is resource-hungry and unstable. And as for Unity/Unreal/CryEngine, I think they are best for the genres they were originally made for – 3D shooters.

  16. Decius says:

    One of the distinctions not being made is the difference between a game programmer, a game designer, and a game maker.

Often those three roles, among others, are performed by one person, so it can be hard to distinguish between the three: deciding how enemies will bank grenades at where the player is going is design, figuring out a way to guess where the player is going and calculate the available bank shots to that location with the fuse constraints is programming, and putting that code into an executable is game making.

For a programming language+libraries to make the step from design to programming easy, it needs a certain set of characteristics, generally related to being general-purpose in nature and making it easier to implement novel ideas.

    For a programming language+libraries to make it easy to make a game, it needs a competing set of characteristics, generally related to being able to easily execute ideas that are already common.

  17. parken says:

    “[C] does provide a way to output a single character at a time”

    Does it? Surely that’s a machine function not a language function?

    1. parkenf says:

      Well… eventually. I’d still dispute that remark, see http://blog.hostilefork.com/where-printf-rubber-meets-road/

      1. Shamus says:

        That was quite a journey. Thanks for sharing.

      2. Decius says:

        Depending on if you consider the terminal buffer to be the end state, or if you think that displaying the terminal buffer on a monitor is the end of printf.

  18. Kyte says:

    I’m still not convinced you need an entirely new language for all that stuff.

    Take C#, for example (because it’s what I know best). C# now exists as the language for 8+ different ecosystems: .NET Framework (the old Windows framework), Mono, .NET Core (a newer, cross-platform ecosystem), Xamarin (in Android, iOS and Mac flavors), UWP, Unity as well as any other environment that implements the standard that’s called, appropriately enough, .NET Standard.
You could take C#, implement a gaming-oriented ecosystem that adheres to .NET Standard the way Unity does (but less extensive), and that’d solve most issues without forcing people to learn a new, untested language or deal with a new, untested compiler, while still providing relatively simple access to external libraries and even allowing programmers to drop in platform-specific code if needed.
In Xamarin a lot of libraries are implemented in three-part form: the common .NET Standard part, which acts as the interface to your code, and two libraries for iOS and Android that directly interact with native Android and iOS libraries to implement the functionality. If you don’t actually need platform-specific features, then you just use a single .NET Standard project.

    A lot of those schemes could be leveraged into something less monolithic than Unity.

  19. Tuck says:

    I’m not sure why you think it will be surprising to emulate Windows 10 when Windows 10 can already run on a virtual machine.

    1. Shamus says:

      It’s been a while since I’ve looked at a Windows VM, but in my experience it all works great until you try to do something with the graphics card. Then you’ve got the virtual driver talking to the virtual card, which has to go through the actual driver to get to the actual card.

      Even ignoring performance problems (which we might assume will be less of a problem in the future) the results are somewhere between janky and unworkable.

      1. pseudonym says:

That’s why there is PCI passthrough nowadays. Instead of emulating a graphics card you pass control of the PCI bus it is on directly to the VM. A disadvantage is that the host cannot use it anymore.

This works very well for systems that have a processor with an IGP and a dedicated graphics card. For example you can run Linux on the IGP and then use PCI passthrough to pass the graphics card to a Windows VM with almost no performance loss. This is a less invasive way of installing Windows on your machine than dual booting.

        A nice summary of the howto of pci passthrough on linux was made by linux unplugged (a podcast) and can be found here: https://linuxunplugged.com/308 . Links are provided in the show notes so it is not necessary to listen to the podcast (although I can recommend that).

    2. Decius says:

Emulating Windows 10 on a motherboard that doesn’t natively support two-dimensional display modes, on a 256-bit processor that doesn’t distinguish between processes per se (or whatever we have in 30 years that we can’t predict now), is much harder than doing it right now.

  20. kdansky says:

    The standard library for C++ is nearly as much part of the language as the core functions themselves. The standards committee writes the specs, and any compiler that does not support it would not be considered standards-compliant. A lot of old-school developers think differently, and believe that all libraries are equal. That is not the case.

Yes, one could re-implement std::unique_ptr if they wanted, but that does not mean it’s not special. A bunch of language features were added just so the standard library could use them, such as move operators, “auto” and more.

    C++ without its standard library is not modern C++.

  21. Draklaw says:

    I really feel that your argument about game preservation is backward. I have a hard time imagining that it is easier to preserve a game with a custom game engine than one using a well established one.

Let’s take an example: Good Robot. Imagine that it no longer works on a PC in 20 years. Who will fix it? That could be you, but imagine Windows no longer supports OpenGL and you need to port it to Vulkan or something similar. I doubt you will take the time. Fans could try to find a workaround, but chances are it will require a fair amount of work for just one game, and it was not a big hit, which reduces the chances that someone will do it. If it was made with Unity, there is a good chance that someone will figure out how to run any Unity game using the specific version you used. Or maybe Microsoft will test compatibility of newer versions of Windows against Unity, because so many games use it.

    So overall, I feel like commercial game engines improve the chances your game will be playable in the future.

Also, I feel like you see commercial engines as more monolithic than they are. This is particularly true for Unreal Engine, where you have access to the code. Sure, there is a steep learning curve to hack into the internals of the engine, but it is still way less work than writing an engine from scratch. And it is not obvious that it will be easier to learn your game-focused language. The reality is that game engines are complicated and you can’t completely hide this complexity. It’s just that, like everyone else, you are more comfortable with your own complexity than one designed by someone else.
