Experienced Points: Why Video Games Need Their Own Programming Language

By Shamus
on Dec 9, 2014
Filed under:
Column

Nitpick shield: Games don’t “need” a language, but such a thing would be useful. I’m not complaining about the person who titled this article. I had the same problem yesterday when I said GTA V was “Banned” instead of more correctly stating it was “un-stocked from certain retailers in one country in response to an internet petition”. It’s really hard to cram complex ideas into pithy article titles. I’m okay with a bit of conceptual slop as long as it still conveys the basic idea. The only downside is the prevalence of people who argue with article titles without reading the article. Those people make me sad.

ANYWAY.

My column this week talks about the fact that we use C++ for making AAA games, and why that’s strange and un-optimal. The conversation stems from this video:


Link (YouTube)

That’s a two-hour talk by developer Jon Blow about why he thinks game development needs a new language, why the existing languages don’t quite cut it, and a few things he thinks the new language ought to do. It’s pretty heavy-duty in terms of technical jargon, so if you’re not a coder I don’t know if you’ll get much out of it.

My column attempts to explain the mess we’re in and how we got here, and is aimed at non-technical people who can’t follow what Blow has to say.

I have a half-written post where I go over Blow’s video point-by-point, annotating it for non-coders. I’ll finish it one of these days.

And finally, I’m going to start taking reader questions in my columns. I’m looking for questions about programming. Stuff like, “Why do we have to keep updating our drivers for existing graphics cards?” or “Why do consoles still use checkpoint saves?” or “Why do Skyrim and Minecraft have thousands of mods, but most games have none?”

Note that these questions are for the Escapist audience, so try to keep questions relevant to that. Don’t ask about stuff like Good Robot, because most of them have never heard of it. Don’t ask overly technical stuff (“What do you think of how C++ handles exceptions?”) because that’s going to be too big and complex a discussion for The Escapist. You can ask about non-programming stuff, but it should probably be focused on videogames in some way. If you’ve got a question for the column, you can send it to askshamus@gmail.com. If you send it anywhere else I won’t know it’s for the column and I’ll probably put it in the Diecast pile. Please bear with me, this job is confusing.





  1. Hal says:

    Dear Shamus,

    How do you type with boxing gloves on?

  2. The Schwarz says:

    FYI, Civilization 4 was written mostly in Python.

    • Phill says:

      The core game engine was written mostly in C++. There was, however, a lot of scripting exposed to the players which used Python. (And if I recall correctly, they eventually released an SDK which exposed a fair amount of C++ code to players so they could play around with some of the game DLLs.) But the main game code was absolutely not written in Python.

      • Tizzy says:

        Indeed. Python is very cool, but I doubt you could make a AAA game using only Python. Nor should one try to.

        There is nothing wrong with letting each language shine where its strengths are, and I think it’s really cool that nowadays you routinely see big projects that integrate code in several languages.

        Back when I was a lad, games were written in assembly language!

        • Max says:

          Challenge Accepted

          • Muspel says:

            The primary issue is that Python, as an interpreted language, is very, VERY slow compared to a compiled language. Especially a relatively low-level language like C.

            And AAA games are largely defined by their high-end requirements– great graphics, large environments, et cetera. Lots of things that will put a strain on a computer’s ability to maintain solid framerates or avoid horrendous loading times.

            Python is primarily useful when you’re doing things where what matters is the amount of time it takes to produce a program, not how much time it takes to run it, because it’s a lot easier to write code in Python than it is in C or C++.

            As an example, let’s say that you need a program that can search your company’s database for something and cross-reference it with something else. If it’s a check that needs to be done once a day, it really doesn’t matter if the program takes ten seconds instead of a tenth of a second, and if your programmer can write the Python script in half the time, you’ve probably saved money.

            But if you’re writing a game engine, there are lots of things that need to be calculated every frame, which generally means somewhere between 30 and 60 times per second. At that point, speed starts to matter a LOT. Having a per-frame process that takes .5 seconds instead of .05 will cut your framerate by a factor of ten.
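To put numbers on that frame budget, here's a rough sketch in Python itself (the workload and entity count are made up for illustration): at 60 fps you get about 16.7 ms per frame, and even a modest pure-Python loop can eat a visible chunk of it.

```python
import time

FRAME_BUDGET = 1.0 / 60  # ~16.7 ms per frame at 60 fps

def per_frame_work(n):
    # A toy per-frame task: update n "entities" in pure Python.
    # Every iteration pays interpreter overhead; a compiled
    # language would run this loop orders of magnitude faster.
    total = 0.0
    for i in range(n):
        total += (i * 0.5) ** 0.5
    return total

start = time.perf_counter()
per_frame_work(100_000)
elapsed = time.perf_counter() - start

# If one toy loop already consumes a big slice of the budget,
# there's little left for physics, AI, and rendering setup.
print(f"{elapsed * 1000:.2f} ms of a {FRAME_BUDGET * 1000:.1f} ms budget")
```

The exact timing varies by machine, but the ratio between the budget and the interpreted loop is the point.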

            • Tizzy says:

              I could not have said it better. :-)

            • Zak McKracken says:

              There’s one aspect missing: Python is used in scientific computing a lot, and there are entire projects using just that … and a bunch of libraries containing properly optimised code for everything that approaches heavy lifting.

              You could probably create a finite element code in python these days. As long as you took care to use the existing libraries for matrix and vector maths, you wouldn’t do half bad.

              In a game context, this means you could probably put all the drawing, screen-updating, physics-simulating stuff in proper, highly optimized C++ libraries but the organisational stuff (level layout, mission goals, event triggers, when to spawn new enemies, when to level up …) into Python. That’d make it super-easy to rearrange maps, scripted events, and probably quite a few more things.

              Heck, massively parallel visualisation software for immense datasets has been written mostly in Python, and that’s all about dealing with limited RAM and CPU/GPU cycles in the face of gigantic datasets. The trick is singling out the really performance-critical bits and moving them into libraries.
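A tiny sketch of that split, using numpy as the stand-in for the "optimised library" (entity counts are arbitrary): the Python code only *orchestrates*; the per-entity loop runs in compiled C inside numpy.

```python
import numpy as np

N = 10_000          # number of entities (made up for the example)
dt = 1.0 / 60       # simulation timestep

positions = np.zeros((N, 3))
velocities = np.random.default_rng(0).standard_normal((N, 3))

# One vectorised statement updates every entity at once.
# The loop over 10,000 rows happens inside numpy's C code,
# not in the Python interpreter.
positions += velocities * dt
```

The "organisational" layer (when to spawn, what to trigger) stays in plain Python, where speed rarely matters.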

          • Zak McKracken says:

            Wait, challenge accepted for writing a game in Python or in Assembler?

            I still remember the times of typing program listings from magazines into a hex editor, and I even learned the basics of assembler once, but really… on today’s PCs with variable hardware, drivers, whathaveyou… you’d probably die just trying to figure out what the hardware is, let alone addressing it.
            … then again, for a small and well-determined system like a RasPi — that might actually work, though it’d still be a lot more complicated than back in the day.

        • Nixitur says:

          Uhhh, I really don’t like Python. I always feel like I have to know the entire implementation of a function to know whether it will accept a certain object as an input. And the enforced whitespace and lack of {} just messes with my perception of control flow.

          I suppose that’s just a matter of getting used to it, but I seriously can’t fathom how anyone can deal with duck typing. Look, I just want to know exactly what functions the object has to support to be compatible with this function, that’s all.
          In Java, I’d just look at it and say “Oh, it’s the interface XYZ. So, if I want the function to work with my class, I have to implement functions f and g and if I forget that, I’m gonna get an error at compilation time.” and I know all I need to know just from looking at the documentation of that one function instead of having to read through the entire freaking page.

          Oh yeah, that’s another thing that baffles me. Who the hell thought that cramming all the vaguely similar classes onto one page was a good idea, easily readable or simple to understand?

          Every time I look at the Python documentation, I get slightly miffed. I don’t understand why everybody loves this language and says that it’s so easy to learn and that its documentation is so great.
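For anyone following along who hasn't met duck typing, here's the complaint in miniature (class and function names are invented for the example): nothing declares what `make_noise` accepts, and an incompatible argument only fails at runtime, on the exact line that calls the missing method.

```python
class Duck:
    def quack(self):
        return "Quack!"

class RubberDuck:
    def quack(self):
        return "Squeak!"

def make_noise(thing):
    # Duck typing: no interface, no declaration, no compile-time check.
    # Any object with a .quack() method works; anything else raises
    # AttributeError here, and only here, at runtime.
    return thing.quack()

print(make_noise(Duck()))        # Quack!
print(make_noise(RubberDuck()))  # Squeak!
```

(For what it's worth, much later Python versions grew `typing.Protocol`, which lets a checker verify exactly the "which methods must this object have" question Nixitur is asking.)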

          • Zak McKracken says:

            Obvious XKCD reference:
            https://xkcd.com/353/

            …that’s why.

            Seriously, there are things that are just stupidly easy to do in Python that I’d have no idea how to accomplish in other languages. Removing variable declarations and brackets around code blocks just speeds up writing code, and is visually cleaner. One of the goals was to speed up programming and improve readability, so that’s accomplished.

            With great power comes great responsibility, so there are a number of hacks that are possible but very risky. I learned to code on C64 BASIC, then turbopascal, later Fortran and bits of (non-object) C. Going from there to Python was a huge step but once you’ve got the habits that allow you to breathe in this world, you never want to go back. At least I don’t.

            …then of course it’s a question of using the right tool. So me doing proper computing with a light GUI is just fine since there are loads of libraries for everything I need. It completely replaced Matlab for me, and for most other people I know. For other things there may not be the same infrastructure, though I’m still impressed any time I start looking for some Python libraries that could help me out in some way…

          • Zak McKracken says:

            oh, and if you want declared data types, you can always use numpy arrays with typed fields. The types and size of an array are immutable and you can even give the fields names, so you could basically replace a dictionary with an np.array. Also, vector maths is super easy (it’ll even use your CPU’s SIMD instructions), and you can feed most formulas an array of numbers instead of a single number, and it’ll give you the array of results. Doing it any faster in C or Java will require you to write LOADS of code, and know very well what you’re doing.
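A rough sketch of what that looks like (the field names and sizes here are made up): a structured numpy array has a fixed size and declared, named, typed fields, about as close as Python gets to an array of C structs.

```python
import numpy as np

# A record-style dtype: each element is (x, y, hp) with fixed types.
entity_dtype = np.dtype([("x", np.float32),
                         ("y", np.float32),
                         ("hp", np.int16)])

entities = np.zeros(4, dtype=entity_dtype)

entities["hp"] = 100          # assign a whole column at once
entities[0] = (1.5, 2.5, 80)  # or one record at a time

# Vectorised maths over a field; the loop runs in compiled code.
distances = np.sqrt(entities["x"] ** 2 + entities["y"] ** 2)
```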

  3. Alan says:

    I love Blow’s games, especially Myst, although I’m still not clear why he changed the name for the upcoming re-release.

    (I’ll stop telling the joke when Blow proves it’s unfair. He’s failed to do so for 4 years now.)

    • The Rocketeer says:

      Shot in the dark, but… Are you Alan Broad?

    • Ravens Cry says:

      I admit, I don’t get the joke. Explain?

      • Lachlan the Mad says:

        Blow’s new game, The Witness (currently seems to be in that last 10% of the work which takes up the other 90% of the time) is being billed as a spiritual successor to Myst. They look really similar.

      • Alan says:

        Blow is hyping The Witness as innovative, but to this point everything he has described or shown looks like Myst, or at least RealMyst.

        • Lachlan the Mad says:

          The main difference that I can see is that The Witness uses a consistent puzzle language. In Myst, the puzzles within an Age usually followed a theme, but aside from a few minor details there was never a unifying language. In The Witness, every single puzzle is interacted with by messing about with mazes, which admittedly sounds dull. Still, you know how Portal begins by teaching you the very basic rules of portals and then expands it into physics-defying glory? Best-case scenario is that The Witness begins by teaching you “start at entrance, find exit” and then will expand it into mind-puzzling beauty.

  4. Alan says:

    One of the advantages of C is that it’s the programming language just about every other programming language can (relatively) easily interface (link) with. So if you want to write a nifty new library that as many people as possible can use, C is a pretty good choice. Of course, if all of your libraries are in C, why mess around with another language when you could just stick with C or C++ and eliminate the hassle of integrating with another language?

    I think there is a lot to be said for embedding a second language for non-CPU intensive work. Lua seems to be popular, as it’s relatively small and easy to embed.

    People who find programming languages interesting might be interested in Inform 7. It’s a programming language designed to write text adventures in. It’s really well tuned to that specific problem space. I wonder what a programming language tuned to 1st and 3rd person shooters would look like.
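The "just about every language can link with C" point is easy to demonstrate from Python's side with the standard ctypes module. This is a sketch and it assumes a Unix-ish system; how the C library is located is platform-dependent.

```python
import ctypes
import ctypes.util

# Locate and load the platform's C standard library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello world"))  # 11
```

Every mainstream language has some equivalent of this, which is exactly why a library with a C interface reaches the widest possible audience.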

  5. Phill says:

    There is one big downside to having a language designed and used for games programming: you create a barrier between games coders and everyone else. If you know C, C++ or Java (and it’s not that hard to be competent in all three) and you are at some kind of senior level, then you’re not going to find it so easy to switch into games when you have no experience of it. And vice versa: experienced games programmers will be ghettoised and harder to employ in other jobs.

    This is bad for flexible career prospects, and I suspect particularly bad for games programmer salaries, which are already significantly below the levels of the rest of the industry.

    I imagine a lot of programmers would still want to code games in C++ if only to keep their career options open.

  6. Piflik says:

    This might sound arrogant and might be a bit controversial, but I have to disagree with your notion that the experienced programmers would be newbies, if you switched to a new programming language. A good programmer should be able to learn a new language in a couple of weeks. Especially an experienced one.

    First of all, programming languages are fairly similar in both syntax and semantics (discounting functional languages like Haskell, or esoteric languages like ArnoldC). Secondly, a programmer doesn’t so much write code as implement algorithms. And these algorithms are independent of language.

    Rewriting existing libraries is a real hassle, though…

    • guy says:

      When you need to make an optimal implementation of an algorithm, that is not nearly so independent of language.

      • Ayegill says:

        As long as the language is still imperative (so not a corner case like Haskell), and doesn’t hide a lot of complexity from the programmer (which you really don’t want if you’re that worried about performance), I’d say most of the relevant skills should still translate.

        I’m not saying people will be up to speed after messing with a new language for a weekend, but I think we’re talking weeks or months to use it proficiently in the relevant domain, rather than years and years.

        • Ingvar M says:

          There are interesting corner cases when you switch between almost-similar languages (say C++ and Java, two languages I am somewhat familiar with) where things that are perfectly fine in one will have a high chance of performing atrociously in the other.

          If we look at C++ vs Java, the cost of heap-allocating and then rapidly deallocating data is bordering on expensive in C++, but is essentially free in Java. On the other hand, stack-allocations followed by deallocations in C++ are essentially free.

      • Peter H. Coffin says:

        But knowing how to recognize and build an optimal implementation is language-independent. That’s design work. That’s writing. That’s not plugging together prewritten chunks and fiddling with the API to make it work. Because if the latter is what you’re doing (and you do then end up having a language dependence), it doesn’t matter whether the chunks you’re pasting are in your head, coming from the team’s code library, or being cribbed from a web page somewhere. The only reason they’re “optimal” is because someone told you they were, and maybe you tested a couple in a particular set of circumstances that may have no bearing on what’s actually fast. It’s bordering on cargo-cult programming. And frankly, it shouldn’t be in your head, because you want to build the right kind of thing into the language in the first place. It’s a waste of time to know that “Oh, I have this kind of data, so a selection sort will be faster than quicksort most of the time” and implement it yourself instead of just calling a built-in selection sort feature.
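As a concrete illustration of that trade-off (the function here is invented for the example): a hand-rolled sort is a dozen lines you now have to test and maintain, next to a built-in call that's shorter, clearer, and tuned by people who actually measured it.

```python
def selection_sort(items):
    # Hand-rolled O(n^2) selection sort: repeatedly find the minimum
    # of the unsorted tail and swap it into position.
    items = list(items)
    for i in range(len(items) - 1):
        smallest = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items

data = [5, 2, 9, 1, 5, 6]
assert selection_sort(data) == sorted(data)  # same result; sorted() is one call
```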

    • Phill says:

      Two things.

      Firstly, yes, an experienced programmer who has learned more than one language in the past will likely pick up the basics of a new language pretty quickly. But to really be an expert in a language – to be a team or project lead – you have to know a language better than that. I know C++ pretty well. I managed to get stuff to compile and run in Java with no prior knowledge of the language in a trivial amount of time. Which is fine for simple projects. But the hidden problem is that I am designing Java code as though it were C++. Different languages have different philosophies, subtly different expectations of how to do things. Unless you understand those, you will probably find yourself fighting against the language by doing something the ‘wrong’ way at some point. (And all the C++ experience in the world isn’t going to be much use if you get the job of writing PHP for the web interface to the game dumped on you instead…)

      Secondly, and more practically, the first pass of filtering in the job application process often involves brain-dead comparison of bullet-point skills lists on a CV (or resume if you are American) against the bullet-point list of job requirements, by someone who has no understanding of how significant the differences are. Particularly when employment agencies get in on the business. That creates a wall of stupidity separating programmers experienced in different languages. An artificial wall to be sure, but even artificial walls reduce the movement between groups. Which would be fine if all groups were equally in demand – but games programmers are pretty poorly paid already because there is an excess supply of them compared to other programming jobs.

    • silver Harloe says:

      Learning a new language: 1-2 weeks

      Learning the idioms, the ins and outs of the libraries, all the little gotchas where two pieces of code that seem to do the same thing have different performance characteristics, etc: years.

      When I went from being a Perl5 programmer to a Java programmer, it was really easy to pick up the syntax and write basic code. That doesn’t mean any of it was *good* and 5 years later I wanted to hit myself for all the things I did wrong when I started using Java.

      When I started using PHP and Javascript for my latest job, same thing: the code was there and it worked, but I want to fire me-from-7-years-ago for gross incompetence. I wrote functions for things that I now know are in the libraries. I used some Java-like structures that I now know had better ways to be written in PHP. I worried about the memory or speed costs of things that were slow or hoggish in Java, and not at all about the costs of things that were efficient in Java but slow or hoggish in PHP or Javascript. I avoided callback functions because the reflection library in Java made them painful to write in 2001, but they are easy to write in Javascript. When I found a library function that did what I wanted, I was happy, even if it was annoying to use, and then later I’d find there was an even better fit library function I should’ve used instead. I used some Design Patterns that only exist in C++ and Java because they are so strongly typed, and are quickly replaced by arrays in PHP and Javascript.

      The list goes on, but basically: it was trivial to learn the languages, but experience really does make you a better programmer in a particular language.

      Plus there is what happens when you need to read someone else’s code in a new language. If their code works, it can be fairly straightforward, but if it doesn’t, and you have to figure out why… Well, what I can say is: debugging time goes down dramatically with experience in the particular language you’re using. What might take a few hours to debug when you have the “1-2 weeks of exposure” can take you a minute or two when you have years of experience with that particular language.

      • Shamus says:

        PHP is such a strange creature. I’ve been using it for years and I still think of myself as a novice.

        I think part of the problem is that you can’t tell how well you’re doing. Is this code fast? Am I using a lot of memory? I have no idea. I can’t tell. Which means that once code gives good output I’m done working on it, because I have no way to measure further improvements.

        • Julian says:

          PHP is such a byzantine mess of a language that I seriously doubt anybody actually knows it.

        • Chris Davies says:

          PHP was someone’s compilers 101 homework that inexplicably got popular. It’s not surprising it’s almost impossible to know whether you’re doing the right thing, it’s such a bizarre and amateurish language in general. I remember right up to the end of the PHP 4 era, being constantly annoyed assignment expressions weren’t rvalues, except of course in the special case of the prefix and postfix increments. I can’t even imagine what the grammar for the language looks like at that point.

          That’s not even mentioning the half-arsed modules that come with it. I remember the OpenSSL module being some ultra-thin wrappers around a random assortment of crypto functions. No attempt had been made to make the calls more “PHP like”, and of course being anywhere near complete was out of the question. Quality control was non-existent.

          Worse, the half-arsed attitude filters down to projects written in it. Consider MediaWiki, the only web application so completely broken that they managed to make it CPU-bound. Every attempt they’ve ever made to fix what they laughingly describe as a parser has been abandoned, and instead they rely on ever more byzantine caching architectures to mask the problem. And this thing runs one of the most popular sites on the internet. Insanity. If they just devoted some of their constant begging-drive money to fixing the thing, maybe they could scrap like 80% of the CPUs they need to run the fucking thing.

      • I catch myself all the time, I dig through some old code (sometimes posted on the net) and I see how archaic the way I did it is.

        The way I code has evolved as I do evolve.
        I’ve slanted towards always predefining things, set things up as much as possible before entering a loop, I’m not afraid to use extra memory at startup if it means I can re-use it later (instead of wasteful allocate/free calls all the time).

        When I close my program I only close the handles the OS documentation states I should close; the rest I do not. My programs take less than a second to start and less than a second to quit.
        The GUI is in its own thread, so interacting with it will not halt the rest of the program.

        Checking if a new version is available runs in its own thread; that way, waiting for a slow server won’t halt the program.

        My code looks tighter and shorter, and I often find myself tweaking and improving it all the time.

        Instead of relying on the OS version I now feature test, instead of just assuming libraries are available I dynamically load them which means I can actually show a useful error to the user if something is wrong.

        For example, with a few simple changes I managed to make it possible to compile the WebP library in such a way that it can be used on Windows 2000 and later (instead of just Windows XP SP3 and later); the upcoming v0.4.3 of WebP should have a “legacy” option available.

        I hate bloat, but I’m not afraid to use some disk space or memory if it speeds up performance.

        I’m also a huge hypocrite, I cringe all the time when I see my old code as I broke pretty much all my current “rules” when coding.

        My mantra for coding these days is “Tight, Simple, Efficient.”
        I’m sure a few years from now I’ll think the me of today sucks.

    • If you can “read” C like syntax then you can read most languages.

      I first started with Basic on the C64 and Atari XE, later AmigaE, C, a sliver of ASM on the Amiga, and HTML; then later PureBasic, C and a dash of ASM on the PC, plus PHP and HTML/CSS/Javascript.

      ASM and the Basic variants have familiarities; C/CSS/Javascript/PHP have familiarities.
      I’m usually able to do a pretty good implementation even in languages I do not understand well, because I enjoy stripping code/functions down to their core functionality. If code is hidden in a class somewhere I prefer to just get it and break it down into the parts I need. That’s how I learn and understand the code used.

      • Ingvar M says:

        I’ve ended up having to cram Ruby, Erlang and Haskell to the point where I could judge if the code I was given by an interview candidate was close to or far from a correct implementation of the interview question.

        It’s not that much of a problem, as it turns out.

        And, yes, if you (generic you, here) ever end up having a coding interview with me, I’ll choose a problem guided towards showing mastery of whatever of “C”, “Python”, “Go” or “Common Lisp” that you’ve said is your prime language, but leave the actual choice of language up to you.

        Seems like a fair test, as the person would most likely prefer or feel more comfortable with one of them in particular.
          I might however (if interviewing) ask to see the example in both the language(s) I’d be hiring for and the preferred language of the interviewee.
          That way I could gauge how much “catching up” they need (if any) for the language they’ll be working with.

          It would be odd though if one is hiring for a C coder and they prefer Python. But I guess you could send a memo to HR and say you got this guy that’s shitty at C but amazing at Python and hopefully a Python project is ongoing that might benefit from such a person.

          Now, some jobs allow leeway in what language a project is coded in (heck, it might be using .dll’s (Windows) or .so’s (Linux), in which case C and other languages can be intermixed with no issues, with some care).

          • Ingvar M says:

            I’ve written C++, Go, Python and bash for work (and some JavaScript) and all of it has been peer-reviewed before being checked in. I think I mostly wrote ksh and C during my interview process, but it’s been a while.

            We typically take the approach that with a few weeks, style guides and extensive peer reviewing, the syntax, semantics and idiosyncrasies of a language can be rapidly acquired.

            • silver Harloe says:

              Yes, having a mentor with a ton of experience can speed up the acquisition process, but I still maintain that learning a new language, and learning it WELL are different matters and the latter takes a lot more time. Even with mentoring, it’s possible to get into traps of “good knowledge acquired badly” – where you know A is preferable to B, but can’t tell anyone why.

              If you just tell your shop, “and today we’re using Java,” they will be greatly slowed down and write terrible code – maybe if you hire an expert, provided that expert has the right personality to get along with your team dynamics, you can reduce the time and make better code…

              Overall, my point is that the sentiment that “any good programmer can write in any language” has a lot of caveats of the kind management doesn’t usually take into account.

              My other point would be that experience matters. Young me might reply “you just want to feel like the time investment was worthwhile,” but older me knows young me was a fool. Experience makes a lot of difference. Even in the mentoring example, it works because the mentors have experience.

              • Ingvar M says:

                Yep, mentoring and guidance are the key points here.

                Normally, we tend towards being generous with “this works, but Y would be more readable/efficient/idiomatic”, and as long as someone can motivate why what they wrote is better than anything else, I’m happy to let it pass. (The last one was “this is a new file; while all the functionality is trivial, I’d like to see either ‘unit tests’ or ‘a short essay on why this will never need unit tests’”, and I would genuinely have been happy with either.)

                Because once you have another file, it WILL acquire more functionality, building on what’s already there, so you probably do want to have some tests.

  7. Joakim says:

    This is unrelated to the column, but the other week I wrote a question for this column, directed at one of your other contact mail addresses (the one listed on your “author” page). I just wonder if mails sent there are still considered/read through for the Experienced Points column?

  8. SlothfulCobra says:

    In your column, you mention that one of the things that makes C still very useful to videogames is the demand for more high-powered graphics. It seems that there are so many things in the videogame industry that all go back to the graphics arms-race, but whenever I look back at what games look the best, great art design/direction still beats technical graphical proficiency every time. There are still games coming out new that don’t look as good as the best games that came out a decade earlier.

    There’s been a surge of indie games that maintain more “retro” aesthetics recently in order to make dealing with graphics easier so they can focus on art design. Why haven’t any AAA games tried scaling back on technical aspects in order to focus on design?

    • MadTinkerer says:

      I can answer this one. The answer is that Marketing doesn’t ever care about design. I’m not just being cynical here: the way the companies are structured is that Marketing makes decisions for Marketing reasons and if someone in Marketing has a background in design, only then does a decision get made partly for design reasons.

      This is, for example, the reason behind the awful state of Final Fantasy XIII. Final Fantasy XII is a great game to play. Huge playable world, better cities than Skyrim (You bet your ass I mean it.), wonderful combat system that ties into simplified custom AI programming, great class system where you build your own classes, and an unfinished story.

      So yeah, Final Fantasy XII has exactly two flaws: the stupid hidden treasure chest thing that keeps you from getting one of the best weapons, and the fact that the story ends around the equivalent of the second third of a trilogy. That last part of a story was made into a game, but you need a DS to play it. So Marketing decided that for XIII, they needed to rework the milestones so that telling a complete story was the highest priority and all of the game parts would be secondary concerns.

      That is the reason why Final Fantasy XIII is such an atrocious game. Because of Marketing not caring about how to make a good Final Fantasy game. FF12 may not be a complete story, but it’s still a much, much, much better game than XIII. Squeenix have learned their lesson, but the lesson they learned is that their market doesn’t want that specific terrible idea, not that they should think of things from a design standpoint to see if a good marketing idea is a terrible design idea. Because then they made XIII-2 and XIII-3, which are not nearly as awful as XIII but still pale in comparison to XII’s gameplay, world design, and accessibility.

      And all of them have wonderful graphics, sound, and music.

      AAA publishers take design for granted and will always take design for granted. Because they are AAA publishers.

    The reason C is so popular is that there are so many examples, books, and sites about it, and you’d be hard pressed to find a computing platform that does not have some form of C compiler for it (by C I mean “C” and not “C++”).

      The “C syntax” will probably stick around forever.

      • CJ Kerr says:

        > (by C I mean “C” and not “C++”)

        Honestly I’m hard pressed to think of a >=32-bit architecture which lacks a C++ implementation. There are even a few 16-bit processors with good C++ support. So in the gaming space it’s probably safe to assume that C++ is available.

        But yes, there are working C compilers for almost everything. The embedded space will keep C alive for a long time yet.

      • silver Harloe says:

        much ado is made about other languages… but guess what language their compilers/interpreters/environments are written in?

        if your guess sounds like the ocean, you’d win most of the time, and you’d understand the language’s real staying power

  9. MadTinkerer says:

    The first three Jak and Daxter game engines were written in Common LISP. When the guy who wrote the engine left the company, he was the only person who knew LISP, so they dropped everything he had been working on and started working in C++ like everyone else.

    This fact is particularly impressive because I still don’t know how the heck he wrote something in Allegro CL that compiled to work on the PS2. I really wish more was written on this.

    • Ayegill says:

      IIRC, it was actually a custom language called GOAL(Game Oriented Assembly Lisp), which compiled to playstation assembly. The compiler for this language was written in Common Lisp.

      Edit, found the link:
      http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp

    • Cybron says:

      That’s insane. I have to look that up.

      Okay, having looked it up, it isn’t actually done in LISP – it’s done in some variation of LISP they developed that functions as an imperative language. That’s infinitely more sane than actual LISP.

      Still pretty interesting, though.

      • Ingvar M says:

        There’s nothing about Lisp that stops you from being imperative… It’s actually easier to write obnoxiously imperative code in Common Lisp than it is in C++, but no one ever does, because why would you want to? (The thing to look up, should you want to, is “tagbody”, which is an essentially pure imperative subset with goto.)

        That game engine was also used for the first three Ratchet & Clank games.

  10. Dragmire says:

    Thanks for the vid link, I have friends going through game dev right now who’ll likely find it compelling.

    The “Ask-a-Shamus” sounds like a neat idea. I wonder where the line is before a technical question becomes too complex.

    For instance, is it too much to ask something like, “Why is it so difficult for PC games to be relatively bug-free and stable, at least across the same operating system?” or, “What is it about people’s PC hardware that makes it difficult for games to function in the exact same way for everyone, where bugs and bug fixes affect different setups differently?” Bug-related questions are probably going to be popular based on recent releases like Ass C: Unity.

    This covers things like how hardware interacts with software, drivers, compatibility, OS updates and so on.

  11. TehShrike says:

    Tangentially related, I’ve been reading this book and enjoying it a ton so far: Game Programming Patterns.

    So far I’ve read the first third of the book, where he goes over several programming patterns from the GoF, and explains them in the context of game development.

    I haven’t started writing a game yet, but they gave me a good understanding of the patterns, and in a way more interesting way than the dry theoretical text of the original book. The author is a good writer.

    I’m reading the chapter on finite state machines right now. Would definitely recommend.

  12. Tizzy says:

    So, CPU cycles and memory are plentiful, and C is overly geared towards conserving those.

    Yet, games require terrifyingly powerful PCs and still run slow as molasses.

    How do you reconcile these two points?

    • Shamus says:

      By “plentiful” I mean by the standards of when C was designed. The memory saved by manual string management is completely trivial today and doesn’t help us meet our current performance needs. (And yes, you don’t need to do manual string manipulation these days, but I didn’t want to get into the C vs. C++ business.) It’s not that C is OVERLY geared towards performance, it’s that a lot of its quirks are due to design decisions that made more sense in 1972 than 2014.

      • Arkady says:

        The actual memory space saved by C-style strings isn’t much, that’s true.

        But memory allocations from the heap are still expensive: especially in multithreaded stuff. (You’re pretty much guaranteed a cache miss and some kind of mutex lock/unlock or some other overhead to stop two threads allocating the same memory at the same time to different things.)

        C-style strings let programmers allocate from the stack instead and give them tight control over the number of heap allocations. That is their real advantage these days.

      • Tom H. says:

        Actually, you *do* need to do manual string manipulation. See the recent discussion on chromium-dev@chromium.org where the Chrome browser is doing tens of thousands of std::string allocations and deallocations for every character you type in the search box. This has a real, serious performance impact. (At least, we think so – just measuring the performance of a 10-million-line-of-code interactive program is pushing the state of the art.)

    • guy says:

      There’s no such thing as having so many CPU cycles that you can’t use them all up. But if you have a language that shaves a couple bytes off each object and objects are kilobytes in size, that’s not a terribly important issue. The other side of the coin is that if your language is too complicated, your code might take twice as long, or even asymptotically longer, than it should and you simply never notice. And C opens up the possibility of various bizarre errors from manipulating memory directly. You can add a letter to a number and not have the compiler tell you you’re an idiot, and you can muck up your pointers. Which is sometimes good but is bad when you don’t want to do that.

      • “If you have a language that shaves a couple bytes off each object and objects are kilobytes in size, that’s not a terribly important issue”

        Actually the effect is cumulative.
        A modern game has potentially millions of objects (by objects I do not mean 2D object, but anything, like a decimal number or a yes no state flag), and if a few bytes are wasted here and there and CPU cycles are gobbled up yonder and beyonder then that translates to not reaching 60FPS or to blasting past 4GB in memory use.

        If an object is no longer used it is usually freed (by a garbage collector or a routine to free up resources), and if an object is created and freed several times per second then that has an impact.

        You can’t add that kind of memory control to an object-oriented language itself, though you can build it into a game engine.

        Personally I’d like to see something like C but way more strict/restricting, maybe a subset.

        As to the issue of games running like crap: that’s because you’ve got objects and classes calling other classes or methods, and wrappers for other wrappers, and so on.
        Most games also use frameworks, some of them made to work with the game engines, others hacked to work.
        Some use old game engines with a graphical “spruce up” but the engine itself is otherwise old and dated and barely holding together.

        Some people look down on rewrites (redoing the whole thing from scratch), but me, I see them as needed, because over time, regardless of how well-meaning or strict one is, cruft tends to stick to it and create a real mess.

        You can only add so many layers of paint before you need to consider stripping off all the old paint and sand the darn thing.

        • guy says:

          A modern game has potentially millions of objects (by objects I do not mean 2D object, but anything, like a decimal number or a yes no state flag),

          See, I’m a java native and we call those guys primitives. You really shouldn’t touch memory allocation for those directly; everything expects them to be certain sizes and dealing with variable size ones is going to introduce a whole host of problems.

          Objects are generally variably sized (an instance may be fixed size but you can have different ones of different sizes) and tend to be much larger. If there’s a way to dependably reduce their overhead without changing their performance, that sounds like a job for the compiler. Sure, you might be able to shave off some more space doing things by hand when you know things the compiler can’t see, but getting that working takes time away from other things you could be doing, and it’s likely that making sure the algorithms are correct and efficient will be more useful. Overlook a single instance of non-tail recursion because you’re saving on per-object overhead and you can probably wave all of those memory savings goodbye.

          Granted, I’m assuming the compiler is good enough that it won’t pointlessly waste memory. Also incidentally Java doesn’t optimize tail recursion because it breaks Java’s exception handling, which involves proceeding back up the call stack and obviously doesn’t work so well if frames get reallocated.

          • Zukhramm says:

            Actually the reason Java doesn’t have tail-call optimization is that certain security related code was counting stack frames to know what to allow. This has recently been removed and while that’s not a promise of TCO, the big block against it is now gone.

            Also, for Java 10 there are plans for value objects: user-defined objects that behave like primitives, with good memory layout in arrays instead of pointer jumps.

            Unfortunately Java 10 is planned for 2018, which feels pretty far away right now.

    • Phill says:

      As programs get ever larger and more complex (and they are vastly more complex than they used to be), trying to keep your code relatively bug free becomes harder and harder. Heavily optimised code tends to be an unmaintainable disaster area. It’s fine (more or less) if you can write it once and forget about it, but in the real world that is very rarely an option.

      (Plus it takes a very long time to really optimise code, and the cost-benefit analysis rarely favours it except for serious performance bottlenecks).

      In practice, writing code that can be understood quickly by the other 15 people who might have to fiddle with it, that can be adapted to changing requirements or rewritten to fix some tangentially related bug, is much more important. Actually optimising code properly basically means locking down any further changes and then spending a few months on it (and hoping that the bugs you introduce doing this aren’t life threatening). As you’ve probably noticed, most games tend to be released these days when they still have pretty obvious bugs in them; the idea of persuading marketing / publishers to wait a few more weeks while the critical bugs are fixed, then a few more months for the other significant bugs, and then a few more months again to work on the performance is a bit of a non-starter these days.

      • With the exception of CD Projekt Red, the guys making The Witcher 3, which was postponed because the devs wanted more time.
        I’m pretty sure the money guy (and to some extent the marketing guy) over there is shedding a few tears, but damn, respect for doing it right, guys.

        I call this the Deus Ex school. (The devs of the original Deus Ex were said to have reached gold, the game finished and ready for release, and then: “Hey! Why don’t we spend 6 more months polishing it?” And they did. Deus Ex was not bug-free or without issues, but those 6 extra months probably turned the game into the landmark it is.)

        Also, The Witcher 3 is the last in a trilogy, and my guess is that the devs want to make sure that it ends in a way (or ways rather) that they and the players will be happy with.

        Interestingly enough, the new date moves The Witcher 3 away from the GTA V PC release, something I’m sure Rockstar doesn’t mind.

        Rockstar (at least for their GTA games) seem to take their time when working on games too, I’m kind of glad they aren’t rushing the PC release of GTA V.

  13. Shamus, a language (or should I say languages) for making games has existed for decades.

    But for various reasons they’ve been ignored. Either because they are proprietary or the industry ignored them or saw them as “toy” languages.

    For 2D (and later 3D) there were AMOS and Blitz Basic.
    Both were BASIC-like languages with a strong focus on multimedia and games; you could make a game with just a few lines of code.

    They were marketed towards consumers who wanted to make games rather than professionals, though quite a few games were made professionally using them.

    For 1D (aka Text Adventure / Interactive Fiction) games there are many languages, TADS, Inform, ADRIFT, Quest, ALAN, and plenty more.

    The issue is that at a certain point you end up at a crossroads: a generic game programming language is just too… generic. There is a reason why game engines exist, as these are custom game languages if you will. Some even have WYSIWYG (What You See Is What You Get) editors allowing you to manipulate the game world in realtime in 3D.

    Instead of writing a game in a programming language you are programming a game engine.

    I suggested to AMD many years ago that they should look into putting a game engine (or parts of one) in hardware, allowing a game to essentially “upload” the game world to the hardware. That was many years ago, and I see no signs that they are working on anything close to it yet (hence me stating it in public now; it’s not like I’m ruining any secret plans).

    My suggestion to fix the issue of complexity in making games is the following.

    An industry-standard open source game engine, royalty and license fee free (MIT / BSD license?). The project is guided by industry experts, and any improvement to DirectX and OpenGL and Mantle and whatever Intel, AMD, Nvidia, Microsoft and the rest come up with is included in it.

    This engine will serve as a “reference” engine, and developers/companies using it are encouraged to share their improvements with the project.

    The whole thing is financed by donations/sponsorships from hardware manufacturers (the “build it and they shall come” principle), so there exists a reference implementation of an engine that makes use of their hardware features.
    In addition, those working on the project could do consultancy (for a fee) for companies that need coding help to implement/do certain things (several open-source-based companies do this today).

    The learning curve would drop too when getting developer jobs, as many other engines would be based on the reference one, so there would be familiarity.
    The reference engine would also be multi-platform.

    I think this would work better than a game programming language (which has been done but abandoned/ignored in the past by the industry); I cannot recall ever hearing of a reference game engine, though.

    • ET says:

      Those examples are all languages specifically trying to solve very high-level problems, which are all different for different types of games. How do I make a text adventure? How do I make a platformer?

      Blow’s recommendations are for a language that solves problems common to nearly every game: easier memory management that’s still efficient; eliminating architectures, library pieces, or even syntax which are unused or seldom used in games but exist in other languages/libraries because they make sense for other types of programming; faster compilation. Stuff like that.

      Also of note is that the examples you give solve problems which are difficult for non-programmers but part of your daily job if you’re a programmer. I’m not trying to bash them or sound condescending. They’re really great tools for teaching, or for getting people interested in programming. However, once you have made yourself a programmer, there are better tools to use, which solve the bigger problems you now face.

      • “Blow’s recommendations are for a language that solves problems common to nearly every game: easier memory management that’s still efficient; eliminating architectures, library pieces, or even syntax which are unused or seldom used in games but exist in other languages/libraries because they make sense for other types of programming; faster compilation. Stuff like that.”

        A subset of C/C++ is all that is needed. Heck, you could probably use a C compiler of sorts (Clang?).
        Just apply a pre-processor that ensures the C subset is followed (and spits out an error if not).

        The issue is your statement “common to nearly every game”. If you think only of FPS games then a game engine is ideal (Unreal, CryEngine, etc); for point-and-click games something like Wintermute is ideal.

        What is needed is a pre-processor and a standard library (kind of like a standard C library) where you can just add sub-libs with the functionality you need.
        Ironically, an open source game engine in some ways.

        But the types of games that exist are so vast that you’d need a generic language to cater to them all.

        There are also frameworks (SDL) that are sort of a generic game engine.

        The issue with C/C++ is that it’s been added to and changed over decades; the ideal is if there was only one way to do things. By enforcing a subset of C, any debugging is much easier. Heck, if you combine a subset of C/C++ with a game engine framework, then that is the ideal.

        You will never be able to create a programming language that makes it easy to make games without also restricting the flexibility in the types of games you can make, and as soon as a developer needs to start doing game code in another language just because the main language is unable to do certain things, then you’ve got a whole new mess on your hands.

        A lot of the junk in C/C++ is in the C standard library. Sure, there is some cruft in the compiler itself and the actual language, but a lot is defined in the standard library (C89, C99, C++98, C++03, C++11, and the upcoming C++14) and so on.
        These revisions try to fix things by adding/removing/changing things related to the standard library.

        Another issue, though, is the definition of a game.
        A RISK-like game is mostly just a map and a database; the disconnect, code-wise, between something like that and, say, Call of Duty is huge. You do not need dynamic music in a RISK game, nor do you need to worry about LOD and occlusion or shadows and shading or lighting.

        Also note that Epic and Mozilla (I think, can’t recall) managed to get the Unreal engine running in a web browser using Javascript and HTML5 stuff.

        Ideally a game developer should not have to worry about memory and file access and controller support and things like that. The game engine developer, on the other hand, does have to worry about stuff like that.
        A game developer should only worry about creating and piecing the content together.

        If you want the cost of making games go down then you must have a game engine, and if it’s modular so you can add/remove pieces you need or don’t need then that is ideal.

        If you need to code the game from scratch using a “game language” then that is going to cost a lot and the development time is going to be huge.

        Now if a game language is used to create a game engine, then game devs will not be using the game language but instead the game engine; the engine may have semantics that go in a different direction from the language it’s written in. At which point the language is irrelevant, just like today.

        An open source modular game engine would allow development time and costs to plummet compared to today. C/C++ would be convenient for such an engine, as would any other common language that is available on most platforms.

        A modular game engine / C game library can abstract away file and memory access and provide ready-made solutions for common problems.
        It also does not help that you have Mantle/DirectX/OpenGL to choose between. Ideally there should be only one, and it would automatically map to whatever is considered the native graphics API on the platform.
        Same with audio and DirectSound/OpenAL Soft (the original OpenAL is dead in the water).

        Frameworks like SDL mitigate some of it, but you still have to faff around with graphics and audio calls and memory and other things anyway.
        A “game programming language” will not automatically allow you to do plug’n’play programming of games (there is actual click’n’program game developer software out there, and the games you can make with it are rather limited).

        An open source modular game engine could be designed such that you could “swap” out the 3D graphics engine for a 2D one (figuratively speaking), or a 3D audio engine for a 2D audio engine, and so on.

        For example, I’ve got some local code here that allows me to just swap a .dll and I’ll be able to support an Xbox 360-compatible controller, a PS4 controller, or WASD. All of it is abstracted away; the player just chooses their controller/layout, and the only thing the game sees is that “forward” has the value 1.0 (but that value may originate from a button, an analog stick, a trigger, a foot pedal, or the “W” key on a keyboard).

        Stuff like this belongs in a library or game engine, not in the programming language itself.
        If you thought C was “slow” to compile, imagine when the compiler has to juggle all this stuff for ALL games.
        Want to compile a Solitaire game? Sure, just wait while the compiler compiles the game world.
        The player only sees a green screen and some cards, but underneath is a full-blown 3D engine?

        Now if you say that a game language should not have the engine included then we are no longer talking about a game language at all, just a generic programming language.

        Better attempts at better generic programming languages happen all the time. C, C++, Pascal, Fortran, Cobol, Delphi, BASIC (and the umpteen inspired ones that are the same but not really), AmigaE, PureBasic, Python, Javascript, PHP: all of these were designed because another language did not suit whatever design was needed.

        And sometimes a language was made because making your own compiler is kind of cool.

        Another issue is that a lot of games use a server-client model, and you can see this even in single-player games.
        I’ve actually had a game fail because the “client” could not talk to the “server”. I’ve seen rubberbanding/lag cause my character to be warped back a few steps just like in an MMO… but in a single-player game.

        Should this server client model be part of a game programming language?

        At the core a programming language is just IF THEN ELSE and a few other keywords.
        Start adding things like keyboard input, file I/O, networking etc and it’s a language with a standard library or modules.
        Add a huge amount of these together and you got a full engine.

        • ET says:

          Sorry, I should have clarified this earlier. Blow’s hypothetical language is made for game engine developers, not game developers, to use your words. They’re the ones doing all the resource-heavy programming*, who need a new language. Scripting languages and libraries already exist for the higher-level stuff, which is generally resource-light.

          * Graphics engines, AI engines, etc

  14. NotDog says:

    Another thing is that, for reasons I can’t grasp, C/C++ are virtually the only languages that can compile to an actual binary. Every other major language like Java, C#, Python, or Ruby requires the user to have some runtime engine installed. Not to mention languages that can only create programs for a certain platform (C#) or need to be made with commercial tools (whatever Adobe is selling for Flash and Flash’s descendants).

    This actually makes me upset. When writing apps I don’t want my users to have to install any additional crap to get my software to run, but C is bare bones to a fault while C++ is an ugly, ugly language.

    • guy says:

      Porting issues, mostly. There’s lots of fiddly stuff you may have to mess with to get things to compile for a given OS. Meanwhile, Java’s runtime environment means your code should work just fine on any computer with no changes whatsoever.

      • Kian says:

        Technically speaking, C++ programs do require the C++ runtime to run. The important bit being, the C++ runtime is tiny compared to Java’s runtime. For programs built with Visual Studio, for example, the computer that runs it would need the appropriate Visual C++ Redistributable Runtime.

        Programs built with gcc or clang will also have their own versions, which generally are already installed in the system. And version issues are generally easier to solve (as opposed to Java that only has a single version easily available).

        But since all the C++ runtime does is set up the environment before handing execution to the main function, it feels as if there was no runtime at all.

        • ET says:

          You sure you’re not confusing C++ with C#? Last time I checked, C++ was definitely compiled before-hand into machine code. A reference would be nice. :)

      C# is compiled into an intermediate language, and then compiled just-in-time into native machine code by the runtime, which is distributed by Microsoft.

          • Kian says:

            References: http://support.microsoft.com/kb/2019667
            Of note: “VC++ Redistributable Packages: The Microsoft Visual C++ 2013 Redistributable Package installs runtime components of Visual C++ Libraries required to run applications developed with Visual C++ 2013 on a computer that does not have Visual C++ 2013 installed.”

            If you are on Windows, go to “Add or Remove Programs” (it’s called “Programs and Features” in Win7) and scroll to “Microsoft Visual C++” and you’ll find every version of the Visual C++ runtime that your system has installed.

            Even though C++ is compiled to machine language (eventually), you still need the runtime. It’s just that the runtime is more of a primer than a running environment the way Java or .Net are. That’s the difference between managed and unmanaged.

            Which is why I clarified that this is technically speaking, but that in common parlance there is no runtime. If you hand someone a C++ executable, however, and they don’t have the runtime installed, the executable alone can’t run.

            • Richard says:

              The C++ runtimes are simply a set of libraries that contain the code which every single C++ program on a given platform needs, plus common functions that most of them use.

              They include the ‘boilerplate’ code like memory allocators, library loaders, the core functionality needed to talk to the operating system, the C++ standard library etc. The MS Visual C++ ones also have manifest checks to ensure the right version gets loaded etc (Side-by-side assemblies)

              The whole lot really could be statically-linked inside your binary if you wanted – however, this is usually only done for embedded systems because on ‘normal’ systems it simply makes your binaries larger for no benefit.
              (I’m currently working on 32-bit ARM system where I’m compiling the entire (RT)OS into my code.)

              The Visual C++ ones also allow Microsoft to patch your program when they find faults in the implementation of the Visual C++ standard libraries.
              – If you take a look in Add/Remove you’ll see a variety of patches to the VC++ runtimes you have installed.

              The theory is that all programs need those functions so there’s no point in a given PC having more than one copy. In practice a lot of installers just drop them in the program’s folder so you have many copies, but it’s worth a try.

            • ET says:

              Microsoft Visual C++ is C++ written in an IDE whose compiler didn’t finish compiling. That’s Microsoft’s choice, and not inherent in the language. C++ itself is a compiled language. :)

              • Kian says:

                GCC in linux does the same thing. You need libstdc++ to run C++ programs compiled with it: https://gcc.gnu.org/wiki/Libstdc++
                clang also has its own libc++.

                So, if the three major C++ compilers, on the three major platforms, all have runtime libraries C++ programs need to link against to run, I think it’s fair to say that C++ has a runtime.

                It’s a small, minimal runtime, basically a bunch of functions that are too large to inline and standard setup code to get your program up and running.

                Otherwise, I’d like to hear what you think a runtime is and why what all the compilers call a runtime is not a runtime.

                • ET says:

                  Well crap. I thought gcc compiled and linked everything into one file, which could execute on its own, without anything else needed. (Except if you were relying on DLLs in your program.) Now I need to check its docs for a flag to do such a thing. :P

                  • Kian says:

                    You can tell gcc to link statically. Although there are some technical issues you have to be aware of. For example, you might have issues with your dlls if they link to a dynamic version of the library.

        • “C” does not technically need a C runtime. If you are thinking of msvcrt.dll, then that is not really needed. It’s got functions like malloc and so on in it for portability/compatibility; originally it was Windows’ own system code.

          Those are just wrappers for HeapAlloc and similar OS APIs.
          Every process on Windows automatically has kernel32.dll loaded.

          The minimum you need in a C program is the ability to call GetModuleHandle on kernel32.dll and then use GetProcAddress to do feature testing; no other dependencies needed.

          From there you can simply get to all the APIs and stuff you could possibly need or dream of.

          Also note that C does not compile directly to binary (machine code, I assume you mean); it goes through an assembler first.

          Some language > Assembler > Machine Code,
          or in some cases:
          Some language > C > Assembler > Machine Code
          which is kind of weird when you think about it for too long.

          Oh, and the main() function: you can totally make your own and tell the linker to point to that directly. The argv and argc stuff is again a holdover for portability; the Windows API has special calls to retrieve that stuff, so you do not need argv and argc (most likely they are just empty anyway).

          A “C” exe can get freakishly small this way, a few KB in size and no dependencies at all.

          • Kian says:

            To be clear, I’ve been speaking about the C++ runtime, not C’s. C also has a runtime, but it’s even tinier than C++’s and often linked into the executable directly.

            Yes, the runtime is super simple stuff. I said this. But that doesn’t make it not a runtime. It’s not a managed byte-code interpreter, but “technically speaking”, that is, using the correct technical terms, it is a runtime. Which is why the dll is called “MicroSoft Visual C RunTime”.

    • WTF! I was supposed to reply to Kian here on the C runtime stuff but something went wonky.

    • Adso says:

      C and C++ are probably the most well-known ones, but there are many other languages that compile down to native code. Many of the more popular functional languages are generally compiled (such as SML, OCaml, Haskell, and Common Lisp), and various languages that want to operate in the same space as C and C++ are as well (such as Go, D, and Rust). None of these are quite as popular as C or Python, but they are all used in production in various places, so they’re not ENTIRELY obscure.

    • Kyte says:

      It’s the difference between static linking and dynamic linking, plus DLL Hell.
      Java, C#, etc. runtimes include both the core libraries you’d need and the runtime that everything works on. The runtime is generally not particularly big, but it still does a fair amount of work.
      C and C++ do indeed have a runtime (ever heard of MSVCRT?), but it’s so small it typically comes shipped with Windows or with the game itself. Then all the libraries the game depends on come bundled with the game as separate DLLs (dynamic linking) or as part of the executable itself (static linking). This is a valid way to do things, but it leads to the famous DLL Hell problem in the former case, and in the latter case makes it impossible to update libraries without releasing a new version of the executable.

      Fundamentally the difference is that C#, Java, etc. try to be platform-independent, so you need a translation layer between their bytecode and the machine. This is the runtime.

      • NotDog says:

        My point was that if I release something written in, say, Java, everyone will have to download the Java Runtime Environment before they can use my software.

        This is not something that’s needed for a compiled language like C and C++. Though I have noticed that Steam launches the Visual C++ installer a lot whenever I start a game for the first time…

        • Duffy says:

          But if you hand that C/C++ Windows compiled executable to someone using a Mac it won’t do anything. Interpreted languages (which Java is) were specifically designed to solve portability problems by adding a middle layer to the process. It’s very much done for a reason.

          • NotDog says:

            You’d have to port and compile your code to have different executables for different platforms. It’s a burden on the developer, but not on the user like asking the user to install a third party engine would be.

            • Duffy says:

              Well that’s an important decision to make when choosing the language and tools to use on a project. Sometimes it’s better to burden the user than the developer and sometimes it’s vice-versa.

              Depending on the size and scope of the project it might be prohibitively expensive (in either time and/or money) to port your code for another compiler, but it’s trivial and easy for a user to install the machine specific interpreter.

              If the application is fairly small porting your code is much more manageable and you can have the luxury of creating custom executables for all the desired target Operating Systems.

              Things get even more complicated when you get into web-hosted solutions. Those with an interpreted-language backend allow you a lot of flexibility in what server configurations you can use. Depending on your product that could be very important.

              What you seem to really be pointing out is that developers should ensure they use the right tool for the job.

            • guy says:

              It’s still an issue on numerous levels.

              First, it’s much more convenient for everyone if you only have to distribute one executable. Would you want to have to make sure you’re buying the Mac disk instead of the Windows disk? And it would mean the developers have to compile it on numerous different platforms, which in turn requires that they have them, which can be an issue for smaller developers.

              And that’s assuming the compiler handles everything for them. If they have to directly interact with system functionality to get a display window and such, they’ll have to actually rewrite and retest the code for every platform, which can rapidly get expensive.

              Or they can write everything in Java and never deal with any of that. And then the user goes and gets a third party engine once and they can run everything written in Java just fine.

              The downside is basically that the JRE has to do all the platform-dependent stuff on the fly, so it is slower. They have various fancy tricks to speed it up, but it’s still never going to match compiling it into the platform machine code ahead of time.

    • Mephane says:

      C is bare bones to a fault while C++ is an ugly, ugly language

      This statement is puzzling. You do know the languages, so you should know that you can totally write code that looks like C in C++, while still using features like RAII instead of manual resource handling.

      • NotDog says:

        The only real thing C is missing, besides clean memory management, is standard libraries for collections. C++ has those libraries, but you have to employ the ugliest syntax possible if you ever want to use lists.
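        For what it’s worth, the “ugly syntax” complaint mostly describes pre-C++11 code, where you had to spell out the full iterator type; C++11’s range-for removes most of the noise. A quick sketch of both styles (an editor’s illustration, not NotDog’s code):

        ```cpp
        #include <cassert>
        #include <list>

        int main() {
            std::list<int> numbers;
            numbers.push_back(1);
            numbers.push_back(2);
            numbers.push_back(3);

            // Pre-C++11: the full iterator type has to be spelled out.
            int sum_old = 0;
            for (std::list<int>::const_iterator it = numbers.begin();
                 it != numbers.end(); ++it) {
                sum_old += *it;
            }

            // C++11: range-for does the same walk with far less noise.
            int sum_new = 0;
            for (int n : numbers) {
                sum_new += n;
            }

            assert(sum_old == 6 && sum_new == 6);
            return 0;
        }
        ```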

    • Kian says:

      I don’t think I saw an answer to your complaint about why you need a runtime environment installed for languages such as Java or C#.

      There are a number of reasons. Essentially, these languages offer complex facilities. For garbage collection to work, for example, the runtime needs to be aware of what your program is doing. Garbage collection is an entire other program that runs under your program and is updating references and moving your memory around for you.

      To give you network services, the runtime needs to talk to the OS on your behalf and implement the entire network stack. To create windows it has to have an entire rendering component that handles that.

      So say you wanted to have a stand-alone executable. Every program written in that language would have to duplicate all those facilities, and then compete for the resources. A simple “Hello World!” would have to load all sorts of things that you don’t really need but that the language establishes should be there. You could cut back on some of that by doing static analysis, but that makes the compilation longer and more complex.

      By having a common runtime for all the applications, you cut down on the size of the executables, simplify compilation (or remove it altogether), and you can take advantage of economies of scale on shared resources. For example, if you have many Java programs running on a single machine, the garbage collector can probably manage the lot more effectively than if each one had its own independent garbage collection.

      Those are some of the reasons I imagine you need to have a runtime in managed languages.

  15. Zukhramm says:

    How do you tell a C++ programmer and a C programmer apart? If they don’t make any distinction between C and C++, they’re a C++ programmer!

  16. Nyctef says:

    I thought writing the game engine in C and then using a higher-level scripting language on top was the rule rather than the exception these days, although I could well be mistaken. WoW has Lua, EVE uses a Python variant, and so on.

    I think my favourite scripting language I’ve seen recently is Limit Theory’s LTSL (unfortunately the forums seem to be down at the moment, but it’s easily google-able). It’s a LISP variant, but where parentheses can be replaced with indentation, so it ends up looking a lot cleaner and easier to read than LISP while still being easy to parse and interpret. It looks like he’s using it to write shader-like programs, so I assume it’s relatively fast.

    • C (or C++) executables/libraries with Lua to handle the scripting is probably one of the most common combos.
      Now, I have not actually done any Lua scripting yet, but if I were to recommend anything I’d recommend a C and Lua combo.

      Though some seem to prefer Python over Lua, so you may see that with some game engines.

      Both C and Lua have a huge amount of books/references/examples on the net, so as a programmer you are rarely stuck even if you find yourself without a mentor.

    • TehShrike says:

      This is my understanding as well.

      This is even what Unity does: they give you their monstrous engine written in C++, and you script all your own code in a .NET language.

      • WILL says:

        Until they release their new compiler (whatever the fancy name is), though, the performance of scripting isn’t particularly great. Moving transforms around has a surprising overhead.

        • Kyte says:

          The main problem is that Unity is still using an incredibly old version of the Mono compiler, due to licensing issues. It’s missing a million and one features, bugfixes and optimizations that both Microsoft’s official .NET compiler+runtime and Mono’s own have had for ages.

    • Alexander The 1st says:

      “I thought writing the game engine in C and then using a higher-level scripting language on top was the rule rather than the exception these days, although that could well be mistaken. WoW has Lua, EVE uses a Python variant, and so on.”

      WoW and EVE are PC games though – if you go onto consoles, the higher-level scripting components are probably re-written, or used with a Lua/Python compiler that compiles down to C/C++, to then compile down to machine code for the console.

      I mean, that’s my immediate guess – after all, IIRC, Minecraft on Xbox was re-written in C#, in part because you can use C# there.

      Or in other words, the reason you use C/C++ as a backbone is because it’s the one part you’re probably going to need to do the least amount of work to get up and running, before even doing platform-specific changes to get the non-working parts up and going fine.

  17. Daimbert says:

    I think the issue is that the various components of a game — or any major piece of software, for that matter — have different purposes and so have different requirements, and if you try to make one language that can fulfill all of those requirements you end up with a language that won’t do any of them all that well. The biggest problem is that software projects all focus on having one language doing everything, and it’s often the coolest one that wins, or the one that most people know that wins. But that usually means that the language does some things really, really well, and some things really poorly.

    I’ve worked on multiple projects that heavily used different languages for different parts of it. I had one with a Java GUI, a C legacy GUI, and C/C++ servers. I had another one with a Javascript/HTML front end and a C++ back end. One of the big gripes about some other projects was that they insisted on using one language — again, usually the new and cool one — for everything, and it couldn’t do everything well. One advantage that C/C++ has is through various libraries and mechanisms it can generally do pretty much everything reasonably well … even interact with other languages (Java might be better at that than C/C++ though).

    So writing one language for gaming won’t work because USING only one language for gaming is probably a bad idea. What you want are multiple languages all used in their appropriate places in the software.

  18. Exasperation says:

    Shamus, re: the following tweet:

    “These Victorias Secret ads make me wonder if somewhere out there, someone is working on some kinda “real-time photoshopping” for live video.”

    People have been working on this sort of thing for a while. Here’s a video from 4 years ago.

    • Makes you wonder what the tech is capable of now, 4 years later.
      Imagine video evidence in court: “The court can clearly see the accused is not as fat as the man in the video.”
      Odd that CSI or one of the other cop shows hasn’t touched on “photoshopped” video as an episode plot yet.

      They are certainly good at catching faked audio. “See, if I isolate these frequencies, there’s a cut/change in the background noise here.” Really now? You can download room-noise recordings on the net, then simply filter out the room noise in the original and replace it with the fake room noise, and voila, any “splice” will sound seamless. Unless… they’ve got a database of typical room noise in their lab computer… shit, I think I gave the writers an idea now.

      But yeah, that Baywatch muscle boost was creepy. That being said, it was not high-fidelity video, but with improved algorithms and modern hardware I doubt that is an issue now.
      Is the new Terminator movie going to use some tech similar to this for Arnold’s T-800, I wonder?!

  19. Cuthalion says:

    I’m mostly tired of the elitism I see where people are like, “Java!?!?!? Have you lost your mind? If you really want to make a game, you should be using C++.”

    Not to mention that practically every game job posting wants a load of C++ experience.

    So, boo C++, out of personal spite.

    • Kyte says:

      The main problem is that Java was made for line of business applications. It’s very heavy and cumbersome for gaming applications. It’s why Minecraft uses so many resources.

  20. WILL says:

    I’ve talked to a few programmers from Ubisoft when they visited my university. They develop all their engines in C (maybe C++, not sure), but they have an in-house scripting language on top of that. The scripting language isn’t interpreted, though; it’s not like C# on top of Unity’s C++ engine, so it’s probably much more efficient.

    Seems like the way to go.

    • Kyte says:

      Note that C# is not interpreted, it’s JIT-compiled.
      More importantly, I wouldn’t trust any homebrewed solutions to be more efficient than dedicated counterparts. Ubisoft makes games, not compilers or interpreters, after all.

  21. Kian says:

    Full disclosure: I’m a bit of a C++ fanboy. It’s my job and it’s my hobby. So take what I say with a pinch of salt.

    I don’t agree that video games need a special language just for them, and I think it makes sense to use C++ for them. More so than some newer, hotter languages. Although game companies need to leverage the strengths of C++ properly to be able to take advantage of it to the fullest.

    First of all, video games are complex. As Carmack put it, video game programming is more complex than rocket science. In your typical AAA game you have a real time simulation of the player plus several AI, using many complex systems (weapons, vehicles, animations, etc), in huge areas that can be explored in three dimensions, feeding data to a real time rendering engine with photo-realistic graphics and surround sound, which has to page in and out gigs of data from storage, while communicating over the net to synchronize with a server that’s trying to keep dozens of other machines up to date. And all this has to run in uncountable combinations of processor, motherboard, memory, hard disk, audio, video and network components of varying quality.

    Complex problems require complex solutions. Simplifying the problem often comes down to creating higher level abstractions, and abstractions by their very nature are “leaky”. That is, they don’t apply everywhere all the time. They have edge cases and exceptions. An abstraction that perfectly models the underlying problem would be just as complex as the underlying problem itself.

    The way abstractions work is that you sacrifice flexibility for ease of use, and live with the fact that solving corner cases will expose all that complexity that you wanted to hide. And you better hope those corner cases aren’t right in the middle of your program’s domain, because your abstraction hobbled you for dealing with that kind of problem.

    And considering the breadth of domains that a modern game has to tackle (networking, processing, AI, rendering, etc), you don’t have a lot of room for abstraction. What works for networking is not going to work for rendering.

    And this is exactly why I think C++ is so good (and in particular, perfectly suited to game programming). C++ is meant to help you create abstractions. Instead of having a language that tries to abstract making games, deciding your trade-offs for you and making it impossible to do anything about it, you have the tool to create the abstractions you need, for each domain you have to tackle. All of which can talk to each other in a single system.

    Another way this is often put is that C++ is meant to write libraries (which is in essence what being a systems language means). Which exposes another strength of C++. C++ is meant to be easy for the user. The way it does that is that it loads the complexity on the library implementer.

    This is the typical example of “Hello world” in C++:

    #include <iostream>
    int main()
    {
      std::cout << "Hello world!";
      return 0;
    }

    This is pretty easy to follow, right? I think even someone who isn’t a programmer can tell that “std::cout <<” prints things to the console. And yet “std::cout <<” is an example of a specialized template of an overloaded operator, with a few more qualifiers that apply that I can’t remember. While the person who wrote “std::cout” had to think about all of that and what all those words mean, the user can simply use it.

    Of course, this requires that libraries be carefully designed, with consistent and clear interfaces. C++ enables you to write clear interfaces, but it doesn’t prevent you from writing terrible ones. The same can be said of any language, though: you can write ugly code in anything.

    Which leads to one of the trade-offs C++ had to make, which has earned it its reputation for being complex and overloaded, but which should also help game development: C++ is meant to be backwards compatible. This means that code written in the ’90s (nearly twenty-five years ago) can still compile, or at most requires some trivial clean-up to work with a more recent standard. This is a great win for companies, because it means that all their legacy code will still work and compile with new tools. And game companies should have a lot of library code that they depend on (back-end code shouldn’t be rewritten every other year, for example, and even game engines benefit more from incremental improvements than from rewrites).

    One downside of backwards compatibility is that you have to live with your mistakes. And there have been some of those. Keep in mind, C++ was a pioneer in many language features that were later picked up and polished by newer languages. They didn’t always get things right the first time. And when you’ve made a commitment to backwards compatibility, the first time is the only go you get; breaking changes have to be really worth it. This includes mistakes that C made, since C++ kept compatibility with it.

    The other downside is cruft. As you come up with new ways of doing things, you have to still support the old way of doing things. Which makes the language seem overloaded and complex. After all, it's the new language, plus the previous iteration, plus the one before that.

    In conclusion, C++ is awesome, and Shamus is a slanderer and a liar! (just kidding).

    • Zukhramm says:

      I’m not really sure how that is an advantage of C++. Pretty much all languages let you create abstractions, and at least Blow’s proposed language is not one with a bunch of game-specific abstractions built in.

      • Kian says:

        I’d need you to clear up what “that” refers to specifically. I’m guessing the use of abstractions?

        To clarify, saying that something is an advantage of the language does not mean that no other language shares that advantage, it highlights that not all do. That said, C++ is good at it because of two main reasons I can think of right now: It allows the implementer access down from individual bits and memory locations up to complex high level structures, and it allows the implementer to hide all that complexity behind a clear interface so the user doesn’t need to worry about it.

        Few other languages give you that kind of flexibility. Anecdote time: a guy that interviewed prospects for Facebook described how a common trend among Java programmers was that when asked to implement a list (single or double linked) in a language of their choice, they were usually stumped. There are a number of reasons why this may be so, and I don’t have experience with Java to say why exactly, but it may have to do with Java being too high level and hiding memory management.

        In exchange, you get a lot of work already done for you. Making a network service from scratch in Java is trivial, you just use the extensive network facilities already included in the language. In C++, there is no language support for networking. You need to use a library (either third party or whatever your host OS provides), or build it from scratch yourself. Which reminds me of a quote by Sagan: “If you wish to make an apple pie from scratch, you must first invent the universe”. C++ is a bit like that.

        But if you need a network service that behaves differently from the protocols Java provides, you’re going to have to do a lot of weird things to wrangle the system into performing as you need. In C++, you have access to the guts already.

        Another advantage I just remembered that C++ has (over C in this instance) is that it is much more expressive, thanks to templates. Generic programming, being able to create forms that apply to different types in a type-safe manner, is huge for making powerful abstractions.
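        To make the template point concrete, here’s a small sketch (an editor’s illustration; the clamp_to name is made up): one definition serves any type that supports operator<, and every instantiation is still type-checked by the compiler:

        ```cpp
        #include <cassert>
        #include <string>

        // One definition, many type-safe instantiations.
        template <typename T>
        const T& clamp_to(const T& value, const T& low, const T& high) {
            if (value < low) return low;
            if (high < value) return high;
            return value;
        }

        int main() {
            assert(clamp_to(15, 0, 10) == 10);       // works for int
            assert(clamp_to(0.5, 0.0, 1.0) == 0.5);  // works for double
            assert(clamp_to(std::string("m"),
                            std::string("a"),
                            std::string("z")) == "m");  // any type with operator<
            return 0;
        }
        ```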

        • guy says:

          Anecdote time: a guy that interviewed prospects for Facebook described how a common trend among Java programmers was that when asked to implement a list (single or double linked) in a language of their choice, they were usually stumped. There are a number of reasons why this may be so, and I don’t have experience with Java to say why exactly, but it may have to do with Java being too high level and hiding memory management.

          Actually, it’s probably because java.util.LinkedList already exists. Making your own in Java is actually pretty easy.

          Basically, Java has primitives and objects. int, double, boolean, char, and such are primitives. They get allocated on the stack, and when you pass them as parameters their values are copied over. Objects have a reference allocated on the stack and their contents allocated on the heap, and when you pass them as parameters the references are copied over. You can’t interact with the referenced data except by calling methods of that object.

          This lets you do a linked list pretty trivially by making a class that can hold a reference to objects of the same class, then storing a reference to the head of the list, with something to change that reference if you’re inserting or removing at the head. Add insert, remove, and traverse methods, and you’re done. Java will allocate appropriately-sized chunks of memory and do garbage collection for you.

          Since what you’re actually passing are just references, any change to the data will change it everywhere it’s referenced. Why yes, this does sometimes end very badly. But it’s also pretty handy.
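          The node-plus-head structure described above translates almost directly into C++, with raw pointers standing in for Java’s references; here’s a minimal sketch (an editor’s illustration), with the memory management Java would do for you handled by hand:

          ```cpp
          #include <cassert>

          // A node holds a value and a pointer to the next node
          // (the pointer plays the role of Java's object reference).
          struct Node {
              int value;
              Node* next;
          };

          // Insert at the head: the new node becomes the head.
          Node* push_front(Node* head, int value) {
              return new Node{value, head};
          }

          int length(const Node* head) {
              int n = 0;
              for (; head != nullptr; head = head->next) ++n;
              return n;
          }

          void free_list(Node* head) {
              while (head != nullptr) {
                  Node* next = head->next;
                  delete head;  // in Java, the garbage collector does this part
                  head = next;
              }
          }

          int main() {
              Node* list = nullptr;
              list = push_front(list, 3);
              list = push_front(list, 2);
              list = push_front(list, 1);
              assert(length(list) == 3);
              assert(list->value == 1);
              free_list(list);
              return 0;
          }
          ```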

    • Purple Library Guy says:

      “I think even someone that isn’t a programmer can tell that ‘std::cout <<’ prints things to the console.”

      Just for the record, I wouldn't have guessed that in a million years. For all I know it defines a new sexually transmitted disease.

      • Fnord says:

        I mean, you might make a case that C++ hello world is a little simpler than the Java version, because of the required object-orientation stuff. But if you think that it’s simpler or easier for “even a non-programmer” to understand than a Python (or Ruby or similar) hello world, you’re blinded by your language preference.

        • Kian says:

          I’m not making a point about “Hello world” being simpler than in other languages. That’s a silly and pointless argument to have. What I hoped to highlight is that even though many people dislike C++ because it is “overly complicated”, there is thought put into how that complexity is allocated.

          In the case of C++, it falls on the implementer of a class or library, not on the user. “Hello world” is an example of this principle. The std::cout “<<” operator would be complex to write: you’d need to know about overloading, templates, streams, locales, inheritance and a bunch of other things. But despite all the complexity behind it, using it is so simple that it’s one of the first things you learn. And not only that, you can even extend its functionality for your own types without needing to know about all that stuff.

          What this means is that on a large project, you need only a handful of people that are "experts", and the rest can use the abstractions the experts create to be productive.
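          As a concrete sketch of that extension point (an editor’s illustration; the Point type is made up): a single small operator<< overload plugs your type into the whole iostream machinery, and users of it never see what’s behind it:

          ```cpp
          #include <cassert>
          #include <sstream>
          #include <string>

          struct Point {
              int x, y;
          };

          // One small overload; the author needs to know about streams,
          // the user just writes "out << p".
          std::ostream& operator<<(std::ostream& os, const Point& p) {
              return os << "(" << p.x << ", " << p.y << ")";
          }

          int main() {
              std::ostringstream out;
              out << "origin = " << Point{0, 0};
              assert(out.str() == "origin = (0, 0)");
              return 0;
          }
          ```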

  22. Blake says:

    I watched that video months ago, and within a day I could think of heaps of instances of code I’d written in my past 7 years in the industry that would’ve been quicker and simpler with some of those changes.

    I can’t remember the exact points from the video, but I wrote a list of other things I’d like to see improved upon (or at least discussed too). Things such as:

    alloca() actually being properly supported instead of kinda sorta supported on most platforms by using funky assembly under the hood.

    Better support for platform-specific implementations of files; currently I have to go through a bunch of Visual Studio property pages and flag files I don’t want to compile on particular configurations, and adding new platforms is a massive pain. There has to be a better way of doing this.

    Have an interface keyword that works similar to virtual but works at compile time in a manner similar to templates so that we don’t need vtable lookups. (I can’t think exactly how to work it under the hood, but I’m sure with some time that someone could think of a decent way to do it).

    Make function pointer and data member pointer syntax nicer, I know a lot of experienced programmers who still have to look it up every time they use them.

    Adding functionality to define initial values of data members at declaration rather than in each of the constructors (which can lead to lots of code duplication and the odd error when someone hasn’t NULL initialised a pointer they thought they had).

    Multiple return values from functions. It’s a lot clearer than having a single return then a bunch of out arguments.

    And obviously everything he said about being able to tell the compiler which members to automatically delete or ref count would make things easier to read and cut down on unnoticed memory leaks and such.
    People could even discuss whether or not we need certain entrenched standards like case sensitivity. I’ve seen functions that had both a ‘filename’ and a ‘fileName’ variable declared at different points, and only one of them got updated when someone changed stuff later, and so on.
    I figure case sensitivity was probably originally for compiler speed, but I think the difference now would be so negligible that it’d be worth trying a language without it and seeing if that was a problem.

    Ultimately I agree with the premise that we could build a better general purpose language that simplifies things games programmers do every day, and that people should definitely keep the conversation going.
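    For what it’s worth, one item on this list did land in C++11: non-static data members can be given initial values at the point of declaration, so every constructor picks them up without duplication. A quick sketch (an editor’s illustration; the Widget class is made up):

    ```cpp
    #include <cassert>
    #include <string>

    struct Widget {
        // C++11 in-class initializers: any constructor that doesn't
        // explicitly set these members gets these values automatically.
        int count = 0;
        std::string name = "unnamed";
        const char* buffer = nullptr;  // no more forgotten NULL-initialization

        Widget() {}                   // count == 0, name == "unnamed"
        Widget(int c) : count(c) {}   // count == c, name still "unnamed"
    };

    int main() {
        Widget a;
        Widget b(5);
        assert(a.count == 0 && a.name == "unnamed" && a.buffer == nullptr);
        assert(b.count == 5 && b.name == "unnamed");
        return 0;
    }
    ```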

    • I tend to pass structures to a function and return an error code (any changes/data returned are done through the structure); that way all functions return either success (zero) or failure (a non-zero error code).

      And in cases where speed is an issue I don’t use a function at all, and instead inline the code that would be in the function.

      But yeah, calling conventions were odd for a while, especially on x86 Windows, where you have STDCALL, FASTCALL and CDECL.
      Luckily with x64 there is only FASTCALL, so linking with dynamic libraries or calling other code won’t be a minefield (i.e. “why the heck does this keep crashing?”).

    • Kian says:

      Just wanted to chime in that C++ can return multiple values from a function, of any types you want. You can either return a struct, which can be a little annoying, or you can return a tuple (which is in essence an ad-hoc struct). And to avoid the annoyance of having to write out the full tuple type when declaring the variable at the calling site, you have the auto keyword.

      So you declare the functions like so:

      std::tuple<double, char, std::string> get_student(int id);

      And use it like this:
      auto student = get_student(0);

      And now student is a variable of type std::tuple<double, char, std::string>, only without all the typing.

      Tuple reference: http://en.cppreference.com/w/cpp/utility/tuple
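      Filling that sketch out into a complete, compilable example (the get_student data here is made up purely for illustration); std::tie can also unpack the tuple straight into named variables:

      ```cpp
      #include <cassert>
      #include <string>
      #include <tuple>

      // Hypothetical lookup returning three values at once.
      std::tuple<double, char, std::string> get_student(int id) {
          if (id == 0) return std::make_tuple(3.8, 'A', "Lisa Simpson");
          return std::make_tuple(0.0, 'F', "unknown");
      }

      int main() {
          // Option 1: keep the whole tuple and index into it.
          auto student = get_student(0);
          assert(std::get<1>(student) == 'A');

          // Option 2: unpack into named variables with std::tie.
          double gpa;
          char grade;
          std::string name;
          std::tie(gpa, grade, name) = get_student(0);
          assert(grade == 'A' && name == "Lisa Simpson");
          return 0;
      }
      ```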

    • Duffy says:

      As someone who predominantly works with SQL, the case sensitivity thing always screws me up for a bit when I switch back to front-end coding.

    • guy says:

      Case sensitivity can be nice. For instance it lets you declare List list and suchlike.

      But yes, if you’re using case sensitivity to distinguish between two things and it is possible to compile a program when you mix them up in a scenario where that changes functionality you have a problem. It’s not an issue in Java for the specific scenario I described because either you’re calling a static method (which will behave identically in either case) or it won’t compile if you use List.

    • Richard says:

      The first part is asking for a better IDE, or possibly something better than make/nmake.

      And I’d agree, Microsoft Visual Studio is a poor IDE for multi-platform programming – this isn’t a surprise though, because it was originally written to encourage people to program on and for the Windows platform – and only the Windows platform.

      The realisation that “Ah, now there are several Microsoft platforms” came later, and is part of the Visual Studio technical debt.

      There are other IDEs that handle multiplatform better and will happily use the MSVC++ compiler (and GCC, Clang et al) if you want, as well as replacements for make/nmake that have an entirely different set of limitations.

  23. Adso says:

    My personal take on Blow’s video was that he was too quick to dismiss Rust. Rust is a new language being developed at Mozilla which is focused on being a modern C or C++ equivalent, so it compiles to native code, requires no runtime, and does not have garbage collection. It also makes it very easy to call C from Rust and vice versa.

    Rust also has an unusual but quite powerful system where all memory management is determined by the compiler according to simple rules, so errors like segfaults or memory leaks are impossible. The mechanism used takes a bit of learning, but it’s not really complicated—just unusual. (There is an ‘escape hatch’ by which you can violate these rules, which might be necessary in some low-level code, but it’s designed as a last resort.)

    Blow mentions Rust but dislikes it because it’s a “big-idea language” and also has too strong an emphasis on safety and correctness, which he suspects would be too strong an impedance mismatch. I would nevertheless argue that it’s one of the more promising languages for future game development.

    • Zukhramm says:

      Blow’s argument against Rust was that segfaults and memory leaks weren’t a big deal so protecting against them just puts unnecessary constraints on the rest of the language/program.

      • Purple Library Guy says:

        He argued that memory leaks aren’t a big deal? I’m no programmer, but every damn time I hear about an update in some open source application I’m interested in, there’s talk about how they fixed a bunch of memory leaks. Every time I hear complaints about how some program is a hog that slows down your system, the allegation is that there’s a bunch of memory leaks. How can memory leaks not be a big deal, especially in programs like huge hastily-written games?

  24. Mephane says:

    My biggest fear, if any new language does come along, is that it will likely be garbage-collected and not support RAII. Yes, automatic garbage collection is a good thing, except for the cases where you want to couple certain functionality with a clearly defined lifetime of an object.

    (For the record, I consider RAII the single most important feature that C++ brought to the table.)

    (I’d also settle for reference counting as a substitute, since you can use it like RAII as long as you never hand out a reference or pointer to anything outside the scope to which you want to tie a resource.)
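
    For what it’s worth, here’s a minimal sketch of that idea (the `Connection` type and its flag are invented for illustration): a std::shared_ptr releases its resource exactly when the last reference dies, which behaves like RAII so long as no reference escapes the scope.

```cpp
#include <memory>

// Invented stand-in for some resource we want tied to a scope.
struct Connection {
    bool* closed_flag;
    explicit Connection(bool* flag) : closed_flag(flag) {}
    ~Connection() { *closed_flag = true; }  // "close" happens on destruction
};

// True if the connection closed exactly when the last reference vanished.
bool demo_refcount_as_raii() {
    bool closed = false;
    {
        std::shared_ptr<Connection> owner = std::make_shared<Connection>(&closed);
        std::shared_ptr<Connection> alias = owner;  // refcount: 2
        owner.reset();                              // refcount: 1, still open
        if (closed) return false;                   // must not close early
    }   // alias leaves scope, refcount hits 0, destructor "closes" it
    return closed;
}
```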

    • CJ Kerr says:

      I think it’s fair to argue that control over the GC is one of the features that would be necessary in any new language intended for game development.

      IIRC, it’s amongst the reasons presented by Blow for rejecting the existing C++ “successors”.

    • Kian says:

      I don’t think garbage collection is good, frankly. RAII is hands down the superior alternative. To repeat a point I heard in a talk by Stroustrup, garbage collection only manages memory. RAII is a general method for handling any kind of resource, be it memory, file handles, or whatever else your system needs to handle.
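
      To make that point concrete, here’s a hedged sketch (the file path and function name are made up): a std::unique_ptr with a custom deleter manages a C FILE* handle with exactly the same scope rules it uses for memory.

```cpp
#include <cstdio>
#include <memory>

// RAII isn't memory-specific: a custom deleter ties a C FILE* handle
// to a scope, so fclose() runs on every return path automatically.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
using FileHandle = std::unique_ptr<std::FILE, FileCloser>;

bool write_greeting(const char* path) {
    FileHandle f(std::fopen(path, "w"));
    if (!f) return false;                  // open failed, nothing to clean up
    std::fputs("hello\n", f.get());
    return true;
}   // f's deleter runs here, closing the file even on early returns
```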

      • Zukhramm says:

        He doesn’t argue for GC. In his video, what Jonathan Blow uses to argue against RAII is that other resources occupy a very small part of development, while the vast majority is about memory; thus a generalized tool for a generic resource is less useful than specific methods for handling memory.

        • Kian says:

          I’ll have to listen to it when I can. Still, RAII is amazing and if he argues against it, then I can’t help but think he is wrong.

          What RAII basically means is that when you acquire a resource, you have to assign the responsibility of cleaning it up to someone. And whoever you assigned that responsibility to will clear the resource when it’s done with it (if you chose a shared responsibility with reference counting, the last one to use it will clean up, etc). This is a design phase issue, and in fact it helps write clearer code because it means you had to put some thought into it before you start typing.

          It also provides deterministic handling of resources, which helps debug performance issues and make guarantees about the behavior of the program. Which is very useful when you need to make sure your frame is done within 1/60th of a second to avoid stuttering. I’ll have to listen to why he dislikes RAII. I’ve never heard of anyone that argued against it, and I can’t think of a reason why anyone would dislike it.

          • Duffy says:

            The only problem scenario for RAII that I can think of (working from high-level knowledge of the concepts rather than knowing how they’re implemented) is if you really need to manage your memory footprint very precisely inside a limited scope. Having finer control over when the memory is taken and given back could be important if you’re struggling to stay within some size constraint. RAII could make that sort of process unintuitive, though it would depend on how the RAII works under the hood, which I’ll again admit I don’t know the specifics of. I would think GC is probably an even bigger problem if you were working inside such constraints.

            • Kian says:

              If you have really tight constraints, RAII can also help you manage your resources effectively. You just engineer the solution differently. RAII gives you absolute control over when you grab a resource and when you release it; it’s up to the implementer to make the right choice given the appropriate tools. If you are working in such a system, I think it’s reasonable to expect the implementer to be an expert in the field, and you have tools like placement new and custom allocators to manage your memory however you want. RAII is an approach to solving a class of problems, not a fixed solution itself.

              And if it really doesn’t fit, that’s cool. You can fall back to pure C style at no cost from within the language and not use it, and if you need to be even more specific you can drop down to assembly directly. Although the number of people who need to do these things should be vanishingly small.

          • Zukhramm says:

            I’m not using C++ so I don’t know much about it, but his argument seemed to be that there’s not much need for it at all. Using non-memory resources is such a small part of his programs, and one that doesn’t cause a lot of problems, so what’s the point of having some specific feature for handling those? The only resource he cares about handling is memory.

            Note that these aren’t my arguments, just what I’ve heard from Blow’s videos.

            • Mephane says:

              Files are probably the most commonly accessed resource right after memory. What would you rather do: put a std::fstream in your function scope, or design the entire function around the fact that you must, at all times, remember to close the file before returning? Without RAII, exception safety and handle leaks become a serious risk.
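
              A small instrumented sketch of that risk (the Handle type is a stand-in, not real file code): the manual version leaks when an exception cuts past the cleanup line, while the RAII version cleans up on every path.

```cpp
#include <stdexcept>

// Instrumented stand-in for a file handle so leaks are countable.
int open_handles = 0;
struct Handle {
    Handle()  { ++open_handles; }
    ~Handle() { --open_handles; }
};

// Manual style: an exception between "open" and "close" leaks the handle.
void manual_style(bool fail) {
    Handle* h = new Handle();
    if (fail) throw std::runtime_error("oops");  // whoops: h leaks here
    delete h;
}

// RAII style: the destructor runs during stack unwinding, so no leak.
void raii_style(bool fail) {
    Handle h;
    if (fail) throw std::runtime_error("oops");
}
```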

              • Zukhramm says:

                Being the second most common doesn’t matter much if the second is way behind the first.

                I don’t really see much problem with having to manually close. Sure it’s nice not to, but it’s not a major issue. Especially if you’re in a language without exceptions.

          • Ingvar M says:

            There are interesting corner cases where the C++ allocation/deallocation model simply doesn’t work, which pushes you back into manual new/delete or not-sufficiently-intelligent reference counting. These cases tend to be ones where circularity in your data structures is useful and falling back on weak references really isn’t a solution, since that can end up with parts of your data being deallocated before they’re actually out of use.

            In, say, Common Lisp, the “with-” idiom is frequently used for resources that need to be finished off somehow, in a predictable way, at a predictable time.

            So if you need to open a file (and ensure it’s closed once the code block is done), you do something like:

            (with-open-file (the-file "and the name goes here")
            ;; Insert code here
            )

            And when execution passes back out of the block, the file will be flushed and closed. There’s now a similar thing in Python (although implemented in a completely different way).

            Also, I thought the basic idea behind RAII was that you should never “first allocate, then initialize” but rather do both at once. The deallocation of the allocated object is (again, as far as I understand it) completely orthogonal to RAII.

            • Kian says:

              What you described about Common Lisp is literally what C++ has always done. RAII stresses initialization with allocation because in C++ deallocation can then happen automatically without further issue. Match the lifetime of the resource to the lifetime of the object and you know exactly when it is acquired and when it is freed. A more detailed description is here: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization but to summarize:

              C++ objects (classes, structs) have what are called Constructors and Destructors. You can specify what they each do. When you declare a variable of some type, the constructor for that variable is called. When that variable goes out of scope, the destructor is called. This is guaranteed to ALWAYS happen, even if you throw an exception. The stack unwinds, and destructors get called along the way, freeing your resources.

              Your lisp example, in C++, looks like this:

              #include <fstream>
              void SomeFunction()
              {
                std::ofstream theFile ("filename"); // We open (or create) a file named "filename"
                theFile << "Let's stick some text into the file.";
              }
              // Bam, theFile goes out of scope, destructor is called, file is flushed and handle released.
              // (Note: std::ofstream rather than std::fstream, because fstream's default
              // in|out mode fails to open if the file doesn't already exist.)

              This is the proper way to handle resources in C++. Exception safe, automatic, clean. At least, it’s this way for automatic storage. When you need something that has to leave the scope, you should use smart pointers. The greatest feature that C++11 brought is “move semantics”, which enabled the language to have effective smart pointers. But explaining that wouldn’t really fit in a short explanation.
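
              Since that explanation got cut short, here’s a hedged one-screen sketch of the idea (function names invented for illustration): with move semantics, ownership transfers instead of being copied, so a unique_ptr can leave a scope without any risk of a double delete.

```cpp
#include <memory>
#include <utility>

// Ownership moves out to the caller; nothing is copied or double-freed.
std::unique_ptr<int> make_value() {
    auto p = std::make_unique<int>(42);
    return p;  // moved, not copied
}

bool demo_move() {
    std::unique_ptr<int> a = make_value();
    std::unique_ptr<int> b = std::move(a);  // a hands ownership to b
    return a == nullptr && b != nullptr && *b == 42;
}
```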

              • Ingvar M says:

                You know, I actually know C++ and I still find the allocation/deallocation of C++ less than ideal. Give me predictable GC every day and let me have a construct that closes/deallocates things early if that makes semantic sense.

                But, then, I’m used to being able to return a closure closing over local variables and in C++ that requires quite a dance (or an intermediate object).

                • Kian says:

                  I’m not sure what you mean by “a construct that closes/deallocates things early if that makes semantic sense.” If you want to limit the lifetime of an object, you can use an enclosed scope:

                  #include <fstream>
                  void SomeFunction()
                  {
                    int someVariable = 0;
                    // Do some work
                    {
                      // Open scope, someVariable is visible here
                      std::ofstream theFile ("filename"); // Declare a variable
                                                          // in the enclosed scope
                      // Do stuff with it
                    }
                    // Exited the scope, theFile destructor is called
                    // Do whatever else you need
                    // theFile doesn't exist here, someVariable does.
                  }

                  Or give the class a method that cleans up the resources if you need it done early. For example, you can call std::fstream::close() at any time to close the file.

                  I don’t see how you could ever have a predictable GC. As far as I understand, that’s impossible by definition. On the other hand, reference counted pointers behave kind of like that.

                  • Duffy says:

                    That random extra scope is very misleading; knowing its function in this case relies on inferring some possibly non-standard underlying rules. Until this discussion it never occurred to me to use scope like that. I would categorize it as clumsy, but at least it’s fairly simple.

                    • Kian says:

                      How is it misleading? That is the whole point of using a block within an enclosing scope. Automatic variables have scoped lifetime, and you can create scopes within scopes to handle that lifetime as your needs dictate. It’s part of the fundamental syntax of the language.

                      As far as I know, scope rules have always been part of the language. The behavior is clear and well defined according to the language specification. It’s an example of using a language feature for the exact purpose it was defined for.

                      We’re not even talking about an obscure language feature. Scope rules are as fundamental as knowing you have to declare a variable before using it. Things only exist within the curly braces they were declared in, whether those curly braces belong to a function, an if clause, a while or for loop, etc.

  25. Smejki says:

    It might not be that people are arguing with the title, but rather that they simply don’t like it when a title is not in line with reality or with the points in the article itself. These practices are mainly used by bad journalists as a means of drawing attention to otherwise standard text (particularly popular in tabloids). ;-)
    Anyway, good article, and thank you for the video link. I’ll take a look.

  26. kdansky says:

    The problem with Blow’s video is that he makes it abundantly clear in the first few slides that he has not done his research. D fits his requirements to the letter (RAII, optional GC, modern syntax, fast compilation, great performance), and then has a few super useful features on top of that, such as easy compatibility with C and C++ libraries, powerful meta-programming, and a good community.

    But he ignores all that, spouts some nonsense about how D is “too similar to C++” without ever specifying what that’s supposed to mean, and then goes on and creates a language without the least care in the world for important problems. That’s how Javascript was made, and look how horrible that turned out.

    The only bad things about D are that it isn’t finished (though it’s further along than his abomination), and that it’s quite annoying to google for “D”.

  27. Purple Library Guy says:

    So might it be useful to do some work in Vala or something like that?
    Vala seems like a nice compromise in that it’s apparently a relatively easy-to-work-with kind of language, but then it generates C code (which in turn then gets compiled).
    So like, if you wanted to do some tweaks that can only be done in actual C, you could mess with the generated code or write pieces directly in C. But most of the work could be done in the easier Vala language.

  28. Wide And Nerdy says:

    You could write weeks of columns on mods and mod support and I’d read them.

    Looking forward to this. I’m a web developer who can’t even credibly say he understands javascript* (so much time fiddling with making things work in Sharepoint). JQuery makes me lazy.

    *Mainly it’s all the browser-specific stuff I don’t get, though I’ll admit I haven’t found a use for creating my own objects yet, which I know means I’m doing something wrong given the complexity of my projects.

  29. Kian says:

    Relevant and topical: a talk by a Ubisoft Montreal C++ developer at CppCon 2014, given a bit before Assassin’s Creed Unity came out.
    https://www.youtube.com/watch?v=qYN6eduU06s

    One of the slides:

    “Big Games
    * Assassin’s Creed Unity:
    – 6.5 M C++ LOC for entire team code.
    – 9 M more C++ LOC from outside project.
    – 5 M C# LOC.”

    LOC = Lines Of Code

    Missing from the slide: * No faces.

  30. Neko says:

    The thing is that there’s nothing wrong with C++ being old, per se – it’s not as if it hasn’t been maintained for years, it’s still getting updates and improvements. I haven’t really gotten into the new standards yet but there’s all sorts of nice pointer types baked-in now that you used to need Boost for. And there’s new C and C++ libraries to do Cool Stuff popping up every day.

    I think the only “problem” with it, very generally speaking, is that as a compiled language you need to build a different binary for each platform you’re targeting, which can be a bit problematic in today’s world of various mobile devices with various differing architectures. It’s also its strength that you can come up with machine code for that specific platform, but intermediate bytecode languages can get pretty decent speeds these days without going all the way down to the metal, without sacrificing portability.

    But yeah, my point is, C++ is old and wise, not old and frail.

  31. arron says:

    I know several programming languages, mostly from hobbyist games programming and from the science work that I do. I have also done assembly language programming for the Z80, 6502, 68000 (Atari ST) and the 8086. I mostly use C++ for the stuff I do these days and have written games in it, but it’s got serious issues for games programming. I did think that a cleaner language like C# would be a better choice, coupled with good support libraries and a lightweight scripting language such as Lua for mods.

    I’ve recently been looking into cross-platform programming for gaming using HaXe and the support libraries OpenFL and Flixel, which take the hard work out of maintaining a multi-platform codebase. It’s not there yet, as there are various incompatibilities between the platform support and modern versions of the platform tools and libraries. Unity does the same job, but it’s Windows-only and most of my machines are Linux-based.

    If you could write the basic engine in a cross-platform language and then apply platform specific changes as required, this would remove the need to keep rewriting the same code in different languages for different platforms.

    One factor that is also important (and missing from the discussion) is support libraries. Ideally, you’d have one API that you would write code against, and this would translate to the various platform libraries for sound or graphics (such as OpenGL or DirectX). If a new graphics standard came out, you’d add support for it to the library and alter the API if required, hopefully without breaking existing code (too much). I think this is the future, myself.

    I don’t think the major issue is having the perfect games programming language, but rather an integrated system that allows you to realize any game you want on any platform you might want to support.

  32. Vorpal Smilodon says:

    Duuuuude! You can’t just drop all those questions without answering them! I suspect they are planned for future articles at least? Sheesh.

    • guy says:

      1. People make better drivers that do more things and then new games use the better drivers. Also they may do the same thing better and then new games are designed to the performance constraints of the better drivers.
      2. I assume it’s easier to program. Unless you’re going to dump the entire game state to hard drive (do not do this, very soon your saves will be bigger than the executable) you need to decide what to store and what to throw away. If you have a checkpoint, you don’t need to store nearly as much world information, because it can be determined from the checkpoint used. Also, if they’re well-positioned, stuff like enemy locations and health can be thrown away without too much trouble. Remember that hilarious Skyrim shopkeeper exploit? That’s because something didn’t get saved that should have.
      3. They have very flexible and powerful toolsets available to users. Most other games don’t.
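
      A hedged sketch of point 2 (every field name here is invented): a checkpoint save only needs the checkpoint ID plus a few small deltas, because everything else can be rebuilt from the checkpoint itself.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Invented save format: respawn point plus small deltas, not the whole world.
struct SaveGame {
    std::uint32_t checkpoint_id;                  // where to rebuild the world from
    std::uint32_t player_health;
    std::vector<std::uint32_t> opened_chest_ids;  // tiny delta list
};

// Bytes needed for the payload: a handful, not the whole game state.
std::size_t save_size(const SaveGame& s) {
    return sizeof(s.checkpoint_id) + sizeof(s.player_health)
         + s.opened_chest_ids.size() * sizeof(std::uint32_t);
}
```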

  33. gtb says:

    I’m not trying to be mean here but I really need sub-titles for this guy.

    thub-titles.

  34. Kian says:

    So I went and made a bit of time to listen to this guy, and so far (45 minutes in) I’m not impressed. I don’t share many of his opinions, but I think two in particular are terrible (and he himself calls them the more important part of his presentation): his views on RAII and exceptions.

    33:20 “No such thing as a resource” – I don’t know what to say about this. It’s just so wrong. But I kind of see why he thinks so, he’s too used to thinking at a low level. He can’t abstract away the computer running the software. He doesn’t see a programming language as a tool to describe a complex system, he sees it as a way to tell a computer what to do. So he doesn’t understand why RAII is so useful. He thinks the program’s job is to fill memory, he doesn’t understand that memory is merely a tool we need to represent more abstract ideas. That you have to build on abstractions to make more powerful, more useful abstractions.

    Sure, your program allocates memory, but that memory, while being a resource itself, can also represent resources that themselves have to be opened, used, and closed. For example, perhaps you have a “Treasure Chest” in your game. That chest is a resource, because you want it to be opened, only allow the player to interact with it while it’s open, and then close it, and not allow the player to use it after it’s closed. But if you only think in terms of the computer, you might not realize that that chest is a resource. So you think, “I don’t have many resources, I have three”.
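
    That chest can be sketched as an RAII type (all names invented for illustration): construction opens it, destruction closes it, and interacting with it outside that window simply isn’t possible.

```cpp
#include <stdexcept>

// The chest "resource": open on construction, closed on destruction.
class TreasureChest {
public:
    explicit TreasureChest(bool* open_flag) : open_(open_flag) { *open_ = true; }
    ~TreasureChest() { *open_ = false; }   // closing can't be forgotten
    int take_gold() const {
        if (!*open_) throw std::logic_error("chest is closed");
        return 100;
    }
private:
    bool* open_;
};

bool demo_chest() {
    bool open = false;
    int gold = 0;
    {
        TreasureChest chest(&open);
        gold = chest.take_gold();  // only legal while the chest is in scope
    }   // destructor closes the chest here, deterministically
    return gold == 100 && !open;
}
```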

    34:16 “There’s a big misunderstanding here” – Yeah, we agree on that ;)

    39:00 “RAII exists because of exceptions” – Oh, and he’s against exceptions now! “Exceptions are silly”. I’m sorry, but everything he says is just so full of wrong and bad. But at least this is a common complaint, so let’s address it. It’s an unfortunate fact of life that computers exist in the real world and not in an ideal state free of limitations or imperfections. Memory is finite, processing power is finite, disk space is finite, and so on. So sometimes, when you attempt an operation of some kind, it doesn’t do what you wanted it to do. An example: you ask for more memory, but you’re out.

    There are two main ways you can handle these situations. One, you report the error to the function that asked for the operation: “Sorry, no more memory”. It’s then up to the caller to handle what to do about it. Two, you throw an exception, and control falls back to wherever it’s been determined what to do about it. If you didn’t set a handler for it, your program dies. Oh noes.

    The irony is that the same people who complain about how any line of code could throw will blithely ignore error codes anyway and just assume that every operation succeeded. It takes just as much effort to ignore exceptions as it does to ignore error codes, and it’s better for your program to die cleanly when an exception shows up than to keep on trucking after an error, messing things up until you run into a crashing fault.

    Handling error codes properly, however, takes a lot more effort than handling exceptions properly. The same line that could throw an exception would otherwise be calling an operation that can fail, so you need to check whether it did, then decide if you can handle that error yourself or need to cancel everything you were doing and return the error to your caller, and so on down the stack. With exceptions, you only need to think about error handling at the point where you can actually do something about it.

    With exceptions, you can write code with the assumption that everything is peachy, and trust that if a line of code is reached, every assumption up to that point is true. With error codes, that is only true if you and everyone else involved in your project was careful about checking for and handling errors properly. So which one really frees you up to think about the problem at hand, and which one just appears to do so?
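
    To put the two styles side by side, here’s a hedged sketch (a made-up port parser, not anyone’s real code): the error-code version makes every caller responsible for checking, while the exception version lets the happy path read straight through.

```cpp
#include <stdexcept>
#include <string>

// Error-code style: every caller must remember to check the return value.
int parse_port_errcode(const std::string& s, int* out) {
    if (s.empty()) return -1;               // caller must check this...
    *out = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return -1;  // ...and this
        *out = *out * 10 + (c - '0');
    }
    return 0;
}

// Exception style: errors jump straight to whoever can handle them.
int parse_port_throwing(const std::string& s) {
    if (s.empty()) throw std::invalid_argument("empty port");
    int out = 0;
    for (char c : s) {
        if (c < '0' || c > '9') throw std::invalid_argument("not a digit");
        out = out * 10 + (c - '0');
    }
    return out;
}
```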

  35. DERP says:

    Define “programming language”. I’m pretty sure this site was written in one.

    It’s like saying the English language is becoming too obfuscated, so let’s chop it in two.
