Good Robot #42: The Framerate Unleashed

By Shamus
on Feb 9, 2016
Filed under:
Good Robot

Good Robot has a problem. It’s a strange, goofy, inexplicable problem and I’m pretty sure (60%-ish) that it’s not my fault. Here is what’s up:

Our game is capped at 60fps. That’s fine, except the cap isn’t self-imposed. Oh, I have a frame-limiter in the game, but it doesn’t do anything. If I disable it, the game is still limited to 60fps. Even if I render nothing more than a blank screen, I can’t get the framerate to go above 60. Under those conditions, the framerate should be in the thousands.

Please enjoy this animated gif, which is NOT REMOTELY running at 60fps!

That’s not the problem. It’s certainly a curiosity, and it’s been on my long list of “mysterious stuff that bugs me” for a couple of years now, but it’s not really a threat to the project as a commercial product that will hopefully feed us someday. The more serious problem is that if you try to capture game footage through Fraps, Bandicam, or streaming software, the framerate drops to 30fps.

Note that it drops to exactly 30fps. It’s not like the game gets bogged down drawing robots and moving laser bolts around. My code continues to run nice and fast, but then some other system jumps in and puts on the brakes. As I said way back in part 4, the gameplay is tied directly to the framerate in Good Robot. If the framerate drops to half, then the game begins running at half speed.

This would be a major no-no in a AAA game (stuff like this is one of the reasons console ports go bad) but here I don’t think it’s a big deal. At any rate, it saves me a ton of complexity and headaches, and is one of the reasons I was able to accomplish so much on my own. And usually limits like this are a problem because they’re low. Someone ports a 30fps game to the PC and players want to run at 60fps, only to discover the collision engine / audio engine / game logic breaks at that speed. I seriously doubt there are lots of people who are going to want to under-clock the game because 30fps “feels better”. Which is to say: The game is locked at 60fps, and I doubt that’s going to cause a consumer revolt.

As I said earlier in the project, it’s pretty common to build projects using tools made by other people. Every single project does not need to re-invent the same wheels. Now, most projects use a third-party graphics / game engine. But I like to mess around with rendering on the polygon level, so I didn’t go that route. Instead, I used some smaller-scale stuff. I’m using OpenAL to handle audio. OpenGL to talk to the graphics hardware. SDL for talking to the windows gui.

That last one is kind of important. When you want to create a window (the thing you can move around, minimize, and resize on your desktop) you need to talk to Windows (the commercial operating system from Microsoft) and it usually takes a few hundred lines of super-boring boilerplate code to do that. Worse, that boilerplate code needs to be different for every target platform: Windows, Linux, Apple. This stuff is really annoying and ugly and every operating system has a slightly different way of doing things, so the logic for each platform will always be a little different from the others.

SDL fixes this by hiding all of that functionality inside of a black box. You just tell SDL, “Hey, I need a window that’s 1024 x 768, and I need that window configured so that I can render into it with OpenGL.” In about five lines of code you can accomplish what would require 100 lines if you were using the windows API directly. The reason for this simplicity is that a game doesn’t need 90% of the windows API. It doesn’t need to be able to drag-and-drop files, it doesn’t have a right-click menu, it doesn’t have a menu at the top, and it doesn’t have dozens of little sub-windows and dialog boxes like (say) a spreadsheet or Photoshop or whatever. All we want is a rectangle to put all our pretty graphics in.
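For the curious, here’s roughly what those five lines look like with the SDL 1.x API. This is a sketch from memory, not the actual Good Robot code, and most error handling is omitted:

```cpp
#include <SDL.h>

// Roughly the SDL 1.x incantation for "give me an OpenGL-capable window".
// (Real SDL 1.2 function names; error checking mostly omitted for brevity.)
bool create_game_window() {
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return false;
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);   // ask for a double-buffered GL context
    SDL_Surface* screen = SDL_SetVideoMode(1024, 768, 32, SDL_OPENGL);
    return screen != NULL;
}
```

All of the platform-specific window plumbing happens inside those calls.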

For the other coders out there: My project uses SDL 1.0, because when I began this project SDL 2.0 was still young and unfinished. Two years later, it’s purportedly pretty solid. I’ve actually got another project that runs using SDL 2.0 and it’s fine. The transition wasn’t particularly confusing or painful. Having said that, we’re not planning to migrate Good Robot to SDL 2.0. The game is way too mature and too close to launch to go mucking around with sweeping changes like that.

To me this image is basically a big to-do list. Smoke is too opaque. Dropped money is too large and opaque. Explosions are too much smoke, not enough fire, and they probably linger too long.

How it works is this: After I create my window, I can begin rendering. So I draw some robots and lasers and whatever. When I’m done I say:

SDL_GL_SwapBuffers ();
This tells SDL “Okay, I’m done drawing now. Show the result to the user.” Then SDL tells OpenGL to show the user the next frame of gameplay. I can then begin drawing the next frame. The problem is that SDL_GL_SwapBuffers() also seems to be “helping” me in the most unhelpful way.

If you’re trying to run your game at 60fps, then you’ve got just 16 milliseconds to finish everything. If you take 17, then you’ll miss the screen refresh. It’s a bit like missing the bus: You have to wait for the next one. Missing the bus by one minute won’t make you one minute late, it will make you N minutes late, where N is the interval between buses. In this case, you’ll be a whole frame late and your framerate will drop to (say) 30fps.
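The bus arithmetic is simple enough to write down. A sketch of the math (illustration only, not game code):

```cpp
#include <cmath>

// Effective framerate when every frame has to wait for the next "bus"
// (the vsync refresh). Illustrative math, not code from Good Robot.
double vsynced_fps(double frame_ms, double refresh_hz) {
    double interval_ms = 1000.0 / refresh_hz;         // a bus every ~16.67ms at 60Hz
    double buses = std::ceil(frame_ms / interval_ms); // miss one, wait for the next
    if (buses < 1.0) buses = 1.0;                     // you always wait for at least one
    return 1000.0 / (buses * interval_ms);
}
```

Finish in 15ms and you catch the next bus at 60fps; take 17ms and you wait out a whole extra interval, landing at 30fps.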

It’s actually more complicated than I’ve described it. (Isn’t it always?) But this is close enough for our purposes.

The problem is that SDL_GL_SwapBuffers() has taken it upon itself to enforce my framerate for me. Or maybe not SDL_GL_SwapBuffers() itself, but something inside of SDL_GL_SwapBuffers() is doing this. If I finish my frame in 15ms, it waits 1ms before returning control to me. If I finish in 1ms, it waits 15. No matter what I do, it won’t let me go faster than 60fps.

Which would be fine, except for the fact that the framerate goes down to 30fps when anyone tries to record or stream.

This is important. The odds are that none of the big famous streamers are going to play our game. But what if one did, and when they tried the game was awful and sluggish because it was running at half speed? It would turn a stroke of good fortune into a disaster and people would laugh at us because our 2D game runs like a butt. For small teams like ours, streaming can result in a huge boost to sales.

Wait for it...

  1. So we have this problem where framerate is cut in half. This is triggered solely by external programs, over which we have no control.
  2. I’ve made many other game-type things, and this problem only happens on my projects that use SDL. My old projects are fine[1].
  3. Whether it’s the fault of SDL or not, the problem happens inside of an SDL call where we can’t see what’s going on.
  4. This slowdown doesn’t make the game choppy. Instead, it makes the game feel slow, like constant bullet-time. That’s a fun-killer.

If worse comes to worst, I could use some kludge that will detect this case. If the game sees that the framerate is at 30fps and that SDL_GL_SwapBuffers() is devouring a ton of time, then it can simply skip drawing every other frame. This would counter the game running at half speed by forcing it to run at double. That’s a gross and clumsy solution, but it’s better than shipping a game in this condition.
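A sketch of what that kludge might look like. The detection thresholds here are invented for illustration; the real thing would need tuning:

```cpp
// Hedged sketch of the "skip every other frame" kludge. If swap is eating
// most of the frame budget and we've fallen to ~30fps, draw only on
// alternating ticks so the simulation still advances at full speed.
struct FrameSkipper {
    bool skipping = false;
    int  tick = 0;
    // measured_fps: recent average; swap_ms: time spent inside the buffer swap.
    // Returns true if this tick should actually be rendered.
    bool should_draw(double measured_fps, double swap_ms) {
        // These thresholds are made up for illustration.
        skipping = (measured_fps < 45.0 && swap_ms > 8.0);
        ++tick;
        return !skipping || (tick % 2 == 0);
    }
};
```

When the starvation clears, `skipping` falls back to false and every tick renders again.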

Still, a better solution would be to track down the cause of the problem.

Searching the SDL docs yields nothing.

Searching for the problem via Google yields nothing.

As an experiment, I try calling wglSwapBuffers () directly. This is the windows-specific[2] call. See, I strongly suspect that when I call SDL_GL_SwapBuffers() it turns around and calls wglSwapBuffers () for me. If you’ve ever heard coders talk about a “wrapper function”, then now you know what it is. It’s a function that simply calls another function. The reason for this is that it can hide the OS-specific crap for me. On windows it will call wglSwapBuffers (), but on linux it will call glXSwapBuffers (). I don’t have to memorize how every OS works and write special code for each one.
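Here’s a toy wrapper to make the idea concrete. The stub stands in for the real platform calls (which are named in the comments) so the example is self-contained:

```cpp
// Toy illustration of a "wrapper function". The stub below stands in for
// the real OS-specific call; the real names are in the comments.
static int swaps = 0;                         // counts how many swaps happened
static void platform_swap_stub() { ++swaps; } // stand-in for the OS call

void swap_buffers() {
#if defined(_WIN32)
    platform_swap_stub();   // real code would call wglSwapBuffers(device_context)
#elif defined(__linux__)
    platform_swap_stub();   // real code would call glXSwapBuffers(display, window)
#else
    platform_swap_stub();   // Mac and friends get their own call here
#endif
}
```

The game only ever calls swap_buffers() and never cares which branch it lands in.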

So, if the problem is in SDL, then going around SDL and forcing the refresh myself ought to fix the problem. Which it doesn’t.

So while looking exceptionally guilty, SDL is not our culprit. I try searching for this problem again, but looking for OpenGL answers instead of SDL answers. At the end of much head-scratching I find:

wglSwapIntervalEXT (0);

Now, I’ve never used this thing. Heck, I’ve never even heard of it. But apparently it asks OpenGL to handle the framerate cap for you. I’ve never had to use it in any of my projects, and I’m pretty sure the feature is not enabled by default. Which means you have to specifically request it. Which I haven’t.
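For reference, using it on Windows takes a little ceremony, because it’s an extension (WGL_EXT_swap_control) rather than a core function: you fetch the pointer at runtime, with a GL context already current. A sketch:

```cpp
#include <windows.h>

// wglSwapIntervalEXT comes from the WGL_EXT_swap_control extension, so it
// isn't a function you can link against directly; you look it up at runtime
// (a GL context must already be current for this to work).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disable_vsync() {
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);   // 0 = never wait for vsync; 1 = wait one refresh
}
```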

But sure enough, if I call this function and explicitly turn OFF the OpenGL frame limiter, the game is uncapped. On my machine, I get about 150fps. I have to enable my own cap to keep it at 60fps, because otherwise the game runs at super speed. Which is amusing. Briefly.
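With vsync off, my own cap becomes ordinary frame timing. A minimal sketch, assuming a millisecond timer (not the actual game code):

```cpp
#include <algorithm>

// With vsync disabled, the game caps itself: measure how long the frame
// took and wait out the remainder of the ~16.67ms budget. Sketch only.
double frame_cap_wait_ms(double elapsed_ms, double target_fps = 60.0) {
    double budget_ms = 1000.0 / target_fps;
    return std::max(0.0, budget_ms - elapsed_ms);   // 0 if we're already late
}
```

The game loop sleeps for the returned amount before starting the next frame.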

So why do I need to turn off a feature I’ve never heard of, never encountered before, and which should be off by default? My hypothesis is that SDL turns this feature ON for some reason. (Probably when you create your window.) So you have to turn around and turn it off again if you don’t want it messing with your clock.

With this fix in place, the game is stable at 60fps, and recording / streaming no longer causes problems.

So why did capture software cut the framerate in half? I have no idea. There’s all kinds of sorcery going on down in the lower levels that I don’t like to think about or mess with. That’s where balrogs live.

My guess is that the frame-capture was hitting at just the wrong moment and preventing a clean refresh. It’s like someone taking your picture just as you’re about to get on the bus. You have to hold still for a second, and by the time they’re done you’ve missed the bus and have to wait for the next one.

Hopefully someone will stream this game so I won’t feel like this was a waste.



[1] Well, the framerate is fine. They’re still unfinished prototypes with poor structure and lots of bugs.

[2] Meaning, it’s not directly portable to other operating systems.

There are now 92 comments. Almost a hundred!

  1. Drew says:

    “I stongly suspect….”

    Made me think of


  2. Cookyt says:

    (Without too much rendering knowledge) maybe the opengl call will tie your rendering rate to the monitor refresh rate, and using FRAPS causes you to miss that?

    Also, did you end up finding a cross-platform solution, or are you leaving in a windows-specific kludge? Seems annoying if SDL can’t manage this for you given that it’s that library’s main job.

  3. Paul Spooner says:

    Don’t worry Shamus! I’m sure a bunch of us in the comments will stream the game when it comes out! Won’t that be great! All your hard work will …
    Oh, wait…
    You meant someone IMPORTANT didn’t you?
    Um, I’ll just be over here then.

    • Wide And Nerdy ™ says:

      I mean, Campster, Superbunnyhop, PushingUpRoses.

      I’d say people at the Escapist, after all, they put Yahtzee’s game on the site (though of course Yahtzee is the big draw, aside from Honest Game Trailers). But maybe Shamus doesn’t get to play that card.

  4. =David says:

    I will gladly stream it to my three followers.

  5. Gothmog says:

    +1 for mentioning balrogs! :D

  6. Ilseroth says:

    I’ll stream it for a bit, granted I only have a few hundred followers, since I was active at the dawn of streaming (before Twitch was a thing) and stopped before it really took off. Nowadays I mostly only stream game dev stuff and the occasional speedrun when I am in the mood, but I used to stream Monster Hunter about 10-12 hours a day on average.

    I have a friend I’ll prolly gift the game to that averages around 400-500 viewers; sadly he’s probably the “biggest” streamer I am direct friends with, but if you considered gifting a copy to some of the bigger streamers they may go for it, if the game looks like something they’d like.

    One game in particular that generates a pretty loyal following and is also a 2d shooter style game is Binding of Isaac, granted the actual gameplay is pretty different but I think that if you pinpoint a few of the more popular streamers they are small enough that getting a review copy would probably be “cool” enough they would want to follow through with it.

    Only other real suggestion is, your game has a start and a finish and can be completed, seems to have a decent skill ceiling and a bit of randomnity; if you target a few high profile speed runners and offer them a copy (check out SpeedRunsLive), you could potentially get a stream that gets very dedicated to your game very quick, if they take a liking to it.

  7. Paul Spooner says:

    Shamus, I remember that modding support was a big feature early in development. With that in mind, two questions:
    Will we still be able to mod the game?
    Can you add mod support to control the frame-rate cap? Maybe even dynamically based on weapon or something?

    “they probably linger too long.” Add better terrain avoidance to guided missiles. Make guided missiles swarm (home on nearby homing missiles)

    • Shamus says:

      We have made no effort to prevent the user from modding anything. And all of the game data is text files, often with some built-in basic documentation.

      The GOAL was to support modding explicitly. There’s a directory called /core and all of the files are drawn from there. The thinking would be that we could allow the user to specify a mod directory of their own. When the game needed a file, it would look first in the mod dir, and then fall back to /core if the file wasn’t found.

      This is… probably not going to happen. Which is a shame, since 95% of the PROGRAMMING work is done. But this would need a lot of testing that we just don’t have time for. There’s no point in sinking a couple of days into finishing this feature, only to find out it doesn’t work in X cases, or results in crashes for some minority of people, or leads to strange behavior.

      I’m not ruling it out for good. But as I watch explicit modding compete with other features for time, it isn’t doing very well.

      Which means that “modding” would involve modifying / replacing files in /core. It works, but is not nearly as cool / convenient. Boo.

      • Primogenitor says:

        So really you would like the community to make a tool to enable/disable/merge mods? Like Fallout/Skyrim but for GoodRobot?

      • My suggestion would be to add modding support in a later patch and put it behind a .ini config flag.

        That way a mod management tool could “enable” mods for the game by changing that flag.

        The ini flag when set to true would simply make the game engine look in the “mods” folder first for a similar named file and use that (just as you said Shamus).

        Let the modding community worry about the rest. At the most basic, all they really need is a simple “override” folder of some sorts.

        Another suggestion (and much more advanced) is to also support a mod.dll: if the ini flag is enabled, the game will check whether mod.dll is in the root folder of the game and, if it is, load it.
        This will allow a mod manager tool to install a game hook without having to do nasty injection hacks.
        Now if the game has a scripting language like say Lua or similar, then executing code in the executable’s process memory via injection or a dll may not be needed (the scripting language may support dll loading, for example).
        But if there is no scripting language, then something similar to a mod.dll would allow a safer way for modders to hook into the game’s process memory.
        Some modders out there have to hook into the game via dinput or directx dlls which wrap/call the actual dll; things can easily go buggy when you do that (there might be slight changes in directx, for example).

        • Lanthanide says:

          The problem isn’t the functionality, it’s the testing of it.

          Shamus could just slap something together, do 5 minutes of testing and then say “supports mods!” only to have a flood of people complaining that it doesn’t work.

          Then he either has to abandon the idea and disappoint everyone, or spend more time fixing it.

        • Lachlan the Mad says:

          My somewhat similar suggestion: Shamus adds a “/mods” folder to the game directory. The only file in /mods is “mods.ini”, which consists of a comment explaining how to use mods and a flag along the lines of “mods_on=false”. In order to mod any aspect of the game, you copy the file from /core to /mods, edit it as you see fit, and then change mods_on to “true”.

          When the game boots, it checks the mods_on flag. If it’s false, it just loads files out of /core. If it’s true, it loads files out of /mods; if any given file is missing from /mods, it loads from /core instead (e.g. if someone has modded the file which controls bad robots but not the file that controls weapons).
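That lookup is only a few lines. A hedged sketch (the directory and function names follow the comment above, not the actual game code):

```cpp
#include <fstream>
#include <string>

// Sketch of the lookup described above: when mods are enabled, try /mods
// first and fall back to /core. Names are illustrative, not from the game.
std::string resolve_data_file(const std::string& name, bool mods_on) {
    if (mods_on) {
        std::string modded = "mods/" + name;
        if (std::ifstream(modded).good())   // does the modded file exist?
            return modded;
    }
    return "core/" + name;                  // fall back to the stock file
}
```

With mods_on=false (or the file absent from /mods) the game behaves exactly as it does today.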

          • Alexander The 1st says:

            Thing is, what if someone tries to mod the saving capabilities to add something in, or to work with their mod for loading things in (Say, a Ghost Replay Mode), and the core picks up that the mod has been added, but the file can’t be read because it’s not already formatted for the GRM mod?

            Without the extra testing Shamus laments not having the time to really fit in, it would likely just cause the game to not load, or to crash upon loading a save file, or cause the save file to corrupt for some reason?

            That, I think, is why Shamus isn’t *explicitly* supporting mods.

            • Bethesda handles this pretty well. The Engine will simply remove invalid references from the save file. So when the game is saved again it’s without those references.

              You’ll see this when you load a save game in Skyrim and it tells you that there is stuff referenced that is now missing.

              No reason why mods that add weapons or enemies could not work in a similar way with Good Robot.

              The game engine simply informing the user that shit is missing and it’s got no idea how to handle that so the references will be ignored and thus not saved.

          • I like that idea with the mods.ini and mods_on=false

        • Mephane says:

          Yepp, good idea. Separate “mods” or “overrides” directory, and then the game could just parse every .txt file there recursively. Yes, I know implementing and testing that is more than just “just”, but the effort would probably be reasonable and the feature indeed could be provided in a later update.

          But please use something within the game’s Steam directory, don’t put into that maelstrom of madness and chaos that is C:\Users\blahblah.

          • “But please use something within the game’s Steam directory, don’t put into that maelstrom of madness and chaos that is C:\Users\blahblah.”

            Well, the steam directory requires elevated access, right? Which would make editing/copying files in there tedious. And asking people to turn off UAC or similar is bad advice (as that reduces system security). If the game is installed on a D: drive then that is less of an issue.

            A user folder would not be too insane.
            In fact the Chrome browser can be installed without requiring elevation.
            The issue though is the path a user would navigate to is anything but pretty.
            The users documents folder could be used I guess.
            There is also the Documents\My Games\ folder usually used for save games.

      • mewse says:

        Might be worth checking out PhysicsFS (which, in my book, has to be the most poorly named thing in the history of absolutely anything, but is really quite spiffy if you’re willing to ignore its absurdly bad name); it’s a file system interface which has nothing to do with physics.

        You give it any number of “read” directories or zip archives (with a priority level for each) and one “read/write” directory, and you then use it to read and write all your files. It just treats it all as a single filesystem. Files you write go into the “read/write” directory, files you read will come from the highest priority out of the directories you specified.

        If your file loading/saving code is going through a generic interface already, it’s pretty trivial to redirect that interface through PhysicsFS, and you then have automatic support for loading from multiple levels of “mod” directory, loading from zip files, saving into a separate user directory, etc. All without your game code needing to know about any of it. Only problem is if you use libraries which do their own file loading by talking to the OS directly. In my case (and probably yours), this will be using things like SDL_image, which doesn’t go through PhysicsFS to load its pictures, and so won’t look for those files in the alternate directories, and can’t load them out of zip archives, etc.

        But if you’re actually loading your own texture files rather than delegating that to another library, then PhysicsFS might just drop straight into your project without any bother. Took me only an afternoon to put it into my (absurdly large) project.

      • SL128 says:

        Have you considered doing a Steam beta enabling it for people who would definitely want to try it anyway?

      • Paul Spooner says:

        Awesome. I look forward to writing a Python utility to mutate the settings every time the game is launched.

  8. Though I’m sure everyone and their mother have recommendations for their favourite streamer to approach about your game, and you’ve no doubt been inundated with such already…. Here’s my shameless plug for a streamer/youtuber to which I have no connection beyond enjoying their content.

    Northernlion both regularly uploads YouTube videos and hosts 3 hour twitch streams 4 times weekly. He has covered a lot of rogue-like/rogue-like-like games in the past (a 1 hour Binding of Isaac segment kicks off every stream due to his love of the genre) so it feels there is a chance yours may be well in line with the content he produces.

  9. Ilseroth says:

    Also, you said you had another project on SDL 2.0; care to divulge a bit about it? Always cool to hear about your different projects even if they don’t become anything serious.

  10. James says:

    SwapInterval is the v-sync control function; you might have your graphics drivers set to enable v-sync by default if you’re having to turn it off.

  11. Xeorm says:

    So, as someone who has done some rendering work, what it sounds like is this:

    Normally, monitors have a refresh rate. Going over this amount won’t do anything. By default, one of the options when rendering is to lock the framerate to the monitor’s refresh rate, which is likely 60 fps. Which is what’s happening here. Very nice if you’re, say, running a program that runs much, much quicker than 60 fps. Doing a simple program where I had fps up the wazzoo, you could hear the GPU whine, even though it was a simple graphical program.

    Anyway, a good streaming program will read the frame directly as it’s sent to the monitor. Most things are set up so that that’ll work fine. Now, this is a bit of a guess, but there are two things I can think of that were tanking the fps. The first is the streaming service being at 30 fps making the code think the monitor refresh rate was 30 fps itself. I know mine by default is 30 fps. The other is that the streaming program is asking for the buffer, and the buffer isn’t being sent to the monitor. Don’t think that’s the case, but it’d also work. Either way…

    Bad Shamus. Tying game code to the fps. Bad. Bad bad bad.

    • I have to agree. Now, the game logic running at 60hz is fine, but it should not be lockstepped to the refresh rate (which could be 59.94 or whatever fresh hell NTSC exists in).
      Or 50hz with PAL.

      So the game logic could run at 60hz but the FPS should be allowed to go from 15 to 144hz depending on monitor refresh rate.
      And vsync should be toggleable. (With vsync on in combo with Adaptive-Sync (VESA’s DisplayPort standard, used by AMD’s FreeSync) or Nvidia’s G-Sync you now get variable framerate without tearing; 144hz DisplayPort Adaptive-Sync monitors are finally out there now.)

      • Shamus says:

        This is a catch-all of everyone faulting the design I used:

        The trick here is that running the game at something other than 60hz introduces a massive increase in complexity. Now you’ve got to account for situations where, instead of one frame of activity, you need to process for a half-frame.

        “So skip processing if there’s not enough for a full update!”

        Okay, but then there’s no REASON to render another frame. If no processing has been done, then this frame will be identical to the last. Movement can’t be smoother unless we process user input. Moreover, how do we handle missed frames? What if the game can maintain 90fps but not 120? It will stutter between the two and the result is even worse than 60.

        “So do things like user input (that should be very lightweight) every frame and heavy things like particle updates at 60fps.”

        Ah, but now we have different parts of the system running at different speeds, which introduces all sorts of edge-cases and opportunities for bugs.

        It seems like such a trivial change. “Just unlock it!” But the truth is that it would create opportunities for bugs that might be fiendishly difficult to replicate, much less diagnose. Moreover, it would create bugs that are unique to hardware (like 120hz monitors) that I don’t have access to. It would also be difficult to future-proof this. It would suck to find out the game crashes on the 166.666hz monitors that get invented in 2018 for some unforeseeable reason.

        So yes, having a gracefully dynamic framerate would be awesome. I’d love to have that. But a small team has to know what’s important and where to spend their limited resources, and this would require a pretty major step up in program complexity and testing challenges to serve what is fundamentally a niche market. High risk, low benefit.

        • Draklaw says:

          As someone mentioned earlier, the SwapInterval controls the V-Sync. It effectively locks the framerate to a fraction of the screen refresh rate (so 60/30/20/15/12/… fps). Enabling / disabling it should be an option, because depending on people’s preferences and hardware one might be better than the other. In particular, V-Sync can be a problem for a game with a fixed fps, because the screen refresh rate is not necessarily a multiple of 60hz.

          Given that you are quite close to releasing the game, I agree it is not a good idea to rework the game loop to support arbitrary refresh rates, especially since you probably have more important things to do. But it might be something you wish to patch later (just like the mod support you’re talking about above). It’s easier than you think. In fact, you already have a good starting point.

          I suggest you read the article Fix Your Timestep! It advocates using a fixed timestep, just like you do. The rationale behind this is that it’s the only way to have deterministic behavior. I have seen many games that act weirdly if there is a big freeze with a variable time step. It can lead to a character going through a wall, for instance, because a big hang might just teleport the character to the other side of a wall.

          The solution is to keep the timestep of the game logic fixed at 60 fps, but unlock the framerate. Of course, you need a trick to avoid rendering the same image twice on a 120hz screen, for example, as it would be useless. Basically, you want to be able to render a frame between two “ticks”. The solution is simply to interpolate between these two states.

          It actually means that you have to know the transform of each entity at the two last ticks, and not only at the last one. This should not be hard to add, but from what I read about Good Robot, it might mean modifying code in several places. Once you have this, what is left to do is compute a value in the range [0; 1] that indicates where you are between the two last ticks and use a linear interpolation to get the right transform in the rendering code.

          The good news is that you don’t need to change anything in the game logic to unlock the framerate. This considerably reduces the chance of introducing crazy bugs. But as I said, it is still quite a lot of work and certainly not a priority at this point. However, if you ever write another game, I strongly suggest getting this right from the start.
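The interpolation described here fits in a few lines. A sketch, assuming per-entity positions and a millisecond accumulator (names are illustrative):

```cpp
// Minimal sketch of the interpolation step from "Fix Your Timestep":
// simulate at fixed ticks, render whenever, and blend the two most
// recent tick states by how far we are between them.
struct EntityState { float x, y; };

// 0 = exactly at the previous tick, 1 = exactly at the current tick.
float interp_alpha(double accumulator_ms, double tick_ms) {
    return (float)(accumulator_ms / tick_ms);
}

// Linear interpolation between the two stored states.
EntityState blend(const EntityState& prev, const EntityState& curr, float alpha) {
    return { prev.x + (curr.x - prev.x) * alpha,
             prev.y + (curr.y - prev.y) * alpha };
}
```

The renderer calls blend() with the alpha for the current instant; the simulation itself never sees anything but whole 60hz ticks.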

          • Geebs says:

            So, bear in mind that I’m quite thick, but it seems to me that the easiest thing to do is to use a fixed time interval for physics and just have the render thread poll that state when it wants to draw a frame – the game logic should need infinitesimal amounts of memory compared to the graphics, so just making a copy of the game state to pass to the render thread shouldn’t be all that complicated, like so:

            Render thread (tied to refresh): Dear simulation thread, I would like to render a frame. We can’t safely access the same memory at the same time, so can I just have a copy of your stuff?

            Simulation thread (locked to 60 Hz): Ok, here you go.

            Render thread: Thanks!

            I have this system in a terrible, broken, endlessly updated procedural/graphics project and it seems to work – to the point that I also have a GPU erosion system that feeds back to the simulation thread by a whole bunch of pixel buffers and the two still manage to stay in sync.

            Please feel free to let me know how wrong I am – I already know that most of what I do is Bad and Wrong, but it’d be nice to be able to quantify exactly how Wrong and Bad it is…

            • Draklaw says:

              I am not sure of what you mean. Basically, storing the state of the current frame and the previous one effectively boils down to copying the state of the last frame before updating it (or to swap some pointers if you want to avoid copies…). There is nothing complicated here.

              You still need to store the states at two different frames though, because the goal is to interpolate between them. Otherwise, if you render at 600 fps a game that is simulated at 60hz, you end up uselessly rendering the exact same image 10 times between two updates.

              Thinking about it, there is another option. If you know the position and the speed of your objects, you can also extrapolate their position… But it is approximate, so there is a risk of making animations look bad when objects follow non-linear trajectories.

              Oh, and when I wrote the above post, I was thinking about a single-threaded application. Note that it doesn’t hurt to parallelize, but it is not a requirement. I am not sure that Shamus uses multiple threads, and it is likely unnecessary for this kind of game.

            • “the easiest thing to do is to use a fixed time interval for physics”
              Ideally, yeah. It would perhaps be wrong to say the physics runs at 100fps; it’s more that the physics runs in 10ms slices of time, which would equal a perceived 100fps (or 100Hz).

              It’s too late/close to release for Shamus to do anything now it seems but maybe a future update beta patch could experiment with some tweak (and if it tests well it could be added to a later release update).

              It’s a shame this wasn’t taken care of at an early stage. I’m pretty sure I whined about it way back when Good Robot was nothing more than one of many of Shamus’ experiments, when making a finished game wasn’t even a plan.

              I know from experience that decoupling and doing multi-threaded pipelines with stuff that needs to be synced is anything but easy. Ideally it needs to be planned early on. If not, you’ll end up having to rewrite your main loop/pipeline, and you wouldn’t do that in the beta stage (usually in the alpha stage instead).

              So even if Good Robot v1.0 won’t have it, maybe v1.1 ? And if not that then maybe Shamus will have a rendering framework that does have that ability and will benefit future game projects.

              I was not so much critiquing Shamus as I was pointing out that having input & AI & physics & render/output all lockstepped to the same framerate is a design choice that should be avoided. Lots of old games did that and needed hacks or tools to slow down people’s CPUs so the games would not run too fast.
              One really should not repeat that with 60Hz+ monitors either.

              Monitor refresh rates above 60Hz are not uncommon, and it’s not just the 120Hz and 144Hz monitors out there. There are also 75Hz and 72Hz, which have been around for many years now. I think I’ve seen 80Hz or 85Hz as well.

              There is nothing wrong with running at 60fps internally as long as vsync can be toggled on or off and/or the game works with other refresh rates.
              Either there is a frame ready or not; if not, then repeat the last one (depending on the renderer this may be as simple as not swapping the buffer for that frame at all).
              If the game has frames ready faster than the monitor refresh/vsync (for example on a 50Hz monitor) then it’s actually acceptable to have frames be discarded; ideally a delta value would track this and save CPU/processing by skipping every nth frame.
              Another way to handle internal framerates is to take the idea from vsync, adaptive sync, and G-Sync and go for a multiple of the internal one. Switching from an internal 60fps to 120fps, or from 60fps to 30fps, will keep the timings (as they are straight multiples). I have no clue if this is what the big engines like Unreal do; they might just have more complex deltas instead. (You will still need to skip or repeat frames, but not to the same extent as when you stay at a 60fps internal framerate.)
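              The repeat/skip bookkeeping above can be sketched with a simple accumulator (a toy, not how any particular engine does it): at each display refresh, count how many whole simulation frames have become due — zero means repeat the last frame, more than one means some frames get dropped:

              ```cpp
              // Hypothetical pacer: sim_hz is the internal simulation rate,
              // refresh_hz the monitor refresh rate.
              struct FramePacer {
                  double sim_hz;
                  double refresh_hz;
                  double accumulator = 0.0;

                  // Called once per display refresh. Returns how many simulation
                  // frames are due: 0 = repeat the last frame, 2+ = drop some.
                  int frames_due() {
                      accumulator += sim_hz / refresh_hz;  // sim frames per refresh
                      int due = static_cast<int>(accumulator);
                      accumulator -= due;                  // keep the fractional part
                      return due;
                  }
              };
              ```

              A 30fps game on a 60Hz display then alternates repeat/new every other refresh, and a 120fps simulation on a 60Hz display drops one frame per refresh — the straight-multiple cases fall out of the same arithmetic.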

        • Mephane says:

          That said, what does happen when the refresh rate doesn’t align with the frame rate? I mean, my screen is allegedly 60Hz, some games read that out to something like 59.97Hz, when I use an FPS counter they typically flicker rapidly between 59 and 60 (i.e. I never see a solid 60 in any game), so I guess indeed my refresh rate is 59.97Hz?

          Now for a game that has a dynamic refresh rate I suppose it does not matter; no one will be able to sense a 0.03Hz/FPS difference. But what about games like Good Robot locked at precisely 60Hz? I guess among your testers you have already encountered this, so this is either a non-issue or solved in some specific way?

          Similarly, I guess you have tested the game on, say a 75HZ screen or a 120Hz screen and found no issue?

          And what does this mean for console ports that are locked to 30 fps? If the actual refresh rate is 59.97Hz and not 60Hz, it cannot simply render the same frame twice, so once in a while it skips that twice-rendering? Or something else?

          • There’s a ton of stuff written out there on this. Look for Telecine (I think).
            Basically it’s a weird scheme where every nth frame is either skipped or repeated so it averages out.
            Not sure if a graphics card (or its drivers) does the same or not.
            In some cases 60Hz may just be a rounding of the value 59.97.
            But if you see it varying between the two then it’s in fact using a delta of some sort.
            And using QPC (QueryPerformanceCounter) you can get more fine-grained deltas than using milliseconds.
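            As a rough sketch of that averaging (using the 59.97Hz figure from the question above — the helper name is made up), the arithmetic for how often a frame must be dropped or repeated looks like:

            ```cpp
            #include <cmath>

            // Hypothetical helper: if frames are produced at source_hz but the
            // display refreshes at display_hz, the surplus works out to one
            // extra source frame every 'n' display refreshes -- that frame is
            // dropped (or, if the source is slower, one frame is repeated).
            double refreshes_per_dropped_frame(double source_hz, double display_hz) {
                return display_hz / (source_hz - display_hz);
            }
            ```

            For a 60fps game on a 59.97Hz display that is one dropped frame roughly every 1999 refreshes (about every 33 seconds), which is why nobody notices.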

            Adaptive Sync and G-Sync (I think G-Sync was first with this, and Adaptive Sync drivers added it to AMD cards later) will, instead of dropping from 60fps at 60Hz to 30fps at 60Hz, do 30fps × 2 at 60Hz. Repeating a frame costs almost nothing processing-wise, and you avoid dropping the vsync rate, which now remains at a “steady” 60Hz.

            A note on PAL and NTSC (in the past at least): audio in movies was either sped up or slowed down when converted between the two, so an NTSC and a PAL release of a movie would have different pitches in voices. Luckily those days are over (I hope).

      • Ysen says:

        PAL regions aren’t stuck at 50hz anymore, unless you’re using an old CRT display. I live in a PAL country and my PC monitor – which is about 7 years old, mind – offers a choice of 60hz or 75hz. Modern TVs and console games also run at 60hz.

        • Ingvar says:

          With the right CRT tube, you’re not even locked to 50 Hz, or a nice multiple thereof. I’ve mostly had my CRT tubes running at 72 Hz for the last 18 years and I live in PAL-land.

          • Zak McKracken says:

            Also a PAL-lander here. In 2000 I got a CRT monitor that was comfortable running at 140Hz. Even before that, most graphics cards and monitors could be set up to run at _arbitrary_ rates! As long as the graphics card was not asked to produce more pixels per second than its DAC could make, and the CRT monitor not asked to make more lines per second (HSYNC) than it supported, any number was possible.
            This meant that the monitor above was happily running at 140Hz at 640×480 (3D games with shutter glasses!), but for 800×600 it was 110Hz, and for 1024×768, 85Hz was the maximum (or somesuch).

            It’s only with LCD displays that this choice is more limited (because now you have a data bus between the devices not an analog cable). But again, even the most basic LCD monitors have always had a choice of 50 and 60Hz, afaik

  12. Richard says:

    Well yes.
    In almost all OpenGL toolkits the default behaviour of the SwapBuffers() call is to block until vsync – aside from ‘raw’ OpenGL but nobody should go there*.

    There’s no point in rendering a frame nobody will ever see – or rendering one-and-a-half frames.

    So the idea is that you have a render thread like:
    while (!game_over) { draw_everything(); SwapBuffers(); };

    And an engine thread that goes at your steady ‘tick’ and doesn’t get bogged down if you have to drop a frame here and there.

    That way the render loop is in a thread that does nothing except throw pixels at the screen. Once it’s finished drawing the frame, it waits and allows everything else (the ‘game engine’) on the machine to go ahead and do its thing.

        Except of course when your engine logic is also in the render thread. Then it doesn’t work so well. But who does that? (looks askance and vaguely guilty)

    * Guess what I was doing today.

    • (nods several times) yep, decoupling game logic (AI and physics etc) from the frame rendering is key.

    • Abnaxis says:

      Isn’t that trading one coding headache for another though?

      I mean, I’m sure a coder of Shamus’ experience has done plenty of multi-threaded programming in his day, but it seems like separating threads out like this trades “timing problems” for “synchronization problems,” which are notoriously hard to track down.

      Plus, does it even solve the issue Shamus described in the article? I mean, now instead of “frame doesn’t render because thread is hanging on physics,” you have “frame doesn’t render because render thread is waiting for physics thread to finish updating resources”

      It all strikes me as a lot of effort for less code readability, more hard-to-track-down errors, and not a whole lot of benefit.

      • Richard says:

        Not really.
        I see it as explicitly reminding you what is actually happening. In my experience reminders of reality are useful.

        Take a step back and look at how graphics programming actually works in broad strokes.
        (For details, see elsewhere and some of Shamus’ earlier posts.)

        You have two computers.

        Your CPU.
        It’s got oodles of memory and has perhaps four cores that can each do very complicated things.

        Your graphics card.
        It’s got less memory and is made up of hundreds of specialised cores that are very good at pixel-pushing maths but pretty awful at anything else.
        The graphics card has two (or more) copies of the screen – the one currently being displayed and one (or more) “back buffers” that it can display instead.

        There’s a thin pipe between them (PCI-E lane)

        In your game, the CPU first sends the graphics card the textures, geometry, program code and other data.
        Each frame, some of that data is updated, and a stream of commands is sent into the pipe that the graphics card then (appears to) execute in order:

        “Draw into back buffer X” – GPU sets up an invisible buffer to draw into
        “Clear to Black” – GPU fills entire buffer with black
        “Use blue texture” – Blue texture selected
        “Use sphere mesh” – Sphere mesh selected
        “Draw” – Blue Sphere is drawn
        “Use green texture”
        “Use teapot mesh” – Teapot mesh selected
        “Draw” – Green teapot is drawn
        “Display this buffer at next VSync” – GPU waits until vsync, then displays your complete new image.
        “Tell me when you’re done”

        When the CPU says “Tell me when you’re done”, it goes to sleep and the graphics card will then wake it up when it’s finished.

        Most toolkits assume you’re doing it this way. They “Draw into back buffer X” for you, and have a single call that does “Display Buffer at vsync” and “Tell me when you’re done” in one go.
        – Often with some extra settings to decide details like how many vsyncs to skip, or whether to wait for vsync at all.
        (In some cases, deliberately using every-other vsync or similar gives a consistent, albeit lower framerate. It can also help create a “cinematic” style if that’s what you want.)

        There’s lots of other ways to do this.
        You could ask the graphics card “Are you done yet?” but not go to sleep.

        In theory you can skip “Tell me when you’re done” and just keep throwing commands into the pipe the moment you can – and hope it never fills and happens in a sane amount of time. That’s less bad than it sounds.

        You can also swap buffers right now – the GPU will then flip the buffer right then, causing “tearing” as the monitor is partway through being told about the image.
        – E.g. the top half shows the previous frame and the bottom half the new frame. (Other types of tearing are available.)

        One approach to render thread/engine thread is to have your own triple-buffered “game state”, where the render loop simply renders whatever is complete and newest at the start, while the engine uses the other two to keep on playing (in the dark).

        Many games choose to be single threaded (CPU) and update the game state once per frame immediately before drawing it.
        This approach means that a machine may spend a lot of time with the CPU sat waiting for vsync.
        This isn’t a problem unless there are occasional frames where it doesn’t quite have enough time from vsync to vsync to update and draw – but would have if the CPU hadn’t waited.

        If it wouldn’t have had enough time regardless, then nothing is lost – it can’t do that anyway.
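        The triple-buffered game-state idea a few paragraphs up can be sketched like this — a single-writer/single-reader toy where a plain int stands in for a whole game state, and the slot bookkeeping is the point; a production version needs more care around the ordering of the two atomic operations:

        ```cpp
        #include <atomic>

        // Three slots: the renderer owns one, the engine writes into one, and
        // the middle slot holds the newest completed state. 'fresh' marks
        // whether the middle slot has changed since the renderer last looked.
        struct TripleBuffer {
            int state[3] = {0, 0, 0};        // stand-in for full game states
            std::atomic<int>  middle{0};
            std::atomic<bool> fresh{false};
            int back  = 1;                   // engine's write slot
            int front = 2;                   // renderer's read slot

            // Engine thread: finish a tick, then publish it.
            void publish(int value) {
                state[back] = value;
                back = middle.exchange(back); // swap our slot into the middle
                fresh.store(true);
            }

            // Render thread: grab the newest complete state, or keep the old one.
            int acquire() {
                if (fresh.exchange(false))
                    front = middle.exchange(front);
                return state[front];
            }
        };
        ```

        The renderer always gets the newest complete state; if the engine publishes twice between renders, the older of the two is simply never drawn, which matches “renders whatever is complete and newest”.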

  13. AR+ says:

    “Whether it’s the fault of SDL or not, the problem happens inside of an SDL call where we can’t see what’s going on. ”

    Wait, what? SDL is open source. Do you just mean, “I’ve never had to read any of SDL’s internal code before and so learning how it all works on the inside for this one bug would be a huge undertaking that the project doesn’t have time for”?

    Speaking of, has there been any talk about making Good Robot open source? As some FLOSS advocates point out, that doesn’t even necessarily mean gratis nor libre, but if you really want to support modding, you can’t beat that in both power and ease of implementation.

    (Actually, that might be major post idea. “What if open source commercial video games?”)

  14. kdansky says:

    I have a 120 Hz Monitor. Considering you made the big mistake of tying gameplay to framerate in the very beginning, and that is difficult to fix, well, tough luck. That said, this isn’t hard to avoid at all, you just have to lay out your main loop slightly differently, and pass around a different delta time. But you have to know this in advance, before you’ve finished the game. Funnily enough, you would have avoided your problem entirely, because your game logic and your render thread would be separate to begin with.

    At least you picked a sane 60, and not a “cinematic” (I hope you can hear the sneer as I say that) 30Hz. If you had chosen 30, I’d be yelling bloody murder though. Having an extra 30ms of response delay on every button makes action games downright sluggish.
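    A sketch of the “pass around a different delta time” layout being described (the struct and speed are illustrative) — movement is scaled by measured elapsed seconds instead of assuming one fixed tick per frame:

    ```cpp
    // Hypothetical entity: speed is expressed in units per second,
    // not units per frame.
    struct Robot {
        double x = 0.0;
        double speed = 120.0;  // units per second
    };

    // Called once per frame with the measured elapsed time in seconds.
    void update(Robot& r, double dt) {
        r.x += r.speed * dt;   // scales with real elapsed time
    }
    ```

    One half-second step and two quarter-second steps land the robot in the same place, which is exactly what makes the game speed independent of the framerate (and of the monitor's refresh rate).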

    • This is why your “next monitor” should be an Adaptive Sync-capable monitor (I know, Nvidia has G-Sync, but Adaptive Sync is open and license-free and part of VESA’s DisplayPort 1.2a standard).

      A particle explosion taxing the rendering pipeline would then only cause a FPS dip for a frame or two instead of halving the framerate for a while.

      Also, as I mentioned earlier, game logic and rendering should be decoupled, the same goes for input and rendering too.

      Example: Input could be at 100Hz, game logic at 60Hz and framerate at 144Hz (if the gamer has a 144Hz monitor and the PC is able to render that fast).

      With traditional vsync some jankiness might be observed visually, but it would feel very responsive. With a fast PC and vsync on, with vsync off on a slow PC, or when using Adaptive Sync, you’d be in gaming nirvana.

      Very few games have all 3 of input, logic, render decoupled (is it called detrippled if it’s “3” things?).

      If a mouse felt laggy in a game it’s probably due to input being in lockstep with the rendering loop somehow.

      Input can easily be put in its own thread.

  15. skeeto says:

    Without a locked swap interval, you’re going to lose vsync and the game is going to have nasty screen tearing problems. Virtually every OpenGL application should have a swap interval enabled. I definitely understand why you did it (I’m guilty, too), but you’ve painted yourself into a corner by coupling the game speed and video framerate. I think it’s going to have long-term consequences, exactly like those old DOS games that stopped working correctly as computers got faster. There are going to be forum posts on GOG in a decade where people are asking for help because they can’t run this classic Windows 10 era game properly on their weird future hardware. It’s running way too fast, or too slow, etc., even for their cyborg computer-brain interfaces.

  16. Thomas says:

    That second GIF is mesmerising.

  17. KingJosh says:

    So, I’m not seeing anything particularly tempting on this week’s Steam sale. But then I thought, “I’m gonna want to play Good Robot, but I never really played shmups/twin-stick shooters/whatevers!”

    So, any good games to recommend that are
    1. Newbie friendly. Good tutorial, easy-mode, stuff like that?
    2. On sale for a price that’s acceptable, even if someone just wants to try a few levels to get their feet wet with the genre?
    3. Will be OK on a reasonably modern, but not high-end, laptop. Multi-core, 2.4 GHz, 8 GB of RAM, integrated (AMD) graphics?
    4. Accepts an Xbox 360 controller?

    Thanks for any advice you may have!

    • Nidokoenig says:

      Beat Hazard is pretty good. You plug your music in and shoot things that come in waves allegedly formed by the music. You can also plug in podcasts if you have the audio file, but they will be a tad quiet. Worked on every POS I tried it on and is 360 controller compatible, even switches the KB+M graphics to controller ones. Your options are few enough that it doesn’t really need a tutorial, just feel out the possibilities, but I think there is one. If you get tired of the levelling mechanic, you can put on a Diecast and let the score multiplier keep climbing.

    • Echo Tango says:

      The Binding of Isaac fits all of your needs, except for being “newbie friendly”. If you don’t mind dying a bunch and/or not “finishing” the game, it’s a really fun time. Hell, the endings and story itself are basically paper-thin. The real fun just comes from playing the game, shooting dudes, and getting power-ups and loot.

      The game is actually meant to be played with an Xbox controller. So much so, in fact, that the game doesn’t properly tell you the buttons if you’re using a keyboard. WASD moves, arrows shoot, Q drops a bomb, SPACE uses your top-left screen item… and I think that’s it? If you’re going to use the controller, it should be even easier to get going, since it should actually display the controls on-screen. Worst-case scenario, you can just mash buttons every once in a while. :)

    • Galad says:

      I’m gonna suggest Nuclear Throne.

      It’s newbie-friendly enough in that it gradually ramps the difficulty, and losing only motivates you to try again, and for me at least, making progress is a grand joy.

      The price is low for the amount of hours you can potentially spend on it, and for the good feels said progress elicits. I also managed to get nearly to the end early on without really knowing what I was doing.

      It should definitely be OK. It’s a pixelized twin-stick shooter that runs at a fixed 30fps (engine stuff).

      The ‘overwhelmingly positive’ rating is not accidental.

      re: Shamus’ post – glad to see there’s a solution. I’ll make a suggestion to one or two of my favorite streamers. Say, Shamus, what would you say the ‘elevator pitch’ to suggest streaming of Good Robot would be?

    • Geebs says:

      I rather enjoyed Mutant Storm by PomPom Software, and of course Jeff Minter’s Llamatron. Bear in mind that the game they’re all descended from, Robotron, was designed to mercilessly kill you, so none of the twin-stick shooters is exactly easy.

  18. Decius says:

    I’m not a famous streamer, but I will stream it cold for a free copy…

  19. MichaelG says:

    The whole time I was reading this, I was yelling at the screen “It’s the swap interval!” Surprised there aren’t more comments on this when you Google.

    Glad you found it.

  20. Iceman says:

    If I understand the libsdl source correctly, SDL enables WGL_EXT_swap_control on Windows (and equivalent GLX extensions on X11) during startup. The cross-platform way to fix this is to call SDL_GL_SetAttribute with SDL_GL_SWAP_CONTROL. (You can find more information about how SDL invokes wglSwapIntervalEXT in src/video/wincommon/SDL_wingl.c.)

    • mewse says:

      More details:

      SDL1.2 (which I assume is what Good Robot is using) provides an attribute `SDL_GL_SWAP_CONTROL`, which you can set using:

      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, -1); // do whatever is default for the driver
      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 0); // no swap control
      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1); // sync to every vsync (60fps on a 60Hz display)
      SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 2); // sync to every other vsync (30fps on a 60Hz display)
      // etc

      By default, this attribute is set to -1, which means that the driver will do whatever its default behaviour is; SDL won’t call wglSwapIntervalEXT at all.

      (You can see SDL_GL_SetAttribute’s implementation inside SDL_video.c)

      `wglSwapIntervalEXT` is only called if the value is >= 0. It’s possible that you’ve set this value to 1 somewhere in your setup code? Or alternately, it’s possible that your video driver’s default behaviour is to cap at 60fps unless a game tells it otherwise; that actually seems like a pretty common default driver behaviour, in my experience.

      Also note that this attribute must be set BEFORE you create your window and set it up for GL; changing its value after that point will not have any effect.

      On the laptop I’m typing this on, the driver won’t even obey that request; it insists on always capping at 60fps no matter what. And that’s legal per the OpenGL specification; the swap interval is only a hint to the driver about what the game would like for it to do.

  21. Da Mage says:

    Vsync has always been a problem for my programming, and you should know that Nvidia and ATI settings will OVERRIDE what you have set with wglSwapIntervalEXT if they have not been modified.

    For a big streamer I would expect them to have their settings correct, but for a layman (and I was in this boat until I started programming my game) there is a high chance vsync will be turned on by default in their graphics control panel.

    • Mephane says:

      I am not a streamer, and yes, I have VSync on by default. But if that is a problem for Good Robot, I know how to change the setting on a per-program basis. I have no idea how widespread that knowledge is among gamers in general, though (not that it is at all complicated, and there most certainly exist step-by-step guides for this).

      • Da Mage says:

        Oh, I realise it’s easy to change…but I’m betting there are plenty out there, like I was before, that had never really opened the control panel before and don’t know it overrides the in-game settings.

  22. Vdweller says:

    Well Shamus, at least you have your website and some people know you anyway and you get to promote your game from a better position. The rest of us unheard of pleb developers go from forum to forum like medieval peddlers, posting the same crap again and again, hoping that this zero next to the “comments” label will maybe rise to one within a week.

    I’ve been developing a game for four years now, alone, and I fear every day that, in the end, no one will give a damn. So brighten up, because most of us devs have it much worse.

  23. Timothy Coish says:

    Hey Shamus, I’ve been using SDL2 in a few games of mine. I have some code:

    //Controls VSYNC
    const int VSYNC_WITH_TEAR=-1;
    const int IMMEDIATE=0;
    const int VSYNC=1;

    …later when initializing:

    Pretty sure this function is the wrapper for the OpenGL function you called directly, but it may not work for SDL1.2.

    Definitely worth trying:
    //wglSwapIntervalEXT (0);
    SDL_GL_SetSwapInterval(0); //or IMMEDIATE, where IMMEDIATE==0

    Having said that, since you’ve got it fixed, really who cares?

  24. RCN says:

    Hmmm… I wonder. If a streamer tries to play your game but makes the capturing software cap the FPS at 30 for whatever reason (reason 1 is that they’re a bad streamer, but maybe there’s whatever technical glitch they have to deal with), would the game be rendered at 30 FPS and thus slow down?

    Or am I being stupid, and capture software doesn’t really have the power to mandate the rendering speed, only how fast it captures, and only the graphics card drivers have this power?

    I’m a complete layman, just asking out of curiosity.

    • Mephane says:

      I think in that case the software would only record every second frame, not slow the game down to 30 fps. Would anyone even want to use a recording software that did otherwise?

      • RCN says:

        Yeah, but I heard this was a common fix for games where the physics are mandated by the framerate. Like Interstate ’76, where around the sixth mission you have to make a jump with your road-warrior car, but the physics is tied to the framerate, and on modern machines the car will only jump 1/3–1/5 of the distance needed, no matter how much momentum you manage to glitch into the jump.

        I had given up on trying to figure out something that didn’t involve too much technical fiddling when I heard you could use capture software to solve this problem and cap the FPS.

  25. Zak McKracken says:

    Money drops are too large and opaque?
    I can’t see a money drop in that GIF … they don’t happen to be smoke-coloured? Or did you mean the green thingies that looked like part of the explosion?

    (Do I need new glasses? Again?)

  26. Abnaxis says:

    Out of curiosity–and coming at this from the standpoint that I have no idea how your code is structured–did you ever try leaving SDL alone, but commenting out your own code that does the frame-limiting? Could the issue be one where both you and SDL are trying to accomplish the same goal (60fps limit) but both methods are stepping over one another in odd cases?

    If the problem isn’t your hand-written code or the SDL, but rather the interaction between the two, you run into one of those fun programming dilemmas: do you keep the code you’ve already written yourself, where you already know how it works, or do you keep the framework that gobs of other coders have worked on, with all its benefits of readability and testing, now that you’ve found out it automagically does the stuff you wrote in by hand?

  27. WWWebb says:

    Every new gif makes me think that Good Robot’s core gameplay loop will be finding new, cool weapons a la Diablo/Borderlands.

    That, and it took me a while to figure out that the weapon cooldown timer wasn’t a weirdly rendered shield.

    PS- I haven’t seen GR shoot rockets in a while. Is it still guns/rockets or is it now crazy-weapon/screen-bomb? Or is it just that you’re not showing guns, and all those crazy weapons with timers are shooting from GR’s rocket-hand.

  28. Vect says:

    Well, the Super Best Friends have recently gotten into streaming and this game (fast-paced indie action game) seems right up their alley. They also like to show off games like these on their channel, so there’s that. They’re probably not your poison (too much Japanese stuff), but they might have fun with it.

  29. Daemian Lucifer says:

    Hopefully someone will stream this game so I won’t feel like this was a waste.

    Send this article along with your game to TotalBiscuit (well, to be precise, whoever handles his mail) and he will most definitely stream it.

  30. Random Coder says:

    But then isn’t it generally better practice to keep your gameplay speed independent of your framerate by tying all movement-related code to delta time instead?

  31. psivamp says:

    So, I know that this is being released all over for Intel-compatible platforms.

    Is there any chance of getting an ARM release? There are some seriously beefy ARM-based systems out there. The Tegra K1-based Acer Chromebook 13 for example even has a fully-capable NVIDIA graphics stack.

    I could playtest it; heck, I would volunteer to try to get the build working — I already do cross-platform C++ development for work across Intel/ARM borders.

    I know you’re using SDL and that’s available on Ubuntu’s armhf repository.
