Frontier Rebooted Part 6: Worst-case Scenario

By Shamus Posted Thursday Jun 12, 2014

Filed under: Programming 97 comments

When it comes to rendering, speed is everything. Well, speed and looks. I mean, you need looks. No sense in drawing things if they look terrible. So the two most important things are speed and looks. And latency. Obviously latency is important. You can’t bloody well play a game if it takes several seconds for your input to make something happen on screen because the engine is building up these massive framebuffer effects. So the three most important things are speed, looks, and latency. And compatibility. What’s the sense in writing an engine that’s only fast and pretty and on one set of hardware? That’s buying into the pointless wanking and pissing matches between hardware manufacturers, right there. So our top priorities are speed, looks, latency, and compatibility. And consistency.

Let me start over.

In your typical game, you’re generally pushing to render stuff as fast as you can. That’s still the case, obviously. But if you’re developing for VR[1] then you have something new to worry about: stuttering[2]. In the old days, having the game hiccup and pause for a tenth of a second was nothing more than a mild annoyance. Maybe the player rounds a corner and suddenly gets line-of-sight to (say) the Doomsday Cannon. But maybe the model or texture isn’t in memory yet, or they haven’t been properly packed up and made available to the GPU. So the rendering stalls for a split second while it gets the assets ready to be drawn.

Now that the cannon is in memory, it’s not a problem any more. The player can play peek-a-boo with it all day and they won’t bump into any more stuttering[3]. A bit of stuttering is a harmless nuisance. We work to prevent it, but no engine is perfect.


But in the world of VR, stuttering is no longer a nuisance. It’s a dangerous failure state. When the screens are strapped to your eyeballs and you’re moving your head around, having the rendering skip some frames – even just one or two – can be jarring, disorienting, and even nauseating. Anything that causes physical discomfort to the user needs to be at the top of the priority list. Screw special effects. Screw graphics. We can’t afford to miss a frame. If that means the Doomsday Cannon spends half a second untextured, or unlit, or even completely missing, then so be it. As long as we keep drawing the scene.

This presents a scary problem. What if we just can’t keep up? What if we’re stuttering not because we’re waiting to cache some assets, but because the hardware literally can’t draw everything in the given time? What if there is suddenly just too much stuff to draw? If someone is running the game on hardware that just barely meets the requirements, you might run into a scenario where some arrangement of objects pushes us over the fatal limit, overburdens the GPU, stutters, and sends the user to puke city.
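To put rough numbers on the problem: a 75 Hz headset leaves a little over 13 milliseconds per frame. Here’s a minimal sketch of the budget arithmetic — every figure is invented for illustration, not taken from real hardware:

```cpp
// Invented figures: a 75 Hz headset allows ~13.3 ms per frame.
const double kRefreshHz = 75.0;
const double kFrameBudgetMs = 1000.0 / kRefreshHz;

// True if drawing `objectCount` objects at `msPerObject` each would
// blow the frame budget -- the fatal stutter scenario.
bool WouldStutter(int objectCount, double msPerObject) {
    return objectCount * msPerObject > kFrameBudgetMs;
}
```

If the worst-case field of grass makes this come out true on your minimum-spec hardware, then no amount of clever asset caching will save you — you have to cut content or raise the requirements.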

This becomes important when you’re dealing with huge numbers of cheap objects, like grass. In that case our worst-case scenario is one where the player is standing in the middle of a field of grass. For our hypothetical game, let’s say that this scenario is unlikely but not impossible. Which means that the engine needs to be able to handle this state gracefully.

You could make the case that it makes more sense to simply render this worst-case scenario every frame rather than allow it to surprise us. It needs to be able to handle maximum grass, so why waste precious CPU time trying to minimize it? If we need to be able to handle the worst-case anyway, then perhaps it’s safer to just make sure we’re always drawing the worst case? Once we get the game running at the desired speed, we can be confident that no matter what happens or what we need to draw, it shouldn’t harm our framerate or our poor user.

Of course, the counter-argument here is that we might want to cull grass because we’ve got mutually exclusive stuff in our world. Maybe we have trees. We need to be able to handle maximum trees. We need to be able to handle maximum grass. But there will never be a situation where you have the maximum of both at the same time. This spot of ground is either trees or grass, but not both. In which case it makes sense to do some kind of culling. Fine. But say you’ve got a few different systems playing off each other: How can you be sure there’s not some magic ratio of the three that will lead to frame skipping? You can’t test everywhere in all configurations.

I honestly don’t know. It might be better to simply apply the maximum load at all times to avoid any surprises, even if that means raising your minimum system requirements. In that case once the user has it running smoothly, they can feel safe that it will keep running smoothly and not oblige them to go mucking about with graphics settings once they have the headset on.

People are still working on this stuff and figuring out what the best practices are. It’s one of the reasons I’m so into VR right now. There’s room to learn new stuff rather than just read a white paper from someone who solved the problem fifteen years ago.

Are we a thousand words into this and still not making polygons? Apologies. I don’t think we’re going to make any polygons today. I think I need to laboriously talk around the problem for fifteen hundred words before I do something radical like write actual code.


Grass is interesting because it’s small. It usually doesn’t occupy much screen space, but if you can crouch (or it can be very tall) then there are situations where it might fill ALL screen space. It’s usually made up of tens of thousands of really cheap triangles. However, if your game engine is aiming for photorealism then you might want to blend the edges of the grass blades. This looks nice, but can get expensive fast. It’s also strange because it’s almost never the center of attention, yet it has a massive impact on the scene[4].

The problem with grass is that we need it everywhere, but we can’t actually draw it everywhere. We need to draw grass right here at the player’s feet, and we need to not draw grass on those far hills. At this distance we can just paint the hills green and it’s all good. The trick is how we transition between the two. Having grass suddenly pop in is jarring and distracting. Same goes for having it abruptly vanish again.

The standard trick is to have the grass fade in. At (say) twenty meters it’s just barely visible, but by the time you’re five meters away it’s fully opaque. The problem with that idea is that it drags us into the thorny world of alpha blending. (That’s fancy programmer talk for drawing semi-transparent surfaces.) For starters, drawing alpha-blended stuff is slower than drawing non-blended stuff. Second, you have to sort all the semi-transparent stuff you’re going to draw, so that you start with the most distant and draw the nearby stuff last. Otherwise:
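Using the made-up distances above (just barely visible at twenty meters, fully opaque at five), the fade itself is almost nothing — here’s a sketch:

```cpp
// Fade grass in with distance: invisible at 20 m, opaque at 5 m,
// linear in between. (The distances are the made-up ones from the text.)
const float kFadeStart = 20.0f; // barely visible beyond this
const float kFadeEnd = 5.0f;    // fully opaque at or inside this

float GrassAlpha(float distance) {
    if (distance >= kFadeStart) return 0.0f;
    if (distance <= kFadeEnd) return 1.0f;
    return (kFadeStart - distance) / (kFadeStart - kFadeEnd);
}
```

The alpha value itself is trivial to compute. The expense comes from what blending forces on the rest of the pipeline.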


Those blades of grass on the right show us what happens if you don’t draw back-to-front. The edges of the blades blended with the background just fine. But then when we drew the more distant grass, it got blocked from view by the wispy edges of the close grass. There’s no way to properly “insert” the far grass between the near grass and the background. That’s like painting and needing to put some paint down between the canvas and some paint you’ve already applied.

Finally, drawing back-to-front is more expensive because we get more overdraw[5]. If we weren’t trying to blend the edges of the grass above, then it would properly obscure our view of the grass behind it, and the GPU wouldn’t need to waste time putting those pixels down.

So fading grass in looks great, but it sucks. It requires some CPU-sucking sorting, it causes overdraw, and alpha-blended stuff is slower to draw. Also: Aside from speed concerns, sorting is annoying. You’ve got this awful sorting system right in the middle of your beautiful rendering pipeline. It introduces a lot of code complexity. Oh sure, sorting itself is basically a solved problem. But you’ve still got to do it at the right time. You have to do it just often enough to keep the scene from looking glitchy. You need to clutter up your rendering logic with stuff to make objects fade in and out. A couple of lines of code can easily become a couple of pages if you really want to do it right. So what is a cheap, corner-cutting, lo-fi developer to do?
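For what it’s worth, the sort itself really is the solved part. A minimal sketch of back-to-front ordering by squared camera distance (the structure and names are invented for illustration):

```cpp
#include <algorithm>
#include <vector>

struct GrassTuft {
    float x, y, z; // world position
};

// Squared distance is enough for ordering and skips the sqrt.
float DistSq(const GrassTuft& t, float camX, float camY, float camZ) {
    float dx = t.x - camX, dy = t.y - camY, dz = t.z - camZ;
    return dx * dx + dy * dy + dz * dz;
}

// Sort back-to-front: the most distant tufts draw first, so the
// wispy semi-transparent edges of nearer tufts blend over them.
void SortBackToFront(std::vector<GrassTuft>& tufts,
                     float camX, float camY, float camZ) {
    std::sort(tufts.begin(), tufts.end(),
              [&](const GrassTuft& a, const GrassTuft& b) {
                  return DistSq(a, camX, camY, camZ) >
                         DistSq(b, camX, camY, camZ);
              });
}
```

The hard part is everything around it: deciding when to re-sort as the camera moves, and keeping this step from snarling up an otherwise clean pipeline.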

There’s a trick I want to play around with. We’ll try it next time.



[1] Which we aren’t, but for educational purposes we’re pretending we are.

[2] Also known as hitching, stalling, frame-skipping, frame-dropping, and “crap”.

[3] Unless we had to dump something ELSE out of memory to make room for the cannon. In that case, the player might have another stutter when they go look at that OTHER thing.

[4] Try turning off grass in Oblivion. The game will suddenly look like Morrowind.

[5] Drawing over the same pixel again and again. In the ideal scenario, you would draw every pixel on screen exactly once.


97 thoughts on “Frontier Rebooted Part 6: Worst-case Scenario”

  1. “Our chief weapon is surprise…surprise and fear…fear and surprise…. Our two weapons are fear and surprise…and ruthless efficiency…. Our *three* weapons are–”

    I’m assuming this is what Shamus was going for, and somebody had to do it.

    1. Shamus says:

      Yeah. And really, it’s what it feels like when you’re doing this kind of work. You’ve got several things to worry about, and all of them are the most important.

      1. Ilseroth says:

        I’ve recently started trying to learn how to do game development and it really is this every step of the way. I am even using the Unity engine and still I get an endless round of, “Ok the only thing stopping me from doing this is this…. ok, but before that I have to do this and this. Oh I see to do one of those I have to do this, this, this, this and this… Wait what was I doing this for?”

        1. Jake says:

          yak shaving

          [MIT AI Lab, after 2000: orig. probably from a Ren & Stimpy episode.] Any seemingly pointless activity which is actually necessary to solve a problem which solves a problem which, several levels of recursion later, solves the real problem you’re working on.

          The Jargon File

    2. silver Harloe says:

      Whenever any programmer says something that sounds like a Monty Python reference, it *is* a Monty Python reference. Memorizing their works is a job requirement for programming. It’s in The Contract.

      1. Mephane says:

        And now for something completely different.

          1. DIN aDN says:

            *stock footage of applauding crowd*

      2. Blake says:

        “Why doesn’t the game work just like this nice flowchart?”

        “It’s only a model.”

        1. Akri says:

On second thought, let’s not go to 20 Sided. It is a silly place.

          1. Halceon says:

            Telling apart 20-siders from a distance.

            The Rustkarn

            1. Groboclown says:

              Except that he knows how not to be seen.

              Mr. Rutskarn, would you stand up, please?

  2. Scerro says:

    At this point it’s starting to feel like you’re just stalling.

    Not to say that I don’t mind theory. After getting spoiled with your 2D project where there’s far less theory and complexity, going back to 3D programming explanations leaves for a lot less feature excitement.

    1. AnZsDad says:

      I think it says a LOT for Shamus that his stalling is more interesting and informative than some people’s most information-rich output.

      1. ET says:

        Heck, people get paid to produce things which are less informative than what Shamus is doing when he’s “just stalling”. It’s why I started coming to this site in the first place! :)

    2. Kian says:

      Shamus has to stall now so the program won’t stall later :P

Design is the most important part of programming. Design and… (jk, not going to start that joke again). What I mean is, before you can write something in a language the computer can understand, you need to write it in a language a human could understand. Because you’re a human, and if you don’t understand what you’re trying to tell the computer to do, you won’t get very far along.

      With a solid design, the actual programming becomes a lot easier. But the first part of designing is identifying what problem it is that you want to solve, and what obstacles are in your way. Shamus could just write about how he solved the problems he encountered, but that’s the boring bit. How he got to that solution is what’s illuminating. Imagine if you reduced every story to “There was a bad guy, and then the hero defeated him.”

      1. Neil Roy says:

        That is VERY true! I found that when I took the time to thoroughly design my own games, that when it came time to actually code it, the program almost wrote itself. I was constantly surprised when I would compile it and thinking “okay, lets see what errors I made this time…” and then it would run fine without bugs. The planning had paid off dividends.

        Time spent planning and designing equals time saved coding, at least in my own experience.

        1. Deoxy says:

          Clearly, you can not possibly EVER be in management. EVAR.

  3. Rick C says:

    I wonder if you could do a bit of profiling at first startup, to figure out how long it takes to draw various things, and then use that to dynamically scale back what you draw–if you determine you need 1ms more than you have to get this frame out, and you know grass requires an average of .01ms/”blade”, you skip about 100 blades this frame. Likewise, if a tree takes .15ms, you skip about 7 of ’em (or maybe you have a simpler model that takes effectively no time to draw, and you substitute a few of those for the detailed model. (Yes, I know that’s veering into LOD, but I’m talking about time, not distance, as the discriminator.))

    1. ET says:

      This is actually something I’ve thought about myself, while screwing with one game or another’s graphical settings. Shouldn’t it be possible for a game to run some benchmarks on the user’s computer, then auto-set the graphical fidelity, or even shut off/on certain features? e.g. User A’s card is pretty beefy, but doesn’t have hardware support for feature X, so they get a generally high-polygon version of the game for the most part, but with feature X scaled back since it’s done on the CPU.

      1. Rick C says:

        To some extent I think big games already do that; WoW, for example, usually seems to have settings adjusted for your computer. There’s also a manual button you can press to recalculate the proper settings, presumably for when you’ve gotten better hardware.

        1. ET says:

          I think I remember Warcraft 3 doing that too. Maybe even some other older games. Too bad it’s not standard in the industry. Seems like about only 50% of games made even have something like this as an option, and only another 50% of that manage to get it working right. (i.e. Not over- or under-selecting the graphics options.)

    2. Humanoid says:

      The marketer’s solution would be to substitute frames containing subliminal advertising for faster PC components in place of those stutter frames.

      1. ET says:

        Yeah, but loading in the images for those frames would probably take up a couple cycles, potentially skipping more frames…oh…

  4. Geoff says:

    There was a game (but sadly I can’t remember which one) which handled grass via scaling instead of alpha. They were (as I recall) using Alpha clip, so no soft edges to try and sort. Objects that were outside of the cull distance (20m in this case) were scaled to 0 and culled. As they crossed the threshold they would start to scale up to 1 (with the pivot attached to the ground) at 5m. There may have even been a bit of an exponential curve to the scaling so that it scaled quickly at a distance and slowly when it was closer and more noticeable (say, going from 0 to 0.75 in 10m, but taking the last 10m to go the last 0.25 units).

    1. Paul Spooner says:

      It sounds like something along these lines is a better solution than the alpha blending one.

      I’d be interested to see how well a “popping out of the ground” implementation would work. Something that introduces distant objects below ground level, and slowly moves them upward through the ground as you approach them. That way you get the draw and sort for free with the geometry. Could probably even make it into a vertex shader, or whatever handles modifying the position of objects.

    2. ET says:

      Too bad you can’t remember the name; It’d be really cool to see how this compares visually, to an alpha-blended approach. :)

    3. Neil Roy says:

That’s an interesting solution. Although personally, I have seen games where the grass just pops in and honestly it doesn’t bother me; I don’t even notice it most of the time, as your focus is usually on something else. Something like this sounds like an ideal solution rather than slowing down the system with blending.

    4. WJS says:

      See, what I would probably try first would be to vary the grass density with distance. All the grass appearing at once might look like crap, but a bunch of grass appearing inside other grass would be much harder to notice, probably even harder than if all the grass was scaled down.
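[The scale-instead-of-alpha trick described in this thread might look something like the following sketch. The distances and the ease curve are made up — the commenter only recalls the rough shape of the effect.]

```cpp
#include <cmath>

// Scale grass tufts instead of alpha-fading them: zero-sized (and
// culled) beyond 20 m, full size at 5 m, with a square-root ease so
// most of the growth happens while the tuft is still far away and
// hard to notice. All numbers are invented for illustration.
float GrassScale(float distance) {
    const float kCullDist = 20.0f;
    const float kFullDist = 5.0f;
    if (distance >= kCullDist) return 0.0f;
    if (distance <= kFullDist) return 1.0f;
    float t = (kCullDist - distance) / (kCullDist - kFullDist);
    return std::sqrt(t); // fast early growth, slow finish
}
```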

  5. Daemian Lucifer says:

    Nobody expects the spanish renderer!

    1. evileeyore says:

      Funny enough the actual Spanish Inquisition gave a 30-40 day notice (refereed to as a period of grace) in which the accused could come to the Church and reconcile themselves with the Church to clear up any charges.

      So, actually everyone expected the arrival of the Spanish Inquisition.

      1. Akri says:

        ….my worldview is shattered.

      2. Daemian Lucifer says:

        Nobody expects to be surprised by the spanish inquisition!

        1. evileeyore says:

          It is one of their chief weapons…

  6. TMTVL says:

    […]but because the hardware literally can't draw eveything in the given time?[…]

    As a small aside, I always turn off grass in Oblivion because it just seems way too tall. It makes me wonder if I’m playing an Argonian or a gnome.

    1. krellen says:

      Honestly, what it mostly means is you’re a modern person used to well-manicured grass and modern, short-stem varieties we have bred in the intervening centuries. Grass gets pretty tall naturally.

      1. McNutcase says:

        Ayup. My body proportions are a little weird (torso of a guy six foot four tall, legs of a guy a foot shorter, so I wind up five ten but with really annoying clothes sizing requirements) but wild unkempt grass is easily waist-high on me, in temperate zones. Even the short-stem lawn grass I hate on general principles will get past knee-high if you don’t mow it. Grass is only short when it’s continuously cropped, whether by animals or mechanically.

        As a further aside, pretty much all our grains are some kind of grass. Wheat, oats, barley, rye… all grasses. Even sweetcorn, believe it or not; it’s been bred from teosinte, which is a grass. Those nine foot corn stalks are grass.

        1. DIN aDN says:

          Not to mention sugarcane. But that being said, it really does depend on the area and the type of grass as to how tall it grows. Here in Australia, for instance, there are wide open patches of native grasses that only get above knee-height during flowering season [very pretty, though] and at other times are kept pretty low by the local grazing wildlife [mostly kangaroos and their relatives].

          1. Duneyrr says:

            This is also grass:

        2. Peter H. Coffin says:

          And rice. Which is probably the biggest cereal crop in the world. Buckwheat, quinoa and amaranth aren’t, but outside of that, you’re getting into staples that aren’t much grain-like at all: beans, nuts, tubers…

        3. Vi says:

          I sort of already knew in the back of my mind that corn = grain = grass, but when you put it like that it sounds so trippy!

      2. Daemian Lucifer says:

        Also,for the sake of fairness,grass tends to grow big in warmer climates,while it remains short and stubby in colder ones.So while you should expect tall grass in oblivion,you should expect short one in skyrim.

        1. Ciennas says:

          Except for that region that’s in the caldera of a massive volcano. I’d expect taller grass there.

          (I also think we see Bethesda’s “Exit Strategy” for whatever player choices they allowed in Skyrim.)

          1. WJS says:

            They allowed player choice in Skyrim? (Zing!)

    2. Groboclown says:

      This was partly because Oblivion used something like 8 polygons for grass (there was a mod to turn this down to 3). Because, you know, photo realistic grass in a world with potato-face people.

      Which means the only correct answer to this is to have your video game setting in either an alien world which does not have grass, or set it during or before the dinosaurs (about 65 MYA by current research standards).

      But then you have to deal with ferns. Many, many ferns.

  7. Jonn says:

    Very interesting read. Couple typo / grammar things:

    It's usually make up of tens of thousands a really cheap triangles.

    Guessing you want:

    It's usually made up of tens of thousands of really cheap triangles.

    The 4th… note? Addendum? Anyway link #4 needs a period after it.

    Will be interesting to see where this goes.

  8. Mersadeon says:

My biggest worry with the Oculus and its competitors has always been: what will happen to the mouse? Will we use it as cross hairs zipping around independent of head movement to give it the needed precision? Do we disable it and bind the cross hair to the head movement to keep it from feeling weird? I just cannot see a good solution for this. What are your thoughts, Shamus?

    1. LazerBlade says:

      I think this is why people like Shamus experimenting with dev kits are a must. The way to figure this out is to try stuff and see what feels weird and what doesn’t, as that type of thing is very inaccurate on paper.

    2. Sure says:

      The Power Glove will make its triumphant return!

    3. TMC_Sherpa says:

      The short version is I think the mouse will go away.

The long version is using something else to track your hands. It could be something as “simple” as a glove, but then you are talking about more gyros and position sensors to track and calibrate, so my best guess is an Nth-generation Kinect type thing. I can see a dedicated box that can do mo-cap without the dots, have enough resolution to track fingers, do all the image processing, and spit out a couple data points to feed whatever engine you are running. Someone will come up with a Lawnmower Man GUI interface (of which I think there are several already done and dusted) and you’ll just poke at the air.

      Way long term there has been a decent amount of research into reading and sending signals into the brain so it’s possible someday everyone will be wearing colanders on their heads to play games.

      Of course in this hypothetical future Real Men will get sockets in their skulls and skip all that wussy ‘trode crap. So chombatta, you gunna run with the men or the boys?*

      *Note: Male pronouns are being used in the Californian “dude” sense of the word and are not meant to preclude or exclude anyone based on their gender

      Edit: Or Sure could jump in with a one line answer while I’m typing and steal my thunder *shakes fist*

    4. Shamus says:

From what I gather, both have been tried, and it seems like making aiming and looking independent is a must. You don’t want every mouse movement to drag the head along with it, and we need the mouse or else you can’t turn all the way around. (Because your head, under normal circumstances, does not spin freely 360 degrees.)

      Carmack mentioned that once the two of these systems were divided, people began treating the mouse like the gun. So, when they walked up to an NPC they would naturally lower their weapon.

      Sounds kinda neat.

      1. Doomcat says:

        Just going to throw This on here

        Just watch the video, its a prototype of a new kind of control to go with the occulus (made by some other people of course) And its the kind of thing I’m personally very excited for.

        Reminds me of Dot.hack in a good way, not perfect (there is still no way to ‘walk’ with it) but definitely very cool.

        1. Cinebeast says:

          Oh my gosh that was hilarious. Also very promising, even if it’s in an early, early build.

        2. ET says:

          Yeah! VR beer pong!!!

        3. Steve C says:

          That was impressive. All that in 3 days? I haven’t finished mowing my lawn in 3 days.

      2. Decius says:

        I want my hand to control ONLY the direction I interact with, my foot to control the direction I move and turn, and my head to control the way I see.

        That’s two axes on the hand (X, Y) plus buttons, three on the foot (X, Y, yaw), and three on the head (roll, pitch, yaw).

Very doable, but slightly disability-unfriendly, and it requires an input device that I haven’t seen implemented yet.

        1. Paul Spooner says:

          While we’re inventing input devices, the mouse really should give the computer rotation information as well. Especially now that we aren’t using roller balls and mickey wheels any more, but instead using optical surface tracking.

          Plus rotating the mouse to rotate the view makes total intuitive sense. My kids do this all the time with Minecraft. Come on computer input developers! Give us a three axis mouse! (x, y, and theta)

          Then you could just strap one of these three axis mouse things to your foot, and hey presto, there you’ve got your foot controller.

          1. ET says:

            The heck with 3 degrees of freedom, why not just go for 6? I saw a video a couple years ago, showing something like this. I couldn’t find the original video, but apparently people are still working on it. Here is somebody showing a system, where you just have a coloured…dodecahedron (?) made out of paper. :)

            Ooh! I just remembered that there’s an even more natural-feeling method — coloured gloves! :D

          2. Volfram says:

            They actually make 3D mice, but very little software has proper support for them.

Kerbal Space Program added 3D mouse support in one of the more recent updates.

            1. Paul Spooner says:

              You kids and your silly toys. I’ve used 3d mice several times before. Other engineers are always saying how awesome they are. How everyone will be using them in five years.

              But it’s a lie. They suck, which is why no one supports them.

              You know why? Because the table I’m working at is a functionally 2d surface, I’m working under the constant acceleration of one Gee, and my brain is wired for direct manual motion. That means that 3d “mice” either need to be integrative devices (which makes them thumb sticks instead of mice, and much less precise and quick than a mouse) or direct motion devices, in which case I have to repeatedly lift my hand up in the air (which, due to the gravity I mentioned, is much more tiring and still less precise than using a mouse)

              Even if we were working in microgravity where fatigue isn’t an issue, the 2d table still gives you a surface to brace against for precise small movements. Maybe we’ll have a working volume with a brace lattice for precision or something, (or a multi-mode input, like using the “shift” key in Blender) but until then it’s going to be direct motion 2d spatial input devices, and damn the “3d mouse” evangelists.

              I’d still like to see the third axis though (rotation) as it doesn’t add any major problems (other than the temptation of additional wrist strain (which may be why it hasn’t been implemented yet, too many potential lawsuits)).

      3. Paul Spooner says:

In addition, you can start using the user’s head position as input. Suddenly you’ve got another channel of game input that’s basically included in the display screen. So exciting for game developers!

      4. Neil Roy says:

I really love the way World of Warcraft does this. I think all games should adopt a similar input system; they seem to have gotten it just right. At least for a third-person perspective, anyhow. In a first-person perspective the head movement (the way you are looking) could be limited to a certain angle to the right or left of the direction your body is facing; if the body turns and the head is looking too far left or right, then your view starts to turn to maintain that maximum angle.

    5. Daemian Lucifer says:

      Mice can still be used as a proxy for your hand,though personally,a completely 3d controller,like the wiimote,or kinect work much better.Or better yet,lets bring back the power glove.

      However,without the complete neuromotor hijack of your brains impulses,I dont see vr headsets or kinect type games to become as influential as the screen and controller any time soon,because flailing around blindly requires lots of open space,which one does not expect in your average room.

    6. Disc says:

      This guy used Wii Motes and Nunchucks to play Doom 3:

      I’d imagine it shouldn’t be too hard to make something similar for simulations like the Arma series, where you can already move your head separately from your gun/land vehicle/boat/aircraft.

      1. postinternetsyndrome says:

        The Razer Hydra is essentially two wii chucks. There’s a bunch of experiments and tech demos with it and oculus, heres one:

        I don’t think it’s widely available or even produced anymore though.

  9. Shamus you have my utmost nerdiest respect, I do hope you have a comfy chair.

    For those that missed the nice skit Shamus recreated, take a look at Monty Python: The Spanish Inquisition, a true classic.

  10. Not sure on the grass thing.
    Personally I can think of fading which is obviously high quality but expensive.
    But doing a simpler parallax method where you have x number of rows of grass, so you’d have, let’s say, 10 to 20 layers of parallax grass; at that many layers the player would not really notice if the farther back ones pop in or pop out (if walking backwards).
    This does mean that one would need to group the grass in a way into groups rather than purely distance, so you are popping in groups rather than individual grass.
    Also the farther away the grass is the larger the grass group can be, I guess.

    You could also probably get away with just using blur instead of alpha blending on grass that is further away, the effect could potentially make the grass look more varied too.

    That missing cannon texture example is a real issue.
    One possible way to solve it is to give the object a base color, and if for some reason the texture for an object is not in memory, then render using the base color (black or very dark grey if the cannon will use a black texture). This will look odd for a second as the player rounds the corner, but a second later it’s fully textured and there’s no render stutter.

    I think Shamus mentioned this previously, that it may just be better to render whatever you have and keep the frame rate smooth.

    Also, as to the amount of detail to render with: what a game could do at startup (the first time after install) is a quick performance test. Render a few frames, gauge performance, then dial back one notch and use that as the default, but still let the user tweak stuff if they want.
    I have noticed a few games or game engines that do this now, but I'm not sure if they do a quick test or just go by a name checklist (which could quickly get outdated in the near future; I'm sure many have noticed old games claiming your brand-new superduper GFX card is "too old" or weak, right? That's the limit of a checklist for gauging system abilities).
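    A rough sketch of that startup benchmark, assuming a `render_test_frame(level)` callback supplied by the engine (invented here) and a 60fps frame-time budget:

```python
import time

def pick_default_quality(render_test_frame, levels, target_ms=16.6, frames=30):
    """Walk quality levels from highest to lowest, time a short burst of
    test frames at each, and return one notch below the first level that
    hits the frame-time target (for headroom), as suggested above."""
    for i, level in enumerate(levels):
        start = time.perf_counter()
        for _ in range(frames):
            render_test_frame(level)
        avg_ms = (time.perf_counter() - start) * 1000.0 / frames
        if avg_ms <= target_ms:
            return levels[min(i + 1, len(levels) - 1)]
    return levels[-1]  # nothing hit the target; use the lowest level
```

    Measuring real performance instead of consulting a name checklist means brand-new cards the game has never heard of still get sensible defaults.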

    Oh Shamus, are you using LOD for the grass?
    Is it possible to do it so that each LOD is actually twice as many straws of grass or similar?

    So at LOD “0” it’s your normal grass texture, then at the next LOD level you squeeze in twice as much grass in the texture, but render half as many straws if you get what I mean?
    That would hopefully exponentially reduce the polygon count for the grass.

    Another thing that might improve the looks is to make the grass closer to the camera taller than the grass farther away. If done right it will look fine, and the different rendering of the grass in the back is masked by the grass in the front; you will see it through the grass in front, but any flaws due to whatever tricks are used are essentially hidden. (Works better the taller the grass is relative to the player.)

    1. Geebs says:

      One way to do it is to use clumps of individual blades nearest to the camera, low poly textured clumps (x-shaped billboards) further away, big square waffles further away than that, and then sheets of stippled texture parallel to the ground at the furthest distance.

      I wrote a “meadow” system that keeps a set number of positions and rotations for grass tufts in an array, which makes up a square patch surrounding the player; when they get a set distance away from the player they get moved to the opposite side of the patch. They get displayed using instanced drawing – the same model gets drawn each time, but the positions and rotations get changed. This means that you don’t have to arrange grass tufts on a grid, which looks fake, and you don’t need to keep track of every single grass tuft in the world, which rapidly gets very memory intensive. You can also keep the array of positions in graphics memory and update it as a ring buffer, which speeds up drawing and saves bandwidth.
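      A sketch of the wrap-around step described above, assuming tuft positions stored as `[x, z]` pairs in a square patch centered on the player (all names invented):

```python
def wrap_patch(positions, player_x, player_z, patch_size):
    """Any tuft that has drifted outside the square patch centered on the
    player gets shifted one full patch width to the opposite side, so the
    same fixed set of instances always surrounds the camera."""
    half = patch_size / 2.0
    for p in positions:
        for axis, center in ((0, player_x), (1, player_z)):
            d = p[axis] - center
            if d > half:
                p[axis] -= patch_size
            elif d < -half:
                p[axis] += patch_size
    return positions
```

      In the real thing the positions live in a GPU buffer updated as a ring, and the same tuft model is drawn once per position with instanced drawing, so the CPU never touches individual tufts per frame.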

      One advantage of rendering actual geometry next to the player is that the transparency and sorting issues become much less apparent and you can get away with just doing alpha-to-coverage, which doesn’t need blending (but does have an overdraw penalty).

      Kevin Boulanger describes a pretty good implementation in his 2008 PhD thesis.

  11. neolith says:

    “So what is a cheap, corner-cutting, lo-fi developer to do?”

    Adopt an art style that doesn't need alpha-blended grass?
    Make a game that doesn’t have grass?
    Use some middleware where somebody else did the dirty work?

    …but you’re not going to do any of those, are you? :D

    1. Tizzy says:

      Sterile clean corridors are looking pretty attractive right now, aren’t they?…

      1. MichaelGC says:

        Or somewhere dark and subterranean … without a lot of vegetation …

        I know! A sewer!

        1. ET says:

          Pshh! Sewers need good water graphics. Dungeons all the way!

    2. Bryan says:

      Order-independent transparency! Which means that I might get to carry on about that for far too long again!


      (But seriously. The nvidia paper on dual depth peeling’s main point — that is … dual depth peeling, duh :-P — is interesting, but I like the hack midway through it, slightly modified. First render all the fully-opaque polygons with Z buffer writes and depth testing on. Then render all the partially-transparent polygons (that is, the ones whose textures have a non-1 alpha at any point) and “discard” — from the fragment shader — any fragment with alpha!=1, also with depth testing and Z buffer writes turned on. Then render all the partially-transparent polygons again, this time “discard”ing all the alpha==1 fragments, with Z buffer writes turned off, but with the stencil buffer op set to increment with each fragment, and rendering to a texture, using IIRC an additive blend function, doing some weird exponential math in the shader. Then loop through all the stencil buffer values, render a single screen-sized pair of triangles for each level, bind a uniform to that value, pass in that original texture, and use the uniform to do some combination math that averages the color values correctly. All of a sudden you don’t have to sort *anything*. Of course it does mean lots of draw calls, but the vertices and indices can be stuffed into a static GPU buffer as well…)
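      The resolve step of that scheme boils down to some averaging math. Here is a CPU-side sketch of the "weighted average" resolve from the dual depth peeling paper, with fragments as `(color, alpha)` tuples; the real version runs in a shader against the accumulation texture and stencil counts:

```python
def weighted_average_resolve(background, fragments):
    """Order-independent compositing: average the transparent fragments
    instead of sorting them, then treat the stack as n layers of that
    average fragment composited over the background."""
    if not fragments:
        return background
    n = len(fragments)
    sum_a = sum(a for _, a in fragments)
    avg_color = tuple(sum(c[i] * a for c, a in fragments) / sum_a
                      for i in range(3))
    avg_alpha = sum_a / n
    visibility = (1.0 - avg_alpha) ** n  # how much background shows through
    return tuple(avg_color[i] * (1.0 - visibility) + background[i] * visibility
                 for i in range(3))
```

      For a single fragment, or a stack of identical fragments, this matches ordinary sorted alpha blending exactly; it only diverges, gracefully, when overlapping fragments differ from each other.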

      1. Piflik says:

        So instead of sorting a nearly sorted set of polygons each frame, you want to draw each pixel twice as many times?

        Not sure if it really increases draw calls, since you could do all this on the GPU, methinks (don't quote me on that), but it still doesn't sound very practical. Sorting transparent objects each frame is not as bad as it's made out to be, since sorting nearly sorted sets is of linear complexity.

        1. Bryan says:

          …No, not draw each pixel twice as many times, or at least I don’t think so. I believe it draws each one just as many times as you’d have to do if the polygons were sorted. The fragments are either discarded by the depth test during any pass (because they’re behind a fragment that’s fully opaque), discarded by the alpha!=1 test in the second pass (…as the first thing in the shader), or discarded by the alpha==1 test in the final pass (again, first).

          Any given output fragment that includes any partially-alpha samples will have just as many render operations done to it as the sorted case: it’ll have the bottom fully-opaque fragment, the next partially transparent fragment, etc., etc., just they won’t necessarily go in order.

          There is the last full-screen-polygon render, but the only math done per fragment in that pass is dividing the sum of all input fragments (the data texture that everything else rendered to) by the count (a uniform for each stencil-buffer value). And that math was done a few times over in the sorted case, when each layer alpha-blended into the previous one. It’s multiple draw calls, too, but the stencil buffer means each final-output fragment gets rendered only once.

          Unless I’m completely misunderstanding how either the stencil buffer or the discard operator is implemented, which I suppose is possible…

          1. Richard says:

            “as the first thing in the shader” simply doesn’t matter at all.

            Because of the way that shaders (usually) run in a GPU, what actually happens in (almost all) GPUs is that the entire shader always runs to completion.

            The GPU still does all the calculations regardless of the result, it’s just that if ‘discard’ is called at any point the result is ignored.

            A lot of the optimisations that you do without thinking on the CPU don’t have any effect on a GPU, and some even make it slower.

            1. Bryan says:

              According to NVIDIA, that's only true on GPUs before the GF6 series, which is many years old now. Predication is no longer required for either the vertex or the fragment program as of GF6-series cards.

              In the fragment processor (which this is), GPUs that age or newer can do SIMD branching, which is "very useful in cases where the branch conditions are fairly 'spatially' coherent", and that is the case for most alpha channels. Only around the edges of "objects" do the alpha values change from 1 to not-1; everywhere else the branch conditions will be the same as for the neighboring fragments.

              But even if it *were* true that in this case every pixel would get drawn 2*N+1 times (where N is the transparent polygon count), then the question becomes, is that a higher performance penalty than having to send the vertex data down to the GPU every frame, over the PCIe bus, which is slow compared to graphics memory from the GPU, and *very* slow compared to the GPU running code, and I think blocks both GPU and CPU? I have no idea for sure, as it may depend on your hardware. But doing it this way means that almost all geometry can be downloaded to the GPU once, rather than every frame, as camera movement is then doable by just changing a uniform mat4.

              (…Doing it this way also gets per-fragment ordering correct. There are a whole bunch of ways to arrange polygons that make the “sort” operation completely-undefinable when done per triangle, e.g. from MichaelG, another commenter on this very blog. If you don’t care about the polygon order then it’s possible to actually render these scenes… :-) )

  12. Piflik says:

    “Those blades of grass on the right show us what happens if you don’t draw back-to-front. The edges of the blade nicely blended with the background just fine. But then when we’re drawing the more distant grass it gets blocked from view by the wispy edges of the close grass. There’s no way to properly “insert” the far grass in between the near and far stuff.”

    You can also render front-to-back; the only difference is the blend equation. You'll still have to sort, though, since you have to draw either back-to-front or front-to-back and can't render in an arbitrary order.

    But sorting is basically of linear complexity in this case, since there are not many changes each frame (and you HAVE to sort each frame), so you have nearly sorted lists to begin with. It is still not a free problem, but not as bad as sorting usually is.
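    The point about the blend equation can be sketched with premultiplied alpha: "over" for back-to-front, "under" for front-to-back. The two give identical results; they just consume the sorted list from opposite ends. Layers here are `(premultiplied_color, alpha)` tuples in a toy CPU model, not shader code:

```python
def composite_back_to_front(layers, background):
    """Standard 'over' blending; layers ordered far to near."""
    c, a = background
    for cs, as_ in layers:
        c = tuple(cs[i] + c[i] * (1.0 - as_) for i in range(3))
        a = as_ + a * (1.0 - as_)
    return c, a

def composite_front_to_back(layers, background):
    """The 'under' blend equation; layers ordered near to far.
    Accumulated alpha tracks how much each new layer is occluded."""
    c, a = (0.0, 0.0, 0.0), 0.0
    for cs, as_ in layers:
        c = tuple(c[i] + cs[i] * (1.0 - a) for i in range(3))
        a = a + as_ * (1.0 - a)
    bc, ba = background  # finally, the background goes under everything
    c = tuple(c[i] + bc[i] * (1.0 - a) for i in range(3))
    a = a + ba * (1.0 - a)
    return c, a
```

    Front-to-back has the nice property that once accumulated alpha reaches 1.0, everything behind can be skipped entirely.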

    1. Decius says:

      Can’t you parallelize the sorting by using a Z-buffer that stores the first N transparent polygons? Record the color, alpha, and distance of each fragment under a pixel (dropping the most distant if there are already N stored), and then when all polygons are drawn, sort the fragments for each pixel by distance and determine the final color. (Also stated below)

      I expect strange results in specific cases where the number of transparent polygons in a line exceeds N, but I don’t think there’s a graceful way to handle more than N stacked transparent polygons anyway.
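      This idea is essentially a per-pixel k-buffer: keep only the N closest transparent fragments, then sort that tiny list at resolve time. A toy CPU sketch, with fragments as `(depth, color, alpha)` tuples and all names invented:

```python
import heapq

def insert_fragment(pixel_frags, frag, n):
    """Keep at most n fragments per pixel, dropping the farthest.
    The heap is keyed on negative depth so the farthest sits on top."""
    heapq.heappush(pixel_frags, (-frag[0], frag))
    if len(pixel_frags) > n:
        heapq.heappop(pixel_frags)  # discard the most distant fragment

def resolve(pixel_frags, background):
    """Sort the surviving fragments far-to-near and alpha-blend them."""
    c = list(background)
    for _, (depth, color, alpha) in sorted(pixel_frags):
        c = [color[i] * alpha + c[i] * (1.0 - alpha) for i in range(3)]
    return tuple(c)
```

      As noted, pixels with more than N transparent layers come out slightly wrong, but N-deep stacks of glass are rare in practice.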

      1. Bryan says:

        The NVIDIA dual depth peeling paper may be relevant. It has a bunch of different ways to do transparency without sorting polygons. I’m a fan of “weighted average”, modified to handle fully-opaque individual fragments by playing tricks with the depth and stencil buffers.

        Sounds like you’re thinking of something vaguely similar to an A-buffer, which isn’t in this paper but is in the Wikipedia order-independent-transparency article?

      2. Piflik says:

        Would not work. Drawing is done on the GPU to the frame buffer (actually to the back buffer while the front buffer is shown; they are swapped when drawing is done), while sorting would have to be done on the CPU. And transferring data between CPU and GPU is a bottleneck in the pipeline (that’s why we want to minimize draw calls).

        If you want to draw multiple objects and then sort the drawn pixels, you’d need a buffer for each object. Also, pixels are only color; there is no depth information in them. You do have depth information in a ‘fragment’ that you can use in the pixel shader (or fragment shader) to do some magic, and that is used to determine whether a pixel is drawn for opaque objects (or with alpha testing) by comparing it with the depth buffer, but for alpha-blended pixels this is sadly useless.

        1. Kian says:

          It’s not strictly true that you don’t have pixel depth information. That’s what the depth-buffer is. You have several buffers: the back buffer is where you draw the colors of each pixel (you can stick alpha there as well). As you said, the fragment shader receives depth information, and that depth information is then stored in the depth-buffer if you have it enabled.

          The depth buffer starts at the max value for depth, and when you draw a pixel you compare your pixel’s depth with the value stored in the depth-buffer. If the buffer’s depth is higher, that means your pixel is closer to the screen. You then draw the pixel and replace the value in the depth buffer for the one from your drawn pixel.
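          The depth test described above, reduced to a few lines (a toy model; real hardware does this per fragment, massively in parallel):

```python
MAX_DEPTH = 1.0  # the depth buffer clears to the far plane

def try_draw(depth_buffer, color_buffer, x, y, frag_depth, frag_color):
    """Draw the fragment only if it is closer than what's already there,
    then record its depth so later, farther fragments get rejected."""
    if frag_depth < depth_buffer[(x, y)]:
        depth_buffer[(x, y)] = frag_depth
        color_buffer[(x, y)] = frag_color
        return True
    return False
```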

          You could store the depth buffer, read it as a black and white image, and you’d get a cool effect where things closer to the screen are brighter than things far away (no colors though).

          Of course, if you want to store the depth of several objects, you need a buffer for each of them, as you said. And doing post processing on that would be more expensive than doing the sorting up front.

          1. Piflik says:

            That’s just what I said. You do have depth information available in the fragment shader, and it is used to determine whether an opaque fragment is rendered, via the depth buffer, but there is only ever a single value for each pixel in this buffer. No way to sort transparent fragments.

  13. Decius says:

    So, what’s wrong with doing the overdraw, tracking the color, alpha, and distance of the closest N transparent polygons for each pixel, and then calculating the final color as the last step?

  14. “So what is a cheap, corner-cutting, lo-fi developer to do?”

    Oooh! I know, I know! Make a game set in an underground dystopian complex à la Paranoia and get Rutskarn to voice Friend Computer!

    1. MrPyro says:

      “That substance on the ground is green, and is therefore above your security clearance, citizen.”

    2. ET says:

      Shamus! y u no work on this? :P

  15. Peter H. Coffin says:

    Once you get to the point of actually playing with the 3d stuff, you’ll probably want to show screen shots. I recommend Phereo as a tolerable joint to throw things that actually support various ways of rendering 3D: from primitive freeviewing and red-blue glasses, to supporting nVidia-standard 3D displays and the Oculus itself. Free accounts are only allowed 300MB uploads per month, but that’s gonna be more than enough for a couple of screen shots per week. (there’s a client tool that’ll convert images, too, so you can render for left and then right eyes, load them into the tool, put on a cheap pair of red-blues to check that it’s doing what you want, LONG before you’ve even got hardware. Then upload straight from the library to the website to share with the world.)

  16. Paul Spooner says:

    A trick for easily implementing bulk object instantiation for LOD control you say? Do tell!

  17. MichaelG says:

    So despite your Patreon riches, you *aren’t* ordering an Oculus Dev Kit???? Why not?

    Mine is due to arrive in July. Don’t be the only kid without one!

    1. Shamus says:

      I am, but I have to wait for the first payout. :)

      I’m going to have to wait until August. :(

  18. Jason Hutchens says:

    Nobody wins a wanking and pissing match.

  19. thak says:

    “When the screens are strapped to your eyeballs and you're moving you're head around…”

    Should be “you’re moving YOUR head around…”

    1. Hehe! Poor Shamus, he’s got like a hundred editors here :P
      On the positive side, though, Shamus’s articles contain fewer errors than “professional” articles. Even without the Army of Shamus correcting him, he still writes better articles than the majority of pros out there. (In some online mags the standards have dropped horribly.)

  20. Neko says:

    I wonder if you could treat grass the way snow and rain are handled in Minecraft; a bunch of planes that are arranged cylindrically around the player, anchored to the ground but which pop in and out quite quickly to stay in the area around the player. Look directly up while it’s snowing at night and move around and you can see the effect more clearly.

    Although obviously, you can get away with a lot more in Minecraft because of the graphical style…

  21. Neil Roy says:

    In your screenshot of the grass which was drawn out of order, you illustrate the problem with ordering, but in that example you have alpha-blended grass close to the camera. What if you only blended the grass in the distance, so any problems due to ordering won’t be noticed because of the distance, and didn’t blend the grass up close?

    1. WJS says:

      You might as well not blend the grass at all then. Blending is expensive, and only doing it in the distance where the player can’t see it anyway seems kind of stupid.
