Frontier Rebooted Part 7: What Have We Learned Today?

By Shamus Posted Wednesday Jun 18, 2014

Filed under: Programming 66 comments

So grass is lightweight but high poly and you need lots of it blah blah blah. Let’s make some.

The trick I want to try here is to just have the grass rise out of the ground as you approach. Which sounds ridiculous. Wouldn’t that just make it look like grass is magically growing all over the place? Wouldn’t all that motion attract your eye and be incredibly distracting? Wouldn’t it just be easier to have crap fade in?

If someone had suggested this trick to me I would have called them a moron and made them stay late to defrag all the hard drives in the office as punishment. But it works. It works so well that you’ve likely seen it happen without noticing it. It works so well that I looked at it for hundreds of hours before I noticed. Remember this little game?


Yeah. Skyrim. To be clear: I’ve got the draw distance turned down a bit, which makes it easier to show this off. Also, this might not be representative of the game proper, since I’m running a ridiculous number of mods. (My collection has grown since I compiled that list.) But this is enough to give you an idea of what I’m talking about. Let me zoom in on the center of the screen…


The little hill there doesn’t have any grass. Then you take a few steps forward…


That stuff doesn’t fade in. No fancy alpha-blending or object fade shenanigans. The tufts of grass just rise right out of the bare dirt. I’ve watched it happen and I still don’t believe it works. I would expect it to be this confusing sea of motion, like bugs on the edge of your vision. But instead this stuff pops right up in front of you and it’s pretty much seamless.

Let’s try it.

One of the key things is that we want to stagger the appearance of grass. We don’t want this wall of foliage all rising up along a clearly defined boundary. So I make three meshes. One is sparse. It covers only ¼ of the ground, but these isolated tufts reach (say) about thirty meters into the distance. Then I’ve got another mesh that covers about ½ of the ground, but it only reaches (say) twenty meters. Then finally the last mesh fills in the remaining gaps, and it only reaches ten meters or so.
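For concreteness, here's one way the staggered layout might be built, sketched in C++. This is my own guess at an implementation, not Shamus's code: the function names, the one-unit grid spacing, and the hash-based cell selection are all assumptions. The coverage fractions and radii are the ones from the paragraph above.

```cpp
#include <vector>

struct Tuft { float x, z; };

// Cheap deterministic hash so a given grid cell always makes the same
// keep-or-skip decision, no matter which frame we're on.
static unsigned CellHash(int x, int z) {
  unsigned h = (unsigned)x * 73856093u ^ (unsigned)z * 19349663u;
  h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
  return h;
}

// Build one of the three meshes: keep roughly `coverage` of the grid
// cells out to `radius` units from the origin. (The mesh gets
// re-centered under the camera every frame, so we build it around 0,0.)
std::vector<Tuft> BuildGrassMesh(float coverage, float radius) {
  std::vector<Tuft> tufts;
  int r = (int)radius;
  for (int x = -r; x <= r; x++) {
    for (int z = -r; z <= r; z++) {
      if ((float)(x * x + z * z) > radius * radius) continue;
      // Keep roughly `coverage` of the candidate cells.
      if (CellHash(x, z) % 100u < (unsigned)(coverage * 100.0f))
        tufts.push_back({(float)x, (float)z});
    }
  }
  return tufts;
}
```

The three meshes from the text would then be `BuildGrassMesh(0.25f, 30.0f)`, `BuildGrassMesh(0.5f, 20.0f)`, and `BuildGrassMesh(1.0f, 10.0f)`. (In this simplified version the near mesh just covers everything; the real thing only needs the leftover cells.)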

You can visualize their distribution like this:


Not to scale. Red is the sparse mesh, blue is the medium one, and green is the final one to fill in the gaps. The default grass mesh is just a cluster of white polygon X’s[1], all resting at the same height, all exactly one unit tall. I add this texture:


Now we make another little shader. Every frame, it grabs this mesh and shifts it so that it’s always under you. In one of my surface textures I’ve packed some data for this shader to use. One field says “draw your grass texture from this region of the texture”. Another says, “Adjust the panel height to be height N.” It also draws from the color map I made earlier in the project to color the grass, and uses the heightmap to move the grass up to match the terrain.

The result?


Yeah. The colors are kind of random. I was trying to make sure it was drawing the proper colors from the right parts of the texture, and the best way to do that was to make the color variations drastic.

Note the variations in height and texture. We’re just drawing the same three grass meshes over and over again every frame. So no sorting. No alpha-blending. No worrying about draw order. We don’t need to edit the mesh when the player moves around, or add or remove tufts of grass if they go to someplace that has more or less foliage. We’ve offloaded everything onto the GPU. The CPU is just throwing static models at the graphics card and then taking a nap, because there’s nothing to do.

So what happens when you’re someplace that doesn’t need grass? As it’s rendering, it looks up in the texture and sees that there’s not supposed to be any grass here. So it scales the grass down so that it’s exactly zero tall. Then it renders it anyway, because vertex shaders are stupid like that. If they were smarter, they’d be slower. I guess it’s sort of like a fly having reflexes an order of magnitude better than a human. Simplicity makes for speed.
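Pulling those pieces together, the per-tuft height logic might look something like this, run on the CPU here for illustration. The zero-height scaling is straight from the paragraph above; the linear fade toward each mesh's outer edge is my own guess at what makes the tufts "rise" as the mesh slides along under the camera.

```cpp
// My sketch of the per-vertex height logic; the linear fade is an
// assumption, not the article's confirmed method.
float TuftHeight(float dist_from_camera, float mesh_max_dist,
                 float height_from_texture) {
  // The surface texture says "no grass here": scale to exactly zero tall.
  if (height_from_texture <= 0.0f) return 0.0f;
  // Fade from full height near the camera to zero at the mesh's edge,
  // so tufts grow out of the ground as you walk toward them.
  float fade = 1.0f - dist_from_camera / mesh_max_dist;
  if (fade < 0.0f) fade = 0.0f;
  if (fade > 1.0f) fade = 1.0f;
  return height_from_texture * fade;
}
```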

Rendering a zero-tall tuft of grass isn’t a big deal. It ends up drawing a couple of lines, which are usually tucked under the terrain anyway.


I’ve mentioned before that there are two major kinds of shaders: vertex shaders and fragment shaders[2]. (I would love to know why they’re called “fragment” shaders in technical documents and not just “pixel shaders”. The latter is much more descriptive and easier to grasp. This is probably some bit of nomenclature that makes more sense if you’ve spent the last decade fooling around at the driver and hardware level, and not on the software level where I usually tinker.) A vertex shader takes the corners of a polygon and puts them into place for rendering. The fragment shader then fills in the pixels.

Let’s say I’ve got a couple of shaders. The vertex shader figures out where each vertex ends up on screen. Then the pixel shader comes along and fills the space defined by that polygon. And because I’m not feeling really ambitious, let’s say all my pixel shader does is output blue pixels.


Nitpick shield: Yes, this would actually be two triangles and not a quad, but I’m trying to keep this visually simple.

The vertex shader would run four times, once for each vertex. The pixel shader would run… I dunno. I’m not counting those blue pixels. Let’s just say “lots”. Obviously the GPU spends a lot more time chewing on pixels than shuffling vertices. In the long run, processing those zero-height grass tufts just isn’t a big deal. They come down to a few vertex transforms and (essentially) a single line of pixels. It really is faster to mindlessly attempt to draw everything than to spend time avoiding drawing stuff we don’t want.
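To put rough numbers on that, here's a back-of-the-envelope counter for the quad example. The 200×100 size is a made-up example, and this simplification ignores overdraw and multisampling and counts the quad as four shared vertices rather than two separate triangles.

```cpp
// Back-of-the-envelope shader invocation counts for one screen-aligned
// quad. (My simplification: no overdraw, no multisampling.)
struct Invocations { long long vertex; long long fragment; };

Invocations CountForQuad(int width_px, int height_px) {
  Invocations n;
  n.vertex = 4;                                  // one run per corner
  n.fragment = (long long)width_px * height_px;  // one run per covered pixel
  return n;
}
```

A 200×100 quad runs the vertex shader 4 times and the fragment shader 20,000 times, while a zero-height tuft covers (essentially) no pixels, so its fragment cost rounds to nothing.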

Maybe I should add another bunch of polygons for flowers:


That’s pretty.

We’re still not taxing the hardware at all. It’s not even close. And having these flowers magically pop up out of the ground thirty meters away is fine. Not distracting at all. (Will this trick work just as well in VR, I wonder?)

This was an educational project, but not quite as illuminating as I’d hoped. Mostly I became aware of just how far behind my knowledge is. I still don’t have a good feel for performance bottlenecks. While this project was a lot more GPU-intensive than my usual stuff, it’s not really hitting the hardware like modern games do.

Let’s see how we’re doing for Oculus VR support:


To be clear, that’s not a real VR image. I just drew the scene twice with an arbitrary camera offset to make sure I could stay at 60FPS. I don’t think the image would work with 3D gear.

In any case, I seem to have hit my goals without learning everything I wanted to know. I’ve got a better grip on shaders than before I started, but I’m still not pushing the hardware. I still haven’t used a geometry shader. I still have a lot of gaps in my understanding. In fact, I think this project just made me more aware of them, so I feel like I know less.

Well, this is what I get for sticking to my comfort zone. I think I’m going to try again with something more ambitious.

Okay, not more “ambitious”. More “not based on old-school heightmap terrain”. Which is ambitious from a certain pathetic, slacker, can-barely-be-arsed point of view. I want to mess around with some modern lighting and shadowing techniques. I think we may even do bump-mapping. However, this kind of thing is usually done in the context of a heavy-duty modern-day engine. Photorealism and such. I don’t want to have to make an Unreal Engine-class editor just to make some content to work on. What I need is a way to generate a lot of content really fast.

Ah, of course:


I still have the code left over from project Octant.

So next time we’re going to take this heap of cubes and do something modern with them.

I’ll leave you with a fun fact: My projects always have different names in the code than they have here on the blog, and are usually named with really stupid and awkward acronyms. I don’t know why. I think I hate naming projects.

Internally, Octant was called WoD – World of Dig. This one was called “Song” – Same Old Nature Generator. This project is called “Unearth”, because I wanted a synonym for “dig” and I wasn’t in the mood to spend much time thinking about it.

So next time we’re going to do more goofing around with shaders, only this time with blocks. (I considered using marching cubes. They’re more interesting to look at, but they’re a pain to texture and I’m here to play with shaders, not texture projection theory.)




[1] That is, two upright panels, like a pair of intersecting billboards.

[2] The third type is the geometry shader. I’ve been looking for an excuse to do something with them. More on this later.


66 thoughts on “Frontier Rebooted Part 7: What Have We Learned Today?”

  1. Jnosh says:

    IIRC, it’s fragment shaders and not pixel shaders (in OGL; DX calls them pixel shaders), since the output is not (always) a pixel.

    For example if you are doing multi-sampling, the final pixel “value” is determined from multiple fragments. Also the fragment shader does more than just produce a color output for a pixel. It can produce no output at all and of course also write to other buffers (depth, other texture buffers, …).

    But someone who knows this stuff better should feel free to correct me if I’m wrong :D

    1. lethal_guitar says:

      Yes, basically – “fragments” are like pixel “candidates”: One fragment might completely define a pixel, it might contribute to it, or it might not get used at all. For example if you happen to draw something in the background first, this will generate fragments, which will then be overdrawn later by something in the foreground etc.

      So whereas there is only one pixel at a specific screen coordinate, there can be multiple fragments, and the shader has to run for each of these. So technically, “fragment shader” is a more accurate term.

      1. Knut says:

        Yes, according to Wikipedia, “a fragment is the data necessary to generate a single pixel’s worth of a drawing primitive in the frame buffer”.

        So technically, OpenGL is correct, while DirectX is more clear (I guess? I don’t really mind to be honest, I’m pretty used to calling it fragment shader)

      2. Volfram says:


        And here I was under the impression that Vertex shaders manipulate existing vertexes, Pixel shaders alter the pixels after they’ve been drawn to a backbuffer, geometry shaders fabricate new vertexes from algorithms, and “Fragment” shaders were a catch-all term for “all of the above plus GPGPU.”

        Merits further investigation.

    2. Piflik says:

      A Fragment is what comes out of the Rasterizer Stage in the Programmable Pipeline.

      The Rasterizer gets a stream of Vertices (either from the Vertex Shader or from the Geometry Shader), and interpolates the data in these Vertices barycentrically across each triangle. Then it produces a Fragment for each pixel on the screen whose center-point lies inside that triangle. This Fragment already has the final pixel-coordinates on the screen, but additionally it has a depth value and all other data that your Vertex Shader’s output had (like texture coordinates, vertex normals, vertex colors and whatever else you consider important). This data is then used by the Fragment Shader to produce a colored pixel on the screen.

      Vertex Shaders work with Vertices, Fragment Shaders work with Fragments.

      1. Volfram says:

        This site needs upvotes.

        1. Worthstream says:

          This comment would merit an upvote!

  2. Stephen says:

    The three different circles of gradually thickening grass are probably key to not noticing it. I definitely notice the groundcover grow-in in SWTOR, but that’s possibly because they just have one radius to have it come in. So it’s very clear in any open areas that you’re running around centered in a thirty-meter bubble of grass.

    1. Smejki says:

      One of the key tricks is to not use flat terrain. It is very easy to spot the grass bubble in such scenarios.

    2. Paul Spooner says:

      It would help if there were two different colors of ground used. One for distant terrain that blends the dirt color and the grass color, and one for close terrain where the colors are “separated” onto the different geometry. This would really help the illusion that the grass is there all the time, and not just crowding around the player.

      1. Paul Spooner says:

        Ooh, you could go one further!
        Make the terrain in the distance sit higher by half the height of the grass, and color it the mixed color of the grass and the underlying terrain.
        Then, as you move closer, the grass moves up, and the terrain moves down, leaving the average occlusion height the same. You’d need to dynamically re-shade the ground texture, but a pixel shader would probably work fine for that?

        I imagine this combined texture-and-geometry approach could work really well, and result in pretty convincing LOD transitions.

  3. RandomInternetCommenter says:

    While it doesn’t bother me too much, I’m definitely distracted by grass suddenly showing up. In Skyrim it was obvious, with whatever default draw distance the game autodetected (on a fairly decent system). It could also be because I turn Depth of Field off, and perhaps this is why developers are so insistent on including, and sometimes even forcing, the feature?

  4. Daemian Lucifer says:

    Shader Learning And Creating Kool Environment Rhombuses.

  5. Spammy says:

    The image is too dark for me to make out if it works as a stereopair. It probably doesn’t. I can get the images to overlap but I can’t see if they are producing the right effect or not due to the darkness and lack of detail.

    1. Paul Spooner says:

      It’s not. The cameras are offset laterally for a “focus to infinity” effect, but they are also offset along the axis of viewing. Basically, when you get the images to align by focusing “beyond” the screen, you see that one “eye” is viewing much further “forward” than the other.

      You can compensate for this by turning your head to the left (so that your actual eyes are correctly staggered in the screen-normal axis) but this puts your eyes closer together on the lateral axis, which requires your focus to go beyond infinity. You can compensate for that by making the images smaller, but that makes it even harder to see…

      While these issues aren’t impossible to overcome, my brain started to question my motives before I could get it right.

      1. Fizban says:

        “which requires your focus to go beyond infinity”

        Just thought this line was awesome in any context

      2. Peter H. Coffin says:

        *nod* There’s a lot of oddness that suggests this image would be very difficult to make sense of as a stereo pair. Since offset (or parallax, if you get technical) varies with apparent distance, “nearby” points should not be overlapping in the same way as “distant” points. Looking at the pair, we see the shoreline in the same spot in the lower-left corner, and the glowing bit (the sun, probably) is also in the same spot from the left and right edges of frame. But things in the mid-distance ARE different; there’s a steep slope off to the right and a very gentle slope in the left image, and there’s one probably-nearish round hill on the left of the right image, and a much skinnier hill and a second hill. That’s okay for a LARGE amount of parallax that includes some rotation of the object as well (like we have an oblong hill), but it’s in the mid-distance, which messes the essential scale up. And I suspect that the sun positioning is an artifact of it not being really rendered per se, but rather positioned at a particular azimuth and elevation regardless of where the camera is. Which means it’s “at the window” no matter what.

        I think I’ve babbled a bit about the window before. Stuff that’s exactly overlapping in parallax is “at the window”, which is everything that exists outside of the image. For your eyeballs, that’s your peripheral vision. For a screen it’s the edges of the image. Stuff that’s offset in one direction (in the same direction as the eye-side) is “behind the window” (that is, it’s more distant) and stuff offset oppositely is “in front of the window” (it’s in your lap). And people get very uncomfortable with things in front of the window that are cut off by it. It’s okay for a flower to pop out into your face, but grass shouldn’t.

        For comparison, there’s a stereo pair of a dude on a motorcycle at (parallel view). The close things (the white paint near the manhole) are very close to each other laterally between images while things in the mid-distance (like the fellow with the light blue sweater at the left edge of each image) are offset a bit more from each other, and the white sign on the building behind is offset by the whole width of the sign.

  6. SteveDJ says:

    I want to see how it looks to have TREES sprouting up out of the ground!

    1. ET says:

      My initial guess would be that this could work for trees, but you’d have to be more clever about how you implement it. Like, trees are big enough that they need proper locations in the world map, instead of just here-ish data saying whether a certain location has grass or not. I bet it’d work visually, though. :)

      1. Paul Spooner says:

        Totally! And why stop there? Buildings could do the same thing. Mountain ranges even!

        1. syal says:

          Dungeons could do it with bandits!

          1. rofltehcat says:

            I think most of you guys are just joking because it sounds like the kind of stuff you’d see in late 90s 3D games. But now I kinda want to see a game where everything grows, gets built and decays at supernatural speed while your character is moving at normal speed. So processes that would normally take years would happen in the space of a few minutes.
            So the player would kind of be trapped in time, or a ghost. Those items could also be obstacles, because you surely wouldn’t want to risk a little sapling growing and trapping or even impaling you, or a thick wall being built around you while you weren’t paying attention. The amount of time distortion could also vary from area to area.

  7. RCN says:

    I actually noticed the growing grass pretty early on Skyrim. I really don’t know why, but I didn’t mind. I think it is because I obsess over stuff “popping up” in games, so having it growing was actually nice for a change.

    So… was that actually faint praise at Bethesda’s art direction? Or would this be considered software bottleneck wizardry? (Probably one last new way to squeeze that last bit of processing smokes n’ mirrors from outdated console hardware?)

  8. Paul Spooner says:

    Yay! (I totally called it)

    So, now that you’re moving on to another project, you’re going to release the source code for this one… right?

  9. Abnaxis says:

    Don’t you also have to do fisheye coordinate transforms for VR? Wouldn’t that be a good experiment for using the geometry shader?

    1. Paul Spooner says:

      My guess is he’s waiting on trying vr rendering until he has the proper output hardware. No point trying it if you can’t test to see if it works.

      It does bring up an interesting point however. He tried the dual view window, but he didn’t say what framerates he was hitting. Just that it’s still “really fast”

      Is the engine locked to 60fps? Or do you cut it loose to see how fast it will render?

    2. Shamus says:

      Actually, the Oculus dev kit will do the distortion for you automatically. You just render to a texture, hand it over to the SDK, and it does everything.

      1. Paul Spooner says:

        Whaaaat? That’s amazing, but it can’t be as efficient as transforming the scene geometry directly… can it?

        I assume you need to render with a specific camera zoom level and resolution at least?

        1. Shamus says:

          I think it’s just required. Vertex manipulation is good enough for creating fisheye distortions in a regular game. (Like, maybe the game has an effect for when you’re drunk or something?) But the Oculus distortion is there to correct existing lens distortion. If it’s not pixel-perfect, then everything might wobble. This would be pronounced when you approached a large wall that wasn’t tessellated.

          I guess I’ll find out how true this is later this summer. *fingers crossed*

          1. Paul Spooner says:

            Ahh, of course! You’d need to sub-divide all the geometry to get smooth deformations.

            I wonder how long it will be before micropolygon rendering becomes feasible for real-time applications.

            1. Wouldn’t it be almost really tiny voxels at that point?

              1. Hmm, just looked at wikipedia about micropolygons and saw this “a renderer using micropolygons can support displacement mapping simply by perturbing micropolygon vertices during shading.”

                This means you could add “physical” details to a mountain.

                1. Volfram says:

                  It also devours processing power, which is why you don’t usually see much of it in games these days. (Yet.)

                  Pixar renderers use micropolygons. They didn’t actually do any raytracing until fairly recently. As I recall, I think Cars was the first Pixar movie to include raytraced elements, and even then it was only certain things, like reflections of eyeballs on hoods.

            2. Unbeliever says:

              “Ahh, of course! You'd need to sub-divide all the geometry to get smooth deformations.”

              Am I the only one who “heard” this line in Doctor Who’s voice?

              In the same exact tone in which he’d solve the problem by reversing the polarity of the neutron flow?

              Maybe it was just me… :)

          2. Tom says:

            Do you want to get the DevKit 2? I think they mentioned that they’d prefer 120fps to prevent motion sickness, maybe aim for that :D

            1. ET says:

              Better aim for 240 just to be safe/future-proof! :P

  10. DaMage says:

    So a fun little trick with geometry shaders (especially with simple shapes like cubes or grass planes) is that you can use them to generate simple objects so you only pass a single vertex instead of all 8 that make up a cube.

    So you pass in a vertex that is on the cube, this is positioned with your vertex shader in the scene, then get passed to the geometry shader. You then create the other 7 points that are offset from the vertex you passed in, before now passing this cube shape to the fragment to be rendered.

    Suddenly you have to pass 8 times less data to the GPU for rendering, which, as you pointed out earlier in this piece, is one of the big bottlenecks you are looking to fix.

    1. Paul Spooner says:

      Hmm… how much procedural content generation could you do directly on the video card I wonder?

      1. DaMage says:

        Not much as you do not have access to randomly generated numbers.

        1. MadTinkerer says:

          Maybe not random, but how much non-random procedure is there? In terms of generating geometry according to specific rules?

          EDIT: voxel or vertex or otherwise?

          1. Piflik says:

            Vertex and Voxel are two different things. A Voxel is a Volume Element (similar to Pixel = Picture Element), and Vertex is a point in space.

            You can create cubes from Vertices using a Geometry Shader (and it’s quite simple to do so), but you won’t pass a corner vertex to the Geometry Shader, but rather the center of the cube (the Vertex Shader would not do anything but pass the position of the cube from the Input Assembler to the Geometry Shader). The Geometry Shader would then create the 8 corner vertices (you have to make sure to create them in the correct order, so they are connected correctly into triangles) and calculate their positions and normals (if you want real corners instead of smooth shading, you would need 24 Vertices, 4 for each side of the cube, since one Vertex can only ever have a single Vertex Normal) and pass them to the Rasterizer.

        2. Couldn’t you generate say 4KB of random data and plop that as a texture in the GPU memory and then use that as a random data source?

          1. Paul Spooner says:

            Just what I was thinking. You could even run a twister to increment the texture occasionally. Or use the user input to generate seeds? So many possibilities!

        3. You can totally generate procedural content in a vertex/ fragment shader: for example :)

      2. MadTinkerer says:

        This has me wondering about voxel engines and cellular automata in particular. Minecraft uses some relatively simple cellular automata for trees, but you could, for example, do real-time vine growth in Voxlap (although I don’t have the technical know-how to do so and it seems all of the commercial Voxlap engine games are focused on chunky Minecraft cubic-meter-sized voxels so they wouldn’t bother trying).

        Maybe… some kind of octant tree? I don’t know enough tech to talk about this properly.

        1. Paul Spooner says:

          Really, you could store your entire relevant game data-set in the frame buffer. Results in massive parallel processing, and you get leak-proof code on par with functional-programming discipline. Would probably require a pretty tightly coded fractal parametric generation system… but I know a guy who might be able to pull that off.

          1. You aren’t talking about THE Mr. S.Y. are you?!

    2. ET says:

      I think it would be simpler to use vertex arrays (or whatever they’re called) for simple objects. I mean, then you’re just dumping a regular, boring, simple set of vertices, same as you would normally, but then the GPU dumps it into its memory. After that, you just say “draw those same vertices, but at this point in space”.

      The pseudo-random stuff discussed above would be a good use of geometry shaders, but if it’s just static data, it’d be simpler (and less work) to use the existing tools/pipeline.

      1. DaMage says:

        That is true. By using a VBO (vertex buffer object) you can pre-store the object in memory, but then for each object you want to draw, the CPU has to individually call the drawing method… which is not what you want.

        With the geometry shader you can pass in 1000 points, then on the GPU these all become cubes or whatever and are drawn. Doing it with just VBOs means that the CPU has to tell the GPU 1000 times to draw a cube.

        The aim when rendering is always to try and get the GPU to do as much as possible in a single big hit. The more times you need to go back to the CPU, the slower it’s going to be.

    3. Richard says:

      This is exactly what geometry shaders are for.

      I’m using them in a couple of quick’n’dirty programs to create billboard geometry – pass an array of points, vertex transforms them into screen-space and the geometry shader then turns the points into billboards.

      One complication is that the vertex shader runs first, followed by the geometry shader and finally the fragment shader.

      In one of my applications the vertex shader actually does nothing whatsoever, the transformation from object/world to screen space is done by the geometry shader instead.

  11. Neil Roy says:

    I wonder what it would look like if you made the grass get smaller, as opposed to shrinking into the ground. That is, mimic the effect of objects getting farther away from you: they simply get smaller until, at some point, they vanish from view, as opposed to growing out of the ground.

    This way it would appear as if it is just far away perhaps? I guess it would get smaller in both directions, along both planes. I assume the grass currently only changes size along one plane, vertically?

  12. Might be worth reading up on VR & bump-mapping before implementing it (ok, it’s not exactly a horror to implement but if you’re going to have to not use it because it doesn’t work in VR then it seems a bit of a waste to add in the first place).

    There is some benefit to it (from the little I’ve looked into this case; hopefully DK2 will make it easier to do good research in a month or two), but you’ve got to be careful to not accidentally make everything look really fake.

  13. kdansky says:

    About grass. Very recommended for programmers. The other articles are illuminating too.

  14. Richard Ingram says:

    For the VR rendering, does each image render the grass growing in separately? If so will that result in grass appearing in one image before the other if you are moving sideways, or is the offset too small for that to be noticeable?

    1. Hopefully the grass is physically in the same area worldwise (placement/orientation) and should give the correct stereoscopic depth behavior.

      Also for stereoscopic rendering both views MUST be of the exact same slice of time otherwise you’ll have nasty shit like flickering shadows and whatnot (many AAA games suffered/still suffer from that).

      1. Richard Ingram says:

        but using the system as I read it means that the position of the grass is dependent on the camera position, using the three meshes. I think I see now how they work. There are three global meshes, which together fill the environment, but grass appears based on the mesh positions, the mesh number and the distance from the camera. The global meshes are static, so you don’t get the grass moving as the camera moves. Is that right? However, you might still have the problem where a particular tuft of grass appears in one eye but not in the other, due to the distance between camera positions. Is this something that needs to be taken into account, by having a position that defines the landscape and rendering the frames from positions offset from that? Or is the camera-to-camera distance so small that any differences are unnoticeable in practice?

  15. Shamus I stumbled across this, might be worth a read
    He’s got a lot of OpenGL 4 and shader stuff too, documenting his learning of OpenGL 4.

  16. A tip on 2D stuff (like a GUI) when working with VR, Shamus is probably aware of this but others might not be.

    Make sure that any 2D GUI is rendered/projected/whatever you call it physically in the 3D “world”, if you just overlay it you will cause “double vision” or nausea.

    If the GUI is “in the world” it may look odd hovering in front of you (if using VR), but both eyes will see it as if somebody was holding up a card in front of your real eyes.
    How far away should the GUI be? I have no idea.

    A bonus though is that you could allow players to choose if both eyes should see the same info or if they want they can spread some info between the two eyes or just show the info in one eye. (ala Google Glass)

    Why not automatically spread GUI elements between eyes? Well, people with no sight or poor sight in one eye are still able to use VR; they just won’t see stereoscopically.

    Many people forget that VR does not always equal stereoscopic; monoscopic VR is still just as immersive if the scene covers your field of view and is combined with head tracking.

    Myself, I consider head tracking vital for VR. Without head tracking you basically just have a screen strapped in front of your eyes, which is cool, but I would not call it VR, just as a movie is not a game.

  17. rofltehcat says:

    About grass borders:
    No Man’s Sky was pretty popular this E3 because it simply looks incredible:
    (It also sounds like something right up Shamus’ alley with all the procedural generation etc., imo also a great example of style vs. photoreal-brownism.)

    But what irked me are the grass borders close to the water. The grass just… stops.
    There is a ground texture below the grass, and maybe the effect is created by that ground texture reaching out from the grassy area. Otherwise, grass normally doesn’t just go from being high at one point to nonexistent at the next. I’d expect it to be lower at the edges because of increased trampling, grazing, wind, and overall worse conditions (sandy ground, maybe wet roots). I’d also expect a few tufts to grow in the sandy area, reaching in there a little instead of growing along a hard edge. If there is an edge, I’d expect an erosion border, with the ground above stabilized by the grass and below that a kind of steep drop (even if it is just 5 cm/2 inches) where the water has removed some of the ground.
    But it also shows another problem: there are no roots. The grass just grows out of the ground without any transition near the base, where the growth should be denser than just a bare stem with hardly any leaves. Some of the grass is also floating in the air.

    Yeah, it is alien planet grass, but it still looks kinda off. I absolutely love the look of the game, but sometimes noticing small details can disturb the impression. I’m sure they’ll change the grass a little to make the borders look better and fix the floating grass, but things like erosion borders might be hard to do in procedurally generated terrain, especially when the grass is just such a small part of the game.

    Anyways, what would be a sensible way to fix:
    -floating/rootless grass?
    -grass area edges?
    -erosion borders?
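    For the grass-edge question, one plausible fix is to taper blade height toward the boundary rather than cutting it off at full height. A hypothetical Python sketch using a smoothstep falloff over an assumed two-meter fade width (the function name and numbers are invented for illustration):

```python
def edge_height_scale(dist_to_edge, fade_width=2.0):
    """Scale blade height by a smoothstep of the distance to the
    grass/sand boundary, so blades shrink toward the edge instead
    of stopping abruptly at full height."""
    t = max(0.0, min(1.0, dist_to_edge / fade_width))
    return t * t * (3.0 - 2.0 * t)  # classic smoothstep curve

print(edge_height_scale(0.0))            # 0.0 -- no grass right at the edge
print(edge_height_scale(2.0))            # 1.0 -- full height 2 m inland
print(round(edge_height_scale(1.0), 2))  # 0.5 -- half height midway
```

    A few stray tufts could also be scattered past the boundary with a low random probability, which would break up the hard line the comment describes.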

    1. Paul Spooner says:

      It certainly does look incredible. Sadly, from what I’ve seen, they are going mostly for appearances instead of solid simulation. I also haven’t seen anything about modding support.
      We’ll have to see, of course, and it’s certainly a huge step in the right direction.
      Still, I can’t help but feel that, far from being true to the title, this is very much HELLO GAMES’ sky, and they intend to keep it that way.

  18. Jabrwock says:

    The problem I find with the “grow the grass” trick is that 9/10 the underlying surface does not match the grass texture, so it’s VERY obvious where your grass is no longer being drawn.

    In the Skyrim example, you assume the hill has no grass, because it’s covered in the same texture as the ground slightly in front of you, yet clearly has no grass. Only once you approach does it pop into existence.

    What you need is a LOD for textures that will have grass on top of them, so that when they do draw the grass, they look like deadfall (slightly yellower), providing a convincing “ground level” under the populated grass objects. But past your draw distance for the grass, it changes the texture to a combination deadfall/grass (sort of in-between the yellow bottom layer and the greener sprites), so that the dividing line isn’t so jarring.
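    The texture-LOD idea above could be approximated by fading the distant ground color toward a dirt/grass mix once you pass the reach of the 3D grass. A hypothetical Python sketch of that blend; the distances and colors are invented for illustration:

```python
def blend_ground_color(ground_rgb, grass_rgb, dist,
                       grass_reach=30.0, fade=10.0):
    """Past the grass draw distance, fade the bare-dirt texture
    toward a dirt/grass mix, so the line where 3D tufts stop
    being drawn isn't a hard color boundary."""
    t = max(0.0, min(1.0, (dist - grass_reach) / fade))
    return tuple(g * (1.0 - t) + c * t
                 for g, c in zip(ground_rgb, grass_rgb))

dirt = (0.4, 0.3, 0.2)
grassy = (0.3, 0.5, 0.2)
print(blend_ground_color(dirt, grassy, 25.0))  # inside reach: pure dirt
print(blend_ground_color(dirt, grassy, 40.0))  # fully faded: grassy mix
```

    In a real renderer this would live in the terrain fragment shader, but the arithmetic is the same.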

    A game I beta-tested over a decade ago tried a trick where they had two kinds of grass. One, for close up, was a mesh of sprites: each batch of grass was made up of 6 2D sprites that crossed in the centre. It looked a bit silly when you were moving around a tuft, but from a short distance it looked reasonably good. The other was for longer distances, and was effectively a box over the ground, with grass textures around the outside. So from a large distance it was cheaper to draw over all the ground, but it still hid things short enough to be hidden in tall grass. Then as you got closer it became easier to see “in between” the tufts, giving you a chance to spot the objects within the grass.

  19. Abnaxis says:

    If you stand in one place and spin around, will grass polygons out in the farthest section (the red grid) pop in and out of existence like a wave, as the drawn squares change which section they overlap with? It seems like that would happen the way you’ve described it, and it sounds distracting…

    EDIT: Oh wait, I bet you don’t rotate the grid with the view. Derp…

    Still it seems like this solution would look weird if you are moving laterally with respect to your view. You’d see waves of grass popping in and out in the distance if you decided to strafe.

  20. Do you only move the grass to the closest integer world-space coordinate (or in steps according to the texel size of the grass amount/colour textures’ resolution) relative to the camera? I imagine if it was continuous, interpolating or point-sampling the textures, the shifting circle of grass attached to your feet would be pretty obvious.
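    The snapping this comment describes would look something like the following in Python. The one-unit step standing in for the texel size is an assumption:

```python
import math

def snapped_origin(camera_x, camera_z, step=1.0):
    """Snap the grass grid's origin to the nearest step in world
    space, so tufts keep fixed world positions as the camera
    moves, instead of the whole field sliding along with your
    feet (which would make the moving circle of grass obvious)."""
    return (math.floor(camera_x / step) * step,
            math.floor(camera_z / step) * step)

print(snapped_origin(12.7, -3.2))       # (12.0, -4.0)
print(snapped_origin(12.7, -3.2, 0.5))  # (12.5, -3.5)
```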

  21. Roxor says:

    I just noticed something: The grass density diagram looks a lot like a 2*2 ordered dither.

    Perhaps an alternate way of implementing the grass density would be to use an ordered dither of distance from the player. Dither the values 0 and 1 for presence and absence of grass, and if using larger ordered dither kernels, you could have a lot of bands for the grass getting added.

    With a 4*4 dither kernel, you could have 16 levels of grass, which could make the growth as you approach pretty smooth. Might also give you some room in how far out you can still have physical grass, too.
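    The ordered-dither idea can be sketched directly: index a 4×4 Bayer matrix by grid cell, and a tuft appears once the normalized camera distance falls below that cell’s threshold, so the 16 levels fill in gradually as you approach. A hypothetical Python sketch with a made-up 30 m maximum reach:

```python
# Standard 4x4 Bayer ordered-dither matrix, values 0..15.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def tuft_present(cell_x, cell_z, dist, max_reach=30.0):
    """A tuft appears once the normalized distance drops below
    its cell's dither threshold; nearby cells all pass, distant
    ones pass only if they hold a high matrix value."""
    threshold = (BAYER4[cell_z % 4][cell_x % 4] + 0.5) / 16.0
    return (dist / max_reach) < threshold

print(tuft_present(0, 0, 0.0))   # True  -- at your feet, every cell passes
print(tuft_present(0, 0, 5.0))   # False -- low-threshold cell drops out early
print(tuft_present(0, 3, 25.0))  # True  -- highest-threshold cell persists
```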
