So grass is lightweight but high poly and you need lots of it blah blah blah. Let’s make some.
The trick I want to try here is to just have the grass rise out of the ground as you approach. Which sounds ridiculous. Wouldn’t that just make it look like grass is magically growing all over the place? Wouldn’t all that motion attract your eye and be incredibly distracting? Wouldn’t it just be easier to have crap fade in?
If someone had suggested this trick to me I would have called them a moron and made them stay late to defrag all the hard drives in the office as punishment. But it works. It works so well that you’ve likely seen it happen without noticing it. It works so well that I looked at it for hundreds of hours before I noticed. Remember this little game?
Yeah. Skyrim. To be clear: I’ve got the draw distance turned down a bit, which makes it easier to show this off. Also, this might not be representative of the game proper, since I’m running a ridiculous number of mods. (My collection has grown since I compiled that list.) But this is enough to give you an idea of what I’m talking about. Let me zoom in on the center of the screen…
The little hill there doesn’t have any grass. Then you take a few steps forward…
That stuff doesn’t fade in. No fancy alpha-blending or object fade shenanigans. The tufts of grass just rise right out of the bare dirt. I’ve watched it happen and I still don’t believe it works. I would expect it to be this confusing sea of motion, like bugs on the edge of your vision. But instead this stuff pops right up in front of you and it’s pretty much seamless.
Let’s try it.
One of the key things is that we want to stagger the appearance of grass. We don’t want this wall of foliage all rising up along a clearly defined boundary. So I make three meshes. One is sparse. It covers only ¼ of the ground, but these isolated tufts reach (say) about thirty meters into the distance. Then I’ve got another mesh that covers about ½ of the ground, but it only reaches (say) twenty meters. Then finally the last mesh fills in the remaining gaps, and it only reaches ten meters or so.
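Here's roughly how that staggered layering can work, as a sketch. The hash function, the coverage fractions, and the ranges below are placeholder choices of mine, not the project's actual values:

```cpp
#include <cstdint>

// One layer of grass: what fraction of ground cells it covers and
// how far from the camera it's allowed to reach.
struct GrassLayer {
    float coverage;  // fraction of cells this layer claims
    float maxRange;  // meters from the camera
};

// The three staggered layers: sparse tufts reach far, the gap-filler
// only appears up close. Numbers are placeholders; tune to taste.
static const GrassLayer kLayers[3] = {
    { 0.25f, 30.0f },  // sparse: 1/4 of the ground, out to 30m
    { 0.50f, 20.0f },  // medium: 1/2 of the ground, out to 20m
    { 0.25f, 10.0f },  // dense: the remaining gaps, out to 10m
};

// Cheap deterministic hash so a given cell always belongs to the
// same layer no matter where the camera is standing.
static float cellNoise(int x, int z) {
    uint32_t h = (uint32_t)(x * 73856093) ^ (uint32_t)(z * 19349663);
    h *= 2654435761u;
    return (float)(h & 0xFFFFu) / 65535.0f;
}

// Which layer owns this cell at this distance? Returns -1 for
// "no grass drawn here yet" -- the player isn't close enough.
int layerForCell(int x, int z, float distance) {
    float n = cellNoise(x, z);
    float cum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        cum += kLayers[i].coverage;
        if (n < cum)
            return (distance <= kLayers[i].maxRange) ? i : -1;
    }
    return -1;  // unreachable; coverages sum to 1
}
```

Because each cell hashes to exactly one layer, the layers never overlap, and grass near a given spot always comes from the same mesh as you walk around.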
You can visualize their distribution like this:
Not to scale. Red is the sparse mesh, blue is the medium one, and green is the final one that fills in the gaps. The default grass mesh is just a cluster of white polygon X's (that is, two upright panels, like a pair of intersecting billboards), all resting at the same height, all exactly one unit tall. I add this texture:
Now we make another little shader. Every frame, it grabs this mesh and shifts it so that it’s always under you. In one of my surface textures I’ve packed some data for this shader to use. One field says “draw your grass texture from this region of the texture”. Another says, “Adjust the panel height to be height N.” It also draws from the color map I made earlier in the project to color the grass, and uses the heightmap to move the grass up to match the terrain.
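Two pieces of that are easy to sketch on the CPU side: keeping the mesh snapped to the grid as it follows the camera (so each tuft keeps sampling the same texel instead of "swimming"), and packing per-cell data into a texel. The cell size, the 0–4 unit height range, and the byte layout are my own placeholder choices, not the project's actual format:

```cpp
#include <cmath>
#include <cstdint>

// Keep the grass mesh centered under the player, but snapped to the
// cell grid. If the mesh slid smoothly with the camera, every tuft
// would drift across texels and the grass would appear to swim.
float snapToGrid(float cameraPos, float cellSize) {
    return std::floor(cameraPos / cellSize) * cellSize;
}

// Pack per-cell grass data the way you'd stuff it into a pair of
// texture channels: atlas tile in one byte, panel height in the
// other. Height is quantized over an assumed 0..4 unit range.
uint16_t packCell(int atlasTile, float height) {
    uint16_t t = (uint16_t)(atlasTile & 0xFF);
    uint16_t h = (uint16_t)(height / 4.0f * 255.0f + 0.5f);
    return (uint16_t)((t << 8) | h);
}

// The vertex shader's side of the bargain: unpack the same fields.
void unpackCell(uint16_t texel, int* atlasTile, float* height) {
    *atlasTile = texel >> 8;
    *height = (float)(texel & 0xFF) / 255.0f * 4.0f;
}
```

The quantization loses a little precision (about 1/64 of a unit per step), which is far below anything you'd notice on a tuft of grass.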
Yeah. The colors are kind of random. I was trying to make sure it was drawing the proper colors from the right parts of the texture, and the best way to do that was to make the color variations drastic.
Note the variations in height and texture. We’re just drawing the same three grass meshes over and over again every frame. So no sorting. No alpha-blending. No worrying about draw order. We don’t need to edit the mesh when the player moves around, or add or remove tufts of grass if they go to someplace that has more or less foliage. We’ve offloaded everything onto the GPU. The CPU is just throwing static models at the graphics card and then taking a nap, because there’s nothing to do.
So what happens when you’re someplace that doesn’t need grass? As it’s rendering, it looks up in the texture and sees that there’s not supposed to be any grass here. So it scales the grass down so that it’s exactly zero tall. Then it renders it anyway, because vertex shaders are stupid like that. If they were smarter, they’d be slower. I guess it’s sort of like a fly having reflexes an order of magnitude better than a human. Simplicity makes for speed.
Rendering a zero-tall tuft of grass isn’t a big deal. It ends up drawing a couple of lines, which are usually tucked under the terrain anyway.
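The "scale it to zero" trick works because the mesh is authored exactly one unit tall, so the per-cell height is just a multiplier. A minimal sketch of the vertex math (my own simplification, not the project's actual shader):

```cpp
// Each grass vertex stores unitY: 0.0 at the tuft's base, 1.0 at the
// top, because the mesh is authored exactly one unit tall. The
// per-cell height from the texture is just a multiplier, so a cell
// with no grass (height 0) collapses the tuft flat. No branching,
// which is exactly what a vertex shader wants.
float placeVertexY(float terrainHeight, float unitY, float cellHeight) {
    return terrainHeight + unitY * cellHeight;
}
```

With cellHeight of zero, top and bottom vertices land on the same spot and the tuft degenerates into a line at ground level.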
I’ve mentioned before that there are two major kinds of shaders: vertex shaders and fragment shaders. (The third type is the geometry shader. I’ve been looking for an excuse to do something with them. More on this later.) (I would love to know why they’re called “fragment” shaders in technical documents and not just “pixel shaders”. The latter is much more descriptive and easier to grasp. This is probably some bit of nomenclature that makes more sense if you’ve spent the last decade fooling around at the driver and hardware level, and not at the software level where I usually tinker.) A vertex shader takes the corners of a polygon and puts them into place for rendering. The fragment shader then fills in the pixels.
Let’s say I’ve got a couple of shaders. The vertex shader figures out where each vertex ends up on screen. Then the pixel shader comes along and fills the space defined by that polygon. And because I’m not feeling really ambitious, let’s say all my pixel shader does is output blue pixels.
Nitpick shield: Yes, this would actually be two triangles and not a quad, but I’m trying to keep this visually simple.
The vertex shader would run four times, once for each vertex. The pixel shader would run… I dunno. I’m not counting those blue pixels. Let’s just say “lots”. Obviously the GPU spends a lot more time chewing on pixels than shuffling vertices. In the long run, processing those zero-height grass tufts just isn’t a big deal. They come down to a few vertex transforms and (essentially) a single line of pixels. It really is faster to mindlessly attempt to draw everything than to spend time avoiding drawing stuff we don’t want.
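You can do the bookkeeping for that quad example in a few lines. The numbers are illustrative only; real GPUs rasterize in 2×2 blocks, run helper invocations, deal with overdraw, and so on:

```cpp
// Rough cost model for drawing one screen-aligned quad: the vertex
// shader runs once per corner, the fragment shader roughly once per
// covered pixel. Ignores overdraw, MSAA, and helper invocations.
struct ShaderWork {
    int vertexRuns;
    int fragmentRuns;
};

ShaderWork costOfQuad(int widthPx, int heightPx) {
    ShaderWork w;
    w.vertexRuns = 4;                     // one per corner
    w.fragmentRuns = widthPx * heightPx;  // one per filled pixel
    return w;
}
```

A 200×150 pixel quad costs 4 vertex runs and 30,000 fragment runs. A zero-height tuft squashed down to a one-pixel-tall line costs the same 4 vertex runs and almost nothing else. That asymmetry is why "just draw it anyway" wins.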
Maybe I should add another bunch of polygons for flowers:
We’re still not taxing the hardware at all. It’s not even close. And having these flowers magically pop up out of the ground thirty meters away is fine. Not distracting at all. (Will this trick work just as well in VR, I wonder?)
This was an educational project, but not quite as illuminating as I’d hoped. Mostly I became aware of just how far behind my knowledge is. I still don’t have a good feel for performance bottlenecks. While this project was a lot more GPU-intensive than my usual stuff, it’s not really hitting the hardware like modern games do.
Let’s see how we’re doing for Oculus VR support:
To be clear, that’s not a real VR image. I just drew the scene twice with an arbitrary camera offset to make sure I could stay at 60FPS. I don’t think the image would actually work with 3D gear.
In any case, I seem to have hit my goals without learning everything I wanted to know. I’ve got a better grip on shaders than before I started, but I’m still not pushing the hardware. I still haven’t used a geometry shader. I still have a lot of gaps in my understanding. In fact, I think this project just made me more aware of them, so I feel like I know less.
Well, this is what I get for sticking to my comfort zone. I think I’m going to try again with something more ambitious.
Okay, not more “ambitious”. More “not based on old-school heightmap terrain”. Which is ambitious from a certain pathetic, slacker, can-barely-be-arsed point of view. I want to mess around with some modern lighting and shadowing techniques. I think we may even do bump-mapping. However, this kind of thing is usually done in the context of a heavy-duty modern-day engine. Photorealism and such. I don’t want to have to make an Unreal Engine-class editor just to make some content to work on. What I need is a way to generate a lot of content really fast.
Ah, of course:
I still have the code leftover from project Octant.
So next time we’re going to take this heap of cubes and do something modern with them.
I’ll leave you with a fun fact: My projects always have different names in the code than they have here on the blog, and are usually named with really stupid and awkward acronyms. I don’t know why. I think I hate naming projects.
Internally, Octant was called WoD – World of Dig. This one was called “Song” – Same Old Nature Generator. The next one is called “Unearth”, because I wanted a synonym for “dig” and I wasn’t in the mood to spend much time thinking about it.
So next time we’re going to do more goofing around with shaders, only this time with blocks. (I considered using marching cubes. They’re more interesting to look at, but they’re a pain to texture and I’m here to play with shaders, not texture projection theory.)