The last several posts have been a sort of lead-up to this one. It probably doesn’t look that way, but all the fussing I’ve been doing is about to pay off. Moving from Qt to Visual Studio broke things. Moving to marching cubes broke things. Adding shaders broke things.
Since then I’ve been adding technology and fixing things, and as those pieces come together we’re finally getting something worth looking at. But we’re getting ahead of ourselves. First, I need to backtrack and explain a few items that I glossed over in earlier entries.
The big problem is that it is taking crazy, super, insano, bonkers long to generate terrain. It’s about a second per node. Several people, including fellow code-hobbyist Michael Goodfellow, suggested that the octree is really expensive to use. So let’s disable that and see if we get any speed gains.
And by “disable” I mean, “write an entirely different, simpler storage system for my data”. No trees. No dynamic allocation. Just one huge, honkin’ block of memory where you can get anywhere in a single lookup. And let’s see how the new system performs…
Wow. Just over a tenth of a second. Almost ten times faster.
Click on octree.cpp. Hit delete.
Now my project is probably a thousand lines smaller and runs ten times faster. And of course now I have a project called “octant” that is no longer related to octrees. So that’s hilarious. I think I should make this into a game called “Shoot Guy Online”, which is a single-player game about a woman who fights with a sword.
I’m not really that sad to see the octree go. The octree made it super-fast to do things that were rare, and super-slow to do things that were common. This is like a sprinter buying shoes that make her walk three times as fast and run ten times slower.
I’d wanted to use it so that I could fit a really huge data set into memory, and see how far out I could get my draw distance. It became clear a few days ago that memory wasn’t going to be my chief hurdle. I was going to run into throughput problems long before I ran into memory problems, even though every doubling of distance results in eight times as much memory usage. A kilometer view distance is just not going to happen within the context of this project. (Oh, it’s possible, but it would require super-long loading times, or multi-pass block generation where the map would fill with increasing levels of detail, or some other trick to manage the sheer volume of stuff I have to process. That could take days to investigate and it might not pan out.)
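The "eight times as much memory" figure comes straight from the geometry: the loaded region is a volume, so it grows with the cube of the view distance, and doubling the distance multiplies the cell count by 2³ = 8. A back-of-the-envelope sketch, assuming one byte per cell (my assumption, purely for illustration):

```cpp
#include <cstdint>

// Memory needed for a cube of terrain centered on the player,
// at one byte per cell. Doubling view_distance_cells multiplies
// the result by eight, since all three dimensions double.
uint64_t TerrainBytes(uint64_t view_distance_cells) {
  uint64_t side = 2 * view_distance_cells;  // cube extends distance in each direction
  return side * side * side;                // one byte per cell (assumed)
}
```

At one cell per meter, a 512-meter view distance already works out to a gigabyte under this assumption, which is why a full kilometer is off the table for this project.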
People have been asking about the surface normals. If you remember, those are vectors (think of them like arrows) that say which way a vertex is pointed.
Now that normals are working again, I can use lighting.
I’ve also got a super-cheap shadowing system that will darken a block if there is another block anywhere above it. It’s strictly pass/fail – the “sunlight” doesn’t spread. I’m not sure I want to change it. This looks pretty good as-is.
Now that we have some free CPU cycles, let’s see if we can’t make more interesting terrain. Right now I’m just making rolling hills with a maze of random tunnels underneath. The above screenshot is from a rare location where the terrain dips below the tunnels and exposes them. I’m not even using the noise system very much yet. No overhangs, no mountains. Cliffs are massive boring walls of smeared pixels. Yawn.
So let’s just generate some random pockets of noise and use it to remove volumes of terrain. How does that look?
That’s a lot more interesting. This more complex topography is really exposing the shortcomings of my texture mapping system. We’ll deal with that in a bit.
Let’s add some different types of terrain. Maybe some dirt would look good.
I like that. And the tunnels look good too. They looked very mechanical in the cube world, but here in blob world they look like natural caves. It’s fun to run through them to go from one open area to the next.
You know what we need next? Let me grab the grass-scattering code I used in Part 3 of Project Frontier.
Yeah. Okay, enough fun. Now that we have a shader, let’s fix this texture mapping problem.
The problem: I don’t just want to smear texture around randomly. If I put a brick texture down, I want to be able to map it all over these lumpy hills and still have the bricks align properly. But how can we do this? We’ve got a 2D square that we’re trying to project onto complex 3D shapes.
Well, there are three ways to apply patterned textures to a scene so that you can be sure they’ll line up. First, I could project the texture along the x-axis, so that walls facing east / west look good:
You can see this makes the x-facing walls look great, at the expense of making everything else look like a big smear.
Also: I should note that these square trenches are part of my testing grounds, not my terrain system. I created these for exactly this sort of situation.
Another way to map things is to project the texture along the y-axis.
I don’t think you’d call this an improvement, unless you happened to be looking at a north-facing wall and you didn’t have any plans to turn your head.
Finally, I could project from above:
But again, this just changes who gets screwed. What we need is to switch between these systems. If only there was some way to know which one we needed…
Oh right! Those surface normals we just got done fixing. To make the above image, I just had it color the walls according to which way the normals were pointing. (Programmer talk: Actually, I’m using the absolute values of the normals, not (vector / 2) + 0.5 like you do when converting normal maps to color data, because this is closer to how I’ll be using them.)
The more something points along the x axis, the more red it has. The more along y, the more green. The more up or down, the more blue. Because our normals are normalized (remember from above when we resized them?), the squares of red, green, and blue always sum to 1. So, red=1, green=1 is impossible. This means we have a nice, clear indication of which way a particular bit of wall is pointing.
So all I need to do is pick my projection method based on which axis is the predominant one. (Which color has the highest value.)
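Here is a sketch of that selection logic, written as plain C++ rather than shader code. It assumes a z-up world (matching the blue = up/down convention above); the struct and function names are illustrative, not the project's actual shader:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Pick the texture projection from the surface normal: whichever
// axis the normal leans toward most decides which two world
// coordinates become the texture coordinates. Done per-pixel in a
// shader, this switches projections cleanly mid-triangle.
Vec2 ProjectUV(const Vec3& world_pos, const Vec3& normal) {
  float ax = std::fabs(normal.x);
  float ay = std::fabs(normal.y);
  float az = std::fabs(normal.z);
  if (ax >= ay && ax >= az) {
    return {world_pos.y, world_pos.z};  // wall facing mostly east/west
  } else if (ay >= az) {
    return {world_pos.x, world_pos.z};  // wall facing mostly north/south
  } else {
    return {world_pos.x, world_pos.y};  // floor or ceiling: project from above
  }
}
```

This is the standard trick usually called triplanar mapping; the next section explains why it only works cleanly when the decision happens in the pixel shader rather than at the vertices.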
Perhaps you’re wondering why we needed shaders for this. Consider:
That vertical edge would be a bad spot. On the left edge of those polygons they’re using one projection system, and on the opposite edge they’re using another system. The texture would go crazy at this point. Maybe it would repeat sixteen times to try to reconcile the discrepancy. Maybe it wouldn’t repeat at all, leaving a smear. The corner above would be even worse: Every vertex in that triangle would be using a different projection system. That triangle is doomed.
(We could also solve this if we didn’t have the triangles share verts. If I wanted to pump three times as much data through our graphics pipeline I could give each and every triangle its very own copy of the verts to use its own projection system so that it didn’t have to share. Also, I could do ten times as much programming and make it only duplicate verts when it’s required according to the shape of who am I kidding it’s not even worth typing the rest of this sentence.)
This happens because texture mapping values are usually set at the corners of a triangle and then averaged across the face. But with a pixel shader we can use whatever projection system we like, at any time. The projection can change on a per-pixel level, instead of per-vertex.
So let’s see how it looks. I’ll put a brick texture on the world to make sure the corners all match…
That texture is pretty much our worst-case scenario, and it seems to hold up pretty well, even on those round edges.
So that’s the week. Going forward: There’s an infuriating bug that I can’t nail down that’s resulting in seams between nodes. (You might see really bright lines in some of the above screenshots.) It’s a bit random, but I need to nail that one down before it drives me nuts.
Also, I’ve got speed issues again. Dropping the octree freed up a ton of horsepower, and then this new terrain system ate it all. At a full sprint, the program can just barely generate stuff fast enough to keep polygons under my feet. It takes a good twenty seconds to fill in a scene like this. I need to climb down into the guts of this thing and see where the cycles are going and if there’s anything I can do to help that. I’m going to be working more on terrain, and I don’t want to have to wait twenty seconds every time I make a change.