By Shamus | May 29, 2012
And now it is time to point and laugh at all the silly things that dum-dum Shamus has done over the past week or so. Let’s start with noise. Remember the image I ended with last week?
There’s something wrong here. Don’t get me wrong, that’s a cool canyon and all. I’m not knocking the canyon. The problem is that there is another canyon right next to this one, running in parallel. Defying all odds, there’s yet another one, very similar, just beyond that one. And another. And…
Yeah. My noise-generating system is broken. It’s not spewing out duplicate data. These canyons are unique. But we’re seeing patterns when we shouldn’t.
Also, the format of the noise makes it really annoying to use. It’s supposedly giving me values between zero and one. As I’ve plugged these numbers into my world-building system, I’ve noticed that it’s all kind of homogeneous. The possible range might be between zero and one, but the actual values it gives me are very, very rarely lower than 0.4 or higher than 0.6. This isn’t really a bug. This is how value noise works, actually. It’s just that it’s not very convenient like this.
So far I’ve designed my world-building code around this clunky mess, constantly re-scaling the noise and fudging the numbers until things look the way I want. But this makes for messy code. If I want mountains that range from zero to fifty, I’ll end up with code that looks like
(noise - 0.4) * 250. Those are what programmers call “magic numbers”. They’re unexplained values floating around in source, and another programmer (probably future-Shamus, programming from the cockpit of his flying car) will look at this and say, “If you want mountains fifty meters high, why are you multiplying by 250? And subtract point four? What’s that all about? What idiot wrote this crap!?!”
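As a sketch of the difference (the constants and names here are my own illustration, not the actual project code):

```cpp
// Hypothetical sketch, not the project's real code. The raw noise in
// practice lands between roughly 0.4 and 0.6.
const float NOISE_LOW  = 0.4f;
const float NOISE_HIGH = 0.6f;

// The magic-number version: why 250? Why subtract 0.4? Good luck, future-Shamus.
float mountain_magic (float noise)
{
  return (noise - 0.4f) * 250.0f;
}

// The same math with the intent spelled out: re-normalize the usable band
// to 0..1, then scale to the height we actually want.
float mountain_clear (float noise)
{
  const float MOUNTAIN_MAX = 50.0f;
  float normalized = (noise - NOISE_LOW) / (NOISE_HIGH - NOISE_LOW);
  return normalized * MOUNTAIN_MAX;
}
```

Both produce identical results, since (noise - 0.4) * 250 is just ((noise - 0.4) / 0.2) * 50, but only one of them explains itself.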
So let’s calibrate:
This is our noise spectrum. When we generate a page of noise values, we should see a little bit of everything. A bit of low red. A bit of high purple. Things can follow a bell curve, but we need to see the full spectrum. If, during the course of churning out the 262,144 pixels for one of the upcoming images, I don’t get a single purple pixel, then it’s so rare that it’s not worth having in the spectrum. If I actually needed something to be that rare, there are better ways to come up with it than sacrificing one-fifth of my noise spectrum.
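A quick way to run this check, sketched with invented names: bucket a page of noise values into five bands (matching the five-color spectrum above) and see whether any band comes up empty.

```cpp
#include <vector>

// Sketch of the calibration check, with names of my own invention:
// split the 0..1 range into five equal bands and count how many
// noise values land in each one. An empty band means a fifth of the
// spectrum is going to waste.
std::vector<int> band_counts (const std::vector<float>& noise)
{
  std::vector<int> counts (5, 0);
  for (float n : noise) {
    int band = (int)(n * 5.0f);
    if (band > 4) band = 4;  // n == 1.0 belongs in the top band
    if (band < 0) band = 0;
    counts[band]++;
  }
  return counts;
}
```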
So let’s generate a page of noise and see what we get:
Wow. There are so many things wrong with this image.
- We’re using less than a fifth of the spectrum.
- Worse, it’s not even the middle of the spectrum!
- We can see a strong pattern of vertical lines.
- I don’t know if it will show up in the eventual blog-screenshots, but I can see a couple of pure blue pixels mixed in with the cyan values. These sudden shifts in values should not happen. The noise generator is for making smooth outputs. If I needed spikes, I’d generate them some other way. Random spikes like this will produce…
Yeah. Man, I’ve been seeing those telephone poles now and again. I wondered what was causing those.
(Half an hour of furious pondering, which non-programmers mistakenly refer to as “surfing the web and reading webcomics”.)
Ah! The input image. I’m using an image of pure noise – a 256×256 image of random color. It’s a PNG file. I’m just loading these data values and dumping them into a big ol’ bin of bits. Can you intuit the problem?
PNG files have an alpha channel. Each pixel is a single byte of red, another byte for green, then blue, then the transparency. This image is opaque, which means the fourth value is always the same. (Max value.) I’m sure this problem appeared when I switched from Qt to Visual Studio and changed to a different set of image-loading tools.
I could modify the image to have random opacity, but it’s probably safer to just drop that channel. Let’s see:
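Dropping the channel is just a matter of copying three bytes per pixel instead of four. A minimal sketch, assuming the loader hands back raw 8-bit RGBA (the function name is mine):

```cpp
#include <vector>
#include <cstdint>

// Copy only the three color bytes of each pixel and skip the fourth
// (alpha), which is a constant 255 in an opaque PNG. Feeding that
// constant into the noise bin is what was striping the output.
std::vector<uint8_t> strip_alpha (const uint8_t* rgba, int pixel_count)
{
  std::vector<uint8_t> rgb;
  rgb.reserve (pixel_count * 3);
  for (int i = 0; i < pixel_count; i++) {
    rgb.push_back (rgba[i * 4 + 0]); // red
    rgb.push_back (rgba[i * 4 + 1]); // green
    rgb.push_back (rgba[i * 4 + 2]); // blue
    // rgba[i * 4 + 3] is the alpha byte: always max here, so drop it.
  }
  return rgb;
}
```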
Much better. The stripes are gone and the noise is nearly centered. (It’s at 0.53, when a perfect average ought to come up with 0.5. Not sure if I care about that. That could be an artifact of the noise I’m using.)
I’m not so sure about the blue dots – spots where the noise system gives a value that doesn’t fall in with the gentle gradients it normally turns out. Why blue? It’s not the bottom of the spectrum, yet the bad values all appear at the same spot in the range – we’re not seeing incongruous purple or red dots, for example. And I don’t see any pattern to their placement.
I’ll have to come back to this one later.
So now I just need to re-scale my noise to fill as much of the spectrum as I can. Here is what we’re doing:
Most of the spectrum is going to waste, which means that I’m groping around, trying to find the range where the “interesting” noise is taking place. Every time I ask for some noise I re-scale it, and then re-normalize it to the zero-to-one scale. It’s much neater and more sensible to do this at the source, so that I can just blindly query some noise and be sure that the numbers I’m getting will be useful. The only danger is this:
If I narrow the range too much, then occasionally a value will drift out of bounds. These values are probably very rare, but statistically inevitable. In fact, this is sort of the whole point of noise.
If I don’t tighten the range enough, then my values will be too flat and I’ll once again clutter up my code with scaling and endless tweaking. If I tighten it too much, I’ll end up with clamped values.
A clamped value might look like a mountain with a perfectly flat top. Or a cave with a flat bottom. It’s some point where the numbers drift off to the maximum value and just… stay there. Suddenly I’m not getting noise anymore. I’m just getting max value, over and over.
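Here is a sketch of the calibration I’m describing, with guessed-at range endpoints (the real ones come from eyeballing the output):

```cpp
#include <algorithm>

// Sketch with assumed numbers: map the "interesting" band of raw noise
// back onto 0..1, clamping the rare stragglers that drift outside the
// chosen range. Every clamped value is a potential flat-topped mountain.
float calibrate (float raw, float low = 0.4f, float high = 0.6f)
{
  float scaled = (raw - low) / (high - low);
  return std::min (1.0f, std::max (0.0f, scaled));
}
```

The trade-off from above lives in the two endpoints: pull them in and more values hit the clamp; push them out and the output goes flat and homogeneous again.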
I adjust the ranges a bit. Let’s see how it looks:
Lots of variety. If some of those red or purple spots result in a bit of flatness, we can probably play it off as part of the scenery. Much better to have the occasional freak spot than to have it bland everywhere.
Which leads us to our final problem:
On Friday I mentioned that it was taking crazy long to fill in the scene with data. It turns out that problem wasn’t so much a bug as a design flaw. Consider a grid of nodes:
Imagine we’re looking down from above, and the viewer is in A. Each letter represents a node – a 16x16x16 block of points. The order of the letters is the order in which nodes will be filled in. A, B, C, D, etc. This builds the stuff close to you first, and works outward in concentric circles. There are three stages to building a node:
- Generation – The noise system is used to create a bunch of points and decide what they’re made of. (Grass, dirt, what-have-you.)
- Lighting – The system looks at each block and figures out if it’s in direct sunlight or not. Eventually this stage will probably include other light sources, so that underground areas can be lit.
- Mesh construction – We take the data from the previous two steps and turn it into a big pile of polygons.
I had a bit of code in here that would cause new nodes to force their neighbors to update. When A is built, nothing in B exists yet, so A can’t make its edge blocks match up with whatever will eventually appear in B. When B does appear, there will be a gap or a seam or a bunch of extra polygons or whatever.
To correct for this, as soon as B is done building it would nudge A and tell it to update. So then A rebuilds, repeating step #3. Then C is created, and it nudges A, and A does step 3 again. Then D, and A is built again. Then E, then A again. Then it gets really, really stupid…
F is created, and it causes both B and C to be rebuilt. Oh, and because it shares a single corner cube with A, A must be built again.
The upshot is that in the process of filling in the nine nodes in the center, A will end up being built nine times.
Note that I couldn’t see this happening. I mean, if it rebuilds a node and the node looks exactly the same, then the process is invisible to me. This is a case of me just not thinking things through. I put in that bit of nudge code and moved on without considering the implications.
The fix for this was to just have it be a lot less stupid about when it builds meshes. A doesn’t do step #3 until the other 8 nodes have finished the first two steps. It means you wait a little bit longer up front, but in the long run it saves a ton of time.
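A sketch of that rule, with invented names: before running step #3, a node checks that everything in its 3×3 neighborhood has finished steps #1 and #2.

```cpp
// Hypothetical sketch of the deferred-mesh rule. A node only builds its
// mesh (step #3) once every node in its 3x3 neighborhood has finished
// generation (step #1) and lighting (step #2), so nothing gets rebuilt
// over and over as neighbors trickle in.
enum Stage { STAGE_NONE, STAGE_GENERATED, STAGE_LIT, STAGE_MESHED };

struct Node { Stage stage = STAGE_NONE; };

const int GRID = 3; // just the 3x3 neighborhood for this sketch

bool ready_for_mesh (Node grid[GRID][GRID], int x, int y)
{
  for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++) {
      int nx = x + dx, ny = y + dy;
      if (nx < 0 || ny < 0 || nx >= GRID || ny >= GRID)
        continue; // off the grid; nothing to wait for
      if (grid[nx][ny].stage < STAGE_LIT)
        return false; // a neighbor hasn't finished generation + lighting
    }
  return true;
}
```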
Let’s crank up the draw distance:
Here we have a draw distance of 384 meters. It’s a bit bland because we’re only making these Swiss-cheese canyons.
I’d wanted to go higher, but it seems sort of silly now. This feels pretty far. It took almost a minute to generate this. (Although over half of that time is spent filling the last few rows on the horizon. Every doubling of the view distance quadruples the time cost.) That’s not too bad for rough, un-optimized code. I’m content that we don’t have any serious design flaws or obvious structural problems that would prevent us from creating scenery at an interactive rate. Moving at a full sprint, I can’t get anywhere near the edge of the terrain as it’s being built.
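The quadrupling is just area math: the number of nodes to fill grows with the square of the view distance. A back-of-the-envelope count, assuming 16-meter nodes and a square fill region:

```cpp
// Rough count of nodes within a given view distance, assuming 16-meter
// nodes and a square region centered on the viewer. Doubling the
// distance roughly quadruples the count (and thus the time cost).
int nodes_to_fill (int view_meters)
{
  int radius_in_nodes = view_meters / 16;
  int side = 2 * radius_in_nodes + 1;
  return side * side;
}
```

At 384 meters that’s a 49×49 grid (2,401 nodes); at 768 meters it’s 97×97 (9,409 nodes) – just under four times as many.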
Let’s do one last test. Let me double the draw distance again. I’m also going to disable the fog, since I think it makes things look far away and kind of screws with your perception of how far you’re seeing.
That’s over half a kilometer. Took ages to generate the terrain, and I don’t think it really looks any more impressive. And on the ground, you can almost never see that far because there’s always something in the way. Maybe this would change if we had some mountains or other large-scale features to look at.
At any rate, the framerate is still over 40fps, even with this aggressive draw distance. Again, it’s too early to start patting myself on the back, but it does mean I don’t have any serious problems in what I’ve made so far.
I’m looking at this endless expanse of Swiss cheese and thinking the next step should be to add some variety.