I’ve implemented erosion simulation a few times in my career. My usual technique begins at a single spot on the map, where a virtual raindrop lands. From there, you examine the immediate surrounding points and see which one is lowest. Then you move there and repeat the process. As you go, you shave a tiny little bit off of each point. If you want to get fancy, you look at the steepness of the slope. If you’re going on a nice downhill, then you assume this theoretical trickle / stream / river is moving fast. You might pick up a little extra dirt (thus making the riverbed deeper) and take it with you. When the land levels out, you assume the flow has slowed and you drop off a bit of what you’ve collected.
[Image caption: And if the erosion code doesn’t pan out, maybe we can repurpose it into a sled-riding simulator.]
Eventually you hit the ocean or find yourself at the bottom of a hole. Here you drop off whatever you’ve collected, which ought to help form sloping beaches, river deltas, and gradually fill in craters. At this point you’re done. Now pick another point on the map, drop another raindrop, and start the whole thing over.
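The raindrop walk described above can be sketched in a few lines of code. This is a rough illustration, not the actual implementation from the project; the erosion rates, the deposit fraction, and the slope threshold are all made-up tuning numbers, and a real version would want a lot more care:

```python
def simulate_droplet(height, x, y, erode=0.01, deposit=0.5,
                     threshold=0.05, max_steps=10000):
    """Walk one virtual raindrop downhill across a 2D heightmap,
    eroding on steep slopes and depositing where the land levels out."""
    rows, cols = len(height), len(height[0])
    sediment = 0.0  # dirt the droplet is carrying
    for _ in range(max_steps):
        # Examine the surrounding points and find the lowest one.
        lowest = (x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (0 <= nx < rows and 0 <= ny < cols
                        and height[nx][ny] < height[lowest[0]][lowest[1]]):
                    lowest = (nx, ny)
        if lowest == (x, y):
            # Bottom of a hole (or the ocean): drop the whole load and stop.
            height[x][y] += sediment
            return
        slope = height[x][y] - height[lowest[0]][lowest[1]]
        if slope > threshold:
            # Steep downhill: fast water scoops up extra dirt,
            # carving the riverbed deeper.
            amount = erode * slope
            height[x][y] -= amount
            sediment += amount
        else:
            # The land has leveled out: set down part of what we carry.
            amount = sediment * deposit
            height[x][y] += amount
            sediment -= amount
        x, y = lowest  # move there and repeat
    # Step budget exhausted; any remaining sediment is simply lost.
```

A full simulation would just call this thousands of times from random starting points on the map.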
It’s not perfect. This would be useless to a geologist or other science-type person trying to study science-type stuff. But it’s perfectly good for making a plausible landscape.
This is nice, but shaders simply can’t do this kind of processing. You can’t say, “I’m done with grid coord X, Y, now let me move next door to X+1, Y”. You can’t just store values globally, and you can’t stop processing when you feel you’re done. (Unless you don’t want to generate any output. In which case you just wasted your time.) When your shader is executed, you’re dropped into a situation where you’re expected to do the calculations of one pixel, and only that pixel. You can’t change any adjacent pixels and you can’t carry values between them using variables. You can LOOK at adjacent pixels, but because processing is heavily parallelized, you can’t even guarantee that points will be handled in any particular order.
So how do we handle large processing jobs like this, where we have a lot of inter-dependency and changes that need to propagate in unpredictable ways?
Shamus Young is a programmer, an author, and nearly a composer. He works on this site full time. If you'd like to support him, you can do so via Patreon or PayPal.