Project Octant Part 4: The Beautiful Noise

By Shamus Posted Monday May 7, 2012

Filed under: Programming 62 comments

Perlin Noise is a technique for quickly generating a metric crapload of really interesting pseudo-randomness. “Interesting” in that it forms nice organic patterns instead of pure random noise. “Pseudo Random” in that you can give it the same input and get the same output. “Crapload” means that you can make a final data set thousands of times larger than the noise you start with.

Note that in the context of this project I’m going to discuss Perlin in terms of 2D images, but I’m using it in 3D. It’s just easier to show you what we’re doing in 2D.

We begin with a basic image of really random noise, which I will depict as a 2D greyscale image. The more random the better. We want areas of light, dark, and medium brightness. We want it to be really diverse overall, but have small local clusters of brightness or darkness. We don’t want large areas to be homogeneous, and we don’t want the small areas to just be a scatter of white and black pixels. We can accomplish this in a lot of ways. I could churn out a bunch of values in a random number generator, for example. Or, we can just open up a new image in your Photoshop of choice, crank up the noise filter on a blank image, and hit save.
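
(If you would rather do that step in code, a throwaway C++ sketch like the one below is enough. The names are mine, not the project’s.)

#include <cstdlib>

// Fill a small grid with random greyscale values from 0 to 255. Seeding
// with a fixed number keeps the output repeatable, which is the "pseudo"
// part of pseudo-random.
const int NOISE_SIZE = 64;
unsigned char noise_image[NOISE_SIZE][NOISE_SIZE];

void BuildNoise(unsigned int seed)
{
  srand(seed);
  for (int y = 0; y < NOISE_SIZE; y++) {
    for (int x = 0; x < NOISE_SIZE; x++)
      noise_image[y][x] = (unsigned char)(rand() % 256);
  }
}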

octant4_1.jpg

Awesome, right?

No. Obviously pure noise is boring.

To make Perlin noise…

Stop!

Okay. According to Chris Serson below, this is not Perlin noise. This is… It’s complicated. The point is, when I Googled for Perlin noise I ran into multiple pages that made the same error I made here. To avoid perpetuating this mis-naming of noise systems, I’m adding this note here.

I will say that it is hard to find a proper implementation of either Perlin Noise or the (sometimes preferred) Simplex noise. Most of the stuff I find is:

  1. Impenetrable jargon.
  2. Pages of indecipherable, Greek-laden maths.
  3. Example images of what the system can do, provided you can figure it out.
  4. A history of all the noise systems Ken Perlin has made, how he used them, and how people keep mis-attributing other noise systems to him.

So… not much in the way of code out there. In the end, what I have here works well enough for my purposes. If I need something more robust I can go on the quest for the One True Noise Algorithm at another time. Let’s just get on with this.

…we have to sample this image at different resolutions. Massive resolutions. Bigger than we would want to bother keeping in memory, really. What we do is take this noise and kind of zoom in on it. For example, here are the four pixels in the upper-left of our noise image.

octant4_6.jpg

Blending between these four values, we can come up with values in between. If the terrain generator asks for the value 20% of the way across and 80% of the way down, we get this:

octant4_7.jpg

To be clear: We’re not calculating ALL of these values. The noise function only creates the one pixel that was requested. I’m just showing you how it arrives at that value.

The neat thing is, I already had 90% of the code needed for Perlin Noise. For those of you reading through Project Frontier, check out MathInterpolateQuad(). It has everything you need for doing the above.
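
For anyone not digging through Frontier: the blend itself is just bilinear interpolation. Here’s a standalone C++ sketch of it (my own illustration with a made-up name, not the actual Frontier function):

// Bilinear blend of the four corner values of a cell. dx and dy are the
// fractional position inside the cell, each in the range [0, 1].
float InterpolateQuad(float upper_left, float upper_right,
                      float lower_left, float lower_right,
                      float dx, float dy)
{
  float top    = upper_left + (upper_right - upper_left) * dx;
  float bottom = lower_left + (lower_right - lower_left) * dx;
  return top + (bottom - top) * dy;
}

The 20%-across, 80%-down example above is just InterpolateQuad(a, b, c, d, 0.2f, 0.8f), where a through d are those four pixels.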

The terrain generator is feeding us coordinates. We’re dividing those values by a thousand, so that it would require a thousand samples (a thousand meters of scenery) to cover the gradient from one edge of this four-pixel square to the other. Of course, if we stopped here we would have some really boring, super-flat scenery.

So we take another sample. Instead of 1/1,000, this one is taken at 1/500. Then we take another sample at 1/250, and another at 1/125. Each level of noise is at twice the frequency of the one before. Then we add all of these samples together, giving them all equal weight. We average them.
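
Put together, one octave plus the stacking looks something like this sketch (C++ again, with invented names: LatticeValue() stands in for reading one pixel of the source noise image, and InterpolateQuad() is the blend from the sketch above):

#include <cmath>

// Stand-in for reading a pixel of the source noise: a repeatable
// pseudo-random value in [0, 1) for an integer grid point.
float LatticeValue(int ix, int iy)
{
  unsigned int n = (unsigned int)(ix + iy * 57);
  n = (n << 13) ^ n;
  n = n * (n * n * 15731u + 789221u) + 1376312589u;
  return (n & 0x7fffffffu) / 2147483648.0f;
}

// One octave: blend the four grid values surrounding (x, y).
float SampleNoise(float x, float y)
{
  int   ix = (int)std::floor(x), iy = (int)std::floor(y);
  float dx = x - (float)ix,      dy = y - (float)iy;
  return InterpolateQuad(LatticeValue(ix,     iy    ), LatticeValue(ix + 1, iy    ),
                         LatticeValue(ix,     iy + 1), LatticeValue(ix + 1, iy + 1),
                         dx, dy);
}

// Add up octaves at 1/1000, 1/500, 1/250 and 1/125, then average them.
float FractalNoise(float x, float y)
{
  float     total   = 0.0f;
  float     scale   = 1.0f / 1000.0f; // First octave: 1000 meters per grid cell.
  const int octaves = 4;
  for (int i = 0; i < octaves; i++) {
    total += SampleNoise(x * scale, y * scale);
    scale *= 2.0f;                    // Each octave doubles the frequency.
  }
  return total / octaves;             // Equal weight: a plain average.
}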

The upshot of all of this is that we get this large pattern of interesting noise.

octant4_2.jpg

This might not look terribly useful, but we can use this to generate interesting topography. Obviously we could use this to make hills where brighter = taller. But we can also use it to generate underground scenery. Let’s say we set a threshold. Everything above a certain brightness will be hollow, and everything below that threshold will be solid rock.

octant4_3.jpg

Which gives us a system of caves. If we set the threshold higher, then instead of large caves we’ll get little pockets, like Swiss Cheese.

octant4_4.jpg

If we take a narrow range of values above one point and below another, then we get a bunch of passages, all twisty-like.

octant4_5.jpg
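
Turning one of these noise values into “solid or empty” is just a comparison. A sketch of the three variations above, with made-up threshold numbers:

// Decide whether a block of the world is solid rock or open air, given a
// noise value in roughly the [0, 1] range. The cutoffs here are arbitrary.
bool IsSolidCaves(float noise)   { return noise < 0.6f; } // Brighter than 0.6 is hollow: big caves.
bool IsSolidPockets(float noise) { return noise < 0.8f; } // Raise the threshold: small pockets.
bool IsSolidPassages(float noise)                         // Only a narrow band is hollow: passages.
{
  return noise < 0.45f || noise > 0.55f;
}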

So let’s take this noise and feed it into our cube-world and see how it looks:

octant4_8.jpg

This is just raw Perlin output. I’m not doing anything fancy with it at this point. Eventually it might be worthwhile to combine different sets of noise. (Perhaps one to make hills, and another to bore tunnels in the hills.) Or to set up different ways of deciding what’s solid and what’s empty. (Mess with the thresholds.) Maybe I’ll stretch noise along one axis to make tall or long caves. Or flatten out the bottoms to make Star-Trek style caves with lumpy walls and floors level enough for championship billiards. (A classic trope. Actually, is it a trope? I’ve never seen it officially listed.)

octant4_9.jpg

One of the things I wanted for this project was to experiment with noise like this, and see what other sorts of shapes and places one can make.

octant4_10.jpg

So now we have a foundation for making scenery. I’m sure we’ll come back to this when it’s time to work on the output.

 



62 thoughts on “Project Octant Part 4: The Beautiful Noise”

  1. sab says:

    [little nerdgasm]

    Towel please!

  2. Chargone says:

    significant chunks of that went right over my head….

    but the results are weirdly pretty.

  3. Tharwen says:

    I wonder if there’s any way to do this project without it looking incredibly similar to minecraft…

    1. Kdansky says:

      Not before you write shaders that make the lighting and texturing look different. Before that, any block-based game will look like Minecraft.

      Unrelated: I do not understand the explanation on the creation of Perlin noise. I understand its properties and how to use it, but the sampling stuff? Really unclear.

      1. Dragomok says:

        Not before you write shaders that make the lighting and texturing look different. Before that, any block-based game will look like Minecraft.

        Still, any block-based 3D game is going to remind people of Minecraft, in the same way every 4X game reminds people of Master of Orion. Or every space combat sim of Elite. Or every MOBA of Defense of the Ancients. Or every pop-up shooter of Gears of War. Or every FPS-RPG of Mass Effect. Or… You get the idea.

        1. Aldowyn says:

          I was getting a little unsure of your comparisons until you got to DotA…

          In my mind, Civilization is the 4X everyone thinks of, and Wing Commander or Freespace is the space combat sim.

          Googling Elite (which I hadn’t even heard of) and Master of Orion (wow that’s old) gives me the distinct impression that you’re pulling from a… deeper pool of games than I am.

          1. Dragomok says:

            My point was that Minecraft is a genre-defining game – as in, it might not be the first to have certain mechanics or design decisions, but is the first to gain enough popularity to ‘inspire’ several projects* – so it is completely natural that anything using even its superficial elements will remind many of it.

            * And yes, I know, I know, Mass Effect was followed only by Alpha Protocol so it doesn’t really count.

            Googling Elite (which I hadn't even heard of) and Master of Orion (wow that's old) gives me the distinct impression that you're pulling from a… deeper pool of games than I am.

            What can I say, I used to read a lot of articles about videogames history – and now I am scarred for life.

          2. Adam Fuller says:

            Beat or equal minecraft at what it does, and someday all similar games will remind people of your game….just like your reference pool is younger than the OP’s

            1. Peter says:

              Minecraft had a LOT of hype to draw from though. It was a phenomenon. It’s very tough to get something like that without being really novel (at least in the experience of the common gamer/user), and it’s what helped make minecraft such a powerhouse later on.
              Beating mass effect in its game by comparison, is a lot easier, in the sense that it’s more controllable; a better game can become the new marker for subsequent games. Being a better blockish game than minecraft would ‘just’ make you “that awesome minecraftian game”.
              This all is not to say minecraft isn’t awesome or doesn’t deserve what it got (hype and all), but it’s something that needs the luck of seven gods to equal, let alone match.

      2. PurePareidolia says:

        Basically, he’s saying you generate random noise at different resolutions, smooth each one out and then stick them on top of one another.

        So if you start with 256×256, you then get the top-left quarter (128×128), stretch it out to the original resolution and average out all the missing pixels so it’s kind of blurry. You now have two 256×256 noise textures, one sharp, one less so. You then repeat that with the top-left 64×64 of the original, expanding and interpolating and so on with 32×32, 16×16, 8×8, 4×4 all the way down to 2×2.

        So then you have 8 256×256 textures, each blurrier than the last, you then average out each point on the image to create a single texture that’s an amalgamation of the others, and looks like the one in the post.

        Assuming I understand correctly.

    2. Caffiene says:

      Yes, you can.

      In 2D it’s pretty simple. Instead of using blocks, you use each pixel as the height of a point on a surface. Shamus used this method for his terrain in Project Frontier, just with a different image as the basis instead of a “perlin” based image.

      In 3d the principle is the same – you use the information as a connected system of points instead of individual blocks. With the method Shamus is currently using (where he sets a threshold and ends up with the black and white blotch images) I think you could do it by setting anywhere where a black pixel edge meets a white pixel edge as a point, and connecting that point to any adjacent points.

      After that, you have the trouble of figuring out which part of the shape is the inside and which is the outside (which is important if you want to be efficient and not render the side you can’t see)…

  4. Chris Serson says:

    First off, I just want to say I’ve been reading your blog for a few years now and I have a ton of respect for you. Your work has influenced my own research.

    Having said that, I hate to burst your bubble, but you haven’t actually made Perlin Noise here. Perlin Noise is a form of Gradient Noise, while what you’re actually describing is Value Noise.
    Ken Perlin posted a talk online which may interest you. On slide 15, he begins to describe the actual algorithm used to produce Perlin Noise. I would also recommend checking out the DirectX SDK as it has implementations of all 3 of his Noise algorithms (Perlin Noise, Enhanced Perlin Noise, and Simplex Noise).

    Also, when you describe combining octaves of Noise together at different frequencies, you’re describing something which is typically referred to as Fractional Brownian Motion (fBm). This webpage describes it fairly well, with images comparing the difference.

    I had a heck of a time figuring all of this out myself the first time I went to implement Perlin Noise. Half the websites I looked at described it exactly as you do; however, that description is NOTHING like Perlin’s own. In reality, I believe the algorithm you’re using is quite a bit simpler and very possibly faster. And the results are more than good enough. In fact, the last link I posted shows that there isn’t a huge difference between Value Noise and Perlin Noise.

    1. Shamus says:

      Heh. Well, I found out about “Perlin” noise via Google, so… that’s how we ended up here.

      Dangit. I suppose I need to update the post or I’ll just be making it worse.

      EDIT: And OF COURSE I named the source file perlin.cpp. Dummy. Name it “noise” next time so you can switch systems without touching the systems that use it. Grrr.

      1. Knut says:

        Or even better….one noise interface so you can have several implementations? That’s what I did when I implemented a similar system. Then use a factory class so the actual implementation is only referenced one place.

      2. Tharwen says:

        Heh. I found a typo. “Perlin nose”.

      3. X2Eliah says:

        You should claim that perlin.cpp doesn’t stand for perlin noise, but for Portable Environment ReaListification ImplementatioN (or, PERlIn).

        1. Syal says:

          Or Per Line. Misspelled.

          1. X2Eliah says:

            Naw, you should never admit to a misspelling or anything unintended when coding. Every single thing has to be intentional and deliberate, foreplanned before the very inception of the general pre-idea even.

            1. Syal says:

              Then it’s Per Line, memory-saving version.

        2. silver Harloe says:

          Or that he’s added a Perl interpreter to his code

          1. Kerin says:

            HE SAID PERL! *sirens*

            1. Chargone says:

              well, at least he didn’t say ‘nose’.

              *flees before the deranged muppet with the exaggerated schnoz appears*

              ((yes, i am referencing, to the best of my knowledge, a single, old, sesame st. skit as if it were a major and well known meme. no, i do not care :P ))

    2. Anorak says:

      I’ve implemented Perlin Noise before (I think, anyway. I implemented SOMETHING noisy, but it might just have been shouting). I used it to generate marble-like effects to texture primitives on my home brewed ray tracer. It was really, really slow. I dug around in my old code, and found this image: http://blog.daft-ideas.co.uk/?p=159

      It’s not the best I ever produced, but it’s my own fault for not using proper version control on my older projects.

      Shamus, Perlin’s original code for Improved Noise is on his website here: http://mrl.nyu.edu/~perlin/noise/ if you are still interested. However, your noise system is going to be fine for what you’ve got in mind.

    3. Simon Buchan says:

      Sorry in advance for the codespam, but I believe this is “proper” Perlin noise, in HLSL (should be pretty much the same in GLSL):

      float noise(int n)
      {
          n = (n << 13) ^ n;
          n *= n * n * 15731 + 789221;
          n += 1376312589;
          n &= 0x7fffffff;
          return 1.0 - n / 1073741824.0; // float(0x40000000)
      }

      float _perlin2d_impl(float2 x, int2 i, int2 g)
      {
          float2 r; i += g;
          sincos(noise(i.x + i.y * 57) * 6.2831854, r.y, r.x);
          return dot(r, (x - g));
      }

      float perlin2d(float2 x)
      {
          int2 i; x = modf(x, i);
          float s = _perlin2d_impl(x, i, int2(0, 0));
          float t = _perlin2d_impl(x, i, int2(1, 0));
          float u = _perlin2d_impl(x, i, int2(0, 1));
          float v = _perlin2d_impl(x, i, int2(1, 1));
          x = 3*x*x - 2*x*x*x;
          return lerp(lerp(s, t, x.x), lerp(u, v, x.x), x.y);
      }

      float _perlin3d_impl(float3 x, int3 i, int3 g)
      {
          float3 r; i += g;
          sincos(noise(i.x + i.y * 57) * 6.2831854, r.y, r.x);
          float zr;
          sincos(noise(i.z) * 6.2831854, zr, r.z);
          r.xy *= zr;
          return dot(r, (x - g));
      }

      float perlin3d(float3 x)
      {
          int3 i; x = modf(x, i);
          float s = _perlin3d_impl(x, i, int3(0, 0, 0));
          float t = _perlin3d_impl(x, i, int3(1, 0, 0));
          float u = _perlin3d_impl(x, i, int3(0, 1, 0));
          float v = _perlin3d_impl(x, i, int3(1, 1, 0));
          float s2 = _perlin3d_impl(x, i, int3(0, 0, 1));
          float t2 = _perlin3d_impl(x, i, int3(1, 0, 1));
          float u2 = _perlin3d_impl(x, i, int3(0, 1, 1));
          float v2 = _perlin3d_impl(x, i, int3(1, 1, 1));
          x = 3*x*x - 2*x*x*x;
          return lerp(lerp(lerp(s, t, x.x), lerp(u, v, x.x), x.y),
                      lerp(lerp(s2, t2, x.x), lerp(u2, v2, x.x), x.y),
                      x.z);
      }

      That's only one level, though, you want what's generally called "cloud" noise: which is just iterating like:

      #define CLOUD_ITERATIONS 6
      float cloud2d(float2 x)
      {
          float r = 0, s = 1;
          for (int i = 0; i != CLOUD_ITERATIONS; i++)
          {
              r += perlin2d(x * s) / s;
              s *= 2;
          }
          return r;
      }

      Despite being super complex for a shader, it is well more than fast enough to use realtime on my machine for the simple scenes I'm rendering. You could use it to generate levels on your card if you can ship them back to CPU (slow, but faster than doing it on CPU)

      EDIT: I also apologise in advance for everyone who actually knows HLSL who is having a heart attack right now at my awful code :P In my defense, I just hacked it together 1:1 with Perlins summary of it, I wasn’t thinking of performance at all.

    4. X2Eliah says:

      Out of curiosity, what is the difference between gradient noise and value noise in laymen’s terms?

      1. Chris Serson says:

        Value Noise works as Shamus described above. You have a regular grid of random scalar values. When you want the value at a point, you just interpolate between them.

        Gradient Noise uses a regular grid of random values as well. The difference is that those values aren’t scalars. They are vectors. Vectors, if you’re not familiar with them, are essentially arrows pointing in a given direction.
        So when you pick a point in a Gradient Noise function, you then need to perform a bunch of math using the location of the point and each of the vectors to get the values. You can then interpolate between those values.

        Basically, Gradient Noise functions are way more complicated but have an important property that a simple Value Noise function doesn’t have: All those apparently random features? They’re actually all roughly the same size. In Shamus’ example, he CHOSE to use a starting grid of values that pretty closely resembles this. He had to make that starting grid himself. But with a correct Perlin Noise implementation, you ALWAYS get this property.
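
        For the code-minded, here is a one-dimensional sketch of the difference (my own illustration with invented helper names, not Perlin’s actual implementation; RandomValue() and RandomSlope() are assumed to return a repeatable pseudo-random number for a given integer coordinate):

        float Lerp(float a, float b, float t) { return a + (b - a) * t; }

        // Perlin's "improved noise" fade curve: 6t^5 - 15t^4 + 10t^3.
        float Fade(float t) { return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f); }

        float ValueNoise1D(float x)
        {
            int   i = (int)floorf(x);
            float t = x - (float)i;
            // Scalars at the two surrounding grid points, blended directly.
            return Lerp(RandomValue(i), RandomValue(i + 1), Fade(t));
        }

        float GradientNoise1D(float x)
        {
            int   i = (int)floorf(x);
            float t = x - (float)i;
            // Each grid point stores a slope (a 1D "vector"). The value each
            // point contributes is that slope times the offset from the point
            // to x -- the 1D version of the dot product.
            float left  = RandomSlope(i)     * t;
            float right = RandomSlope(i + 1) * (t - 1.0f);
            return Lerp(left, right, Fade(t));
        }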

        1. Shamus says:

          And I actually managed to mess up my noise gradient on the first try. I mistakenly began with a black image and added noise to it, instead of starting with neutral grey. This resulted in the outputs trending low, and not having as much variance as they should have. The average noise value was 55-ish, instead of 128. (On a 256 scale.)

          Took me a while to figure out why my noise was so “quiet”.

        2. decius says:

          I have trouble believing that there is no entropy input to a ‘correct Perlin Noise implementation’ that doesn’t result in unusual sizes.

          If I understand your math correctly, a three-dimensional value noise field is continuous on four dimensions (x,y,z,intensity), while a gradient noise field would have five or six (x,y,z,magnitude,elevation,azimuth)(elevation and azimuth possibly combined).

          In each case, I’m using the mathematical definition of “continuous”, meaning roughly that there would be no jumps in value, if the resolution was allowed to be infinite.

          1. Chris Serson says:

            I may have been a little hasty in my using the word “always”, especially in all-caps. It would, perhaps, be better to say that Ken Perlin specifically set about creating his algorithm with the intent that all features in a given texture would be about the same size.

            I’ve actually been trying to avoid talking about the math because, while I understand Perlin’s intent, I don’t really get WHY his algorithm works. It’s a crazy concept.

            I think where you may be falling down is in thinking it has to be continuous in more dimensions because it uses vectors. Both algorithms are doing basically the same thing: take a bunch of values on a square/cube/etc and interpolate between them to get the value at a point. The difference is in what the values are at the vertices and where the point actually is. With Value Noise, you just take the values and interpolate for the point. You can think more in absolutes. The point (x, y, z) is actually at point (x, y, z) within the Noise Field.

            With Gradient Noise, you have to manipulate things a bunch to get to the same point. I look at it as warping space.

            For instance, with Perlin Noise, for each vertex of the square/cube/etc, you take the dot product of the gradient and the vector from the vertex to the point you’re trying to find. At that point, you have scalar values and can interpolate between them. Of course, there’s another bit of complexity thrown in because you’re not actually finding the value at (x, y, z). What you actually do is find the values u, v, and w along a curve at the points x, y, and z (what Perlin calls the fade function in his implementation, but what I prefer to think of as the warping of space). Then you interpolate for (u, v, w).

            1. decius says:

              So, gradient noise uses a vector field for input but outputs a scalar field with only a single value per point? The key feature of each is that if you use them as transformations rather than simply evaluating the results at various points, the resultant field is continuous (the limit as you approach any point is equal to the value at that point)

  5. noahpocalypse says:

    In that 8th image, I could’ve sworn that I saw a block being held right in front of the camera, Minecraft-style. Then I looked down, and… *brainfuck*

  6. PurePareidolia says:

    Oh so that’s how Perlin noise works. I assumed there was something a bit more complicated going on but that makes a lot of sense. Thanks for that.

    EDIT: Read the above post. OK, it’s not Perlin then but it’s still pretty cool.

  7. MadTinkerer says:

    “If we take a narrow range of values above one point and below another, then we get a bunch of passages, all twisty-like.”

    Ahahahaha. Notch was explaining terrain generation in Minecraft on his weblog and in one post he said something like “in a future post I’ll explain cave generation” and then never got around to it. He implied it was pretty simple, but I couldn’t figure it out from what he had explained. But now it seems so obvious!

    Thanks, Shamus. And (sarcasm)thanks(/sarcasm) Notch!

  8. JJ says:

    I’ve been going along this same path myself, wanting to play around with octrees and figuring a Minecraft clone would be a fun approach. One thing I can’t find anything on is efficient occlusion culling in octrees. Of course I’m doing frustum culling, but that’s not nearly enough with all the hidden nodes that are inside the frustum. Notch mentions doing occlusion culling via ARB_occlusion_query, but I haven’t come around to trying that yet.

    What (if anything) are you doing for occlusion of all those contained blocks? Are you just passing the lot in a vertex buffer and letting the card do it for you?

    1. Shamus says:

      I’m not sure what Notch is referring to. Minecraft doesn’t really do occlusion. You can confirm this by creating a texture pack with transparent (say) stone, and walking around underground. It draws everything.

      I’ve been thinking about occlusion quite a bit. The devil of it is, there are a LOT of cases where you can remove 90% of the scene with cheap culling tricks when you’re surrounded by co-planar walls. But when things are blocked by non-simple geometry? Like, twisty tunnels? It is not obvious how to efficiently cull in those cases.

      And there will still be cases where crazy players will dig a huge trench or a long tunnel that thwarts culling, and you have to allow for that. Your game has to be fast enough in that worst-case scenario, because it will come up often. And if it works well in that worst-case scenario, it will work in all others. Ergo, any optimizing you do will serve to make the framerate rocket upward in certain cases, but may actually make things worse in other cases, which is exactly when performance counts.

      That said, I still take the idea out and toy with it now and again. It’s a fun problem, because it seems SO SIMPLE when you’re standing in a 1×2 tunnel, and so impossible when you’re looking at noise-caves like the ones above.

      1. Kevin Reid says:

        Minecraft does do occlusion (if the “Advanced OpenGL” option is on) but, if I understand it correctly based on my experience playing, it is pixel-based and in screen space: a chunk (volume of the world) is not drawn if none of its pixels were visible in the previous frame. Therefore if you have transparent textures, the occlusion doesn’t happen. You can see that if you have the option on and swing your view around, it will occasionally fail to draw some chunk at the edges, for just one frame.

        I wonder if you could do the coplanar walls thing like this: for each tree node, check if each of its six faces is fully opaque (i.e. has blocks occupying all of the surface). If so, then you can treat it as a wall for culling purposes. The simplest case is that for a given octree node, if you are looking at it along the Z axis, i.e. being within the XY bounds of one of its octants, and the nearer octant has an opaque face, then you can skip drawing the geometry for the farther octant. This would not help much for caves like your examples since most faces would not be completely solid, but in a more densely opaque world it might.

        1. Shamus says:

          Ah. Very interesting. Thanks.

      2. Tom H. says:

        ARB_occlusion_query is asking the GPU what’s occluded, and if you make the front wall transparent, the GPU will know that it isn’t occluding anything!
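
        The usual pattern is roughly this (core OpenGL 1.5 entry points, which match the ARB extension apart from the ARB suffixes; the Draw* helpers are hypothetical):

        GLuint query;
        glGenQueries(1, &query);

        // Render cheap stand-in geometry (say, the chunk's bounding box) and
        // count how many samples pass the depth test. Normally you would
        // disable color and depth writes around this.
        glBeginQuery(GL_SAMPLES_PASSED, query);
        DrawChunkBoundingBox(chunk);
        glEndQuery(GL_SAMPLES_PASSED);

        // Later -- ideally a frame later, so the GPU isn't stalled -- read the
        // result and skip the real geometry if nothing was visible.
        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
        if (samples > 0)
            DrawChunkForReal(chunk);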

  9. Dasick says:

    This is pretty cool stuff, Shamus. I like how well you explain these pretty complicated concepts, but I’ve been wondering where you’re getting the raw data about octrees and what-not. Do you just Google around? And have you found any good resources?

    1. Shamus says:

      I didn’t need to Google for Octrees because I’ve spent so long working with Quad trees. Just needed to add a dimension.

      I did google for Perlin noise, and you can see how that turned out. :)

      1. Nick says:

        http://mrl.nyu.edu/~perlin/doc/oscar.html#noise

        Direct link from the Perlin Noise Wikipedia page, which was my top Google hit.

        1. Shamus says:

          Pfft. I hit that page, read a few paragraphs, and hit the back button. Totally missed the code at the bottom.

          Thanks.

          1. Nick says:

            If you figure out how to actually use it, by all means let me know…

  10. Nick says:

    I was going to suggest trying some sort of fractal-based noise (like the diamond-square algorithm), which could possibly be extended to 3D, but your way is probably simpler. Is there a good way to parameterise it for specific output (“now produce plains… now produce mountains… now produce fjords…” etc)?

    1. Don’t forget to take extra care with the fjords. You could win an award for them.

      1. Nick says:

        Alas, I hear they are no longer in fashion…

      2. anaphysik says:

        Is there a good word for a fjord award? Or for someone who scored a fjord award? What about when they’ve stored their fjord award by their sword? Not to be untoward, of course…

        1. ZzzzSleep says:

          Yes. Slartibartfast.

          His designs struck a chord,
          So he won an award,
          For his work with the fjords,
          Priced to afford,
          By the Golgafrinchan Horde.

          Too much time I’ve poured,
          Into this poem. Oh Lord,
          I’m out of my gourd.
          And somewhat bored…

          1. Syal says:

            …Burma-Shave.

    2. WJS says:

      Fractal noise means adding noise to itself at different scales, which is exactly what Shamus did here.

  11. Christopher M. says:

    There’s also the Diamond-Square algorithm to consider – which, while not as high-quality in a macro sense, seems even better at producing things like rolling hills, coastlines, etc. than Perlin noise is. Disadvantage is, you generally have to hold a lot more data in memory at once (it’s really only good for creating premade terrain, as opposed to real-time generation.)

  12. Rosewire says:

    I just felt like someone ought to say this. I understand almost nothing of this entire article. In fact, I don’t even follow much of the entire series of articles. And yet I still enjoy the heck out of reading them. What’s up with that?

  13. Marmakoide says:

    You seem to use bilinear interpolation to make the smoothed version of your pure noise picture. It makes those star-like patterns. Bicubic interpolation, albeit a bit more work, would avoid those patterns. For your caves, it would give the whole thing a more organic look. Yes, nitpicking and all :D

    1. Mephane says:

      I’ve just done a quick comparison according to your link to wikipedia, and I don’t think you are merely nitpicking. Bicubic interpolation indeed looks like the better approach, the result is remarkably better.

      1. Marmakoide says:

        Pro-tip for extra quality with minimal work =>
        When adding the layers of smoothed noise, where each layer is 2 times larger, rotate each layer first by i * phi, where i is the index of the layer and phi is the golden angle. It minimizes the correlation artifacts that appear when adding layers…

  14. andy_k says:

    Your blog continues to inspire. Just awesome.

    Look forward to the next bits!

  15. Nevermind says:

    If you’re interested in noise generation (and procedural generation in general), read this book: http://www.amazon.com/Texturing-Modeling-Third-Edition-Procedural/dp/1558608486

    It’s like a bible for PCG.

  16. Yonder says:

    “Perlin Noise is a technique for quickly generating a metric crapload of really interesting pseudo-randomness. “Interesting” in that it forms nice organic patterns instead of pure random noise. “Pseudo Random” in that you can give it the same input and get the same output. “Crapload” means that you can make a final data set thousands of times larger than the noise you start with.”

    And ‘metric’ because we’re not savages!

  17. Daniel says:

    I found this explanation with source code about value and Perlin noise invaluable.

    http://www.scratchapixel.com/lessons/3d-advanced-lessons/noise-part-1/

    1. mast4as says:

      Actually this lesson describes an implementation of value noise. They said they would work on a lesson for Perlin noise but they seem to be working on other lessons at the moment. I like scratchapixel because, as you said in your introduction, they don’t use jargon and all the maths are explained in plain English.
