Project Octant 12: Fix All The Things

By Shamus
on May 25, 2012

The last several posts have been a sort of lead-up to this one. It probably doesn’t look that way, but all the fussing I’ve been doing is about to pay off. Moving from Qt to Visual Studio broke things. Moving to marching cubes broke things. Adding shaders broke things.

Since then I’ve been adding technology and fixing things, and as those pieces come together we’re finally getting something worth looking at. But we’re getting ahead of ourselves. First, I need to backtrack and explain a few items that I glossed over in earlier entries.


The big problem is that it is taking crazy, super, insano, bonkers long to generate terrain. It’s about a second per node. Several people, including fellow code-hobbyist Michael Goodfellow, suggested that the octree is really expensive to use. So let’s disable that and see if we get any speed gains.

And by “disable” I mean, “write an entirely different, simpler storage system for my data”. No trees. No dynamic allocation. Just one huge, honkin’ block of memory where you can get anywhere in a single lookup. And let’s see how the new system performs…
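The "one huge, honkin' block of memory" idea can be sketched in a few lines. This is a hedged illustration, not the actual Project Octant code; the struct name, cell size, and byte-per-cell material are all assumptions. The point is that any cell is a couple of multiplies away, with no tree traversal and no pointer chasing:

```cpp
#include <cstdint>
#include <vector>

// Flat storage for one node of the world: SIZE^3 cells in a single
// contiguous allocation. Indexing is a direct computation, so lookup
// cost is constant regardless of where the cell is.
struct FlatVolume {
  static const int SIZE = 32;         // cells per axis (assumed)
  std::vector<uint8_t> cells;         // one byte of material per cell

  FlatVolume() : cells(SIZE * SIZE * SIZE, 0) {}

  // Single lookup: (z * SIZE + y) * SIZE + x flattens 3D to 1D.
  uint8_t& at(int x, int y, int z) {
    return cells[(z * SIZE + y) * SIZE + x];
  }
};
```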


Wow. Just over a tenth of a second. Almost ten times faster.

Click on octree.cpp. Hit delete.

Now my project is probably a thousand lines smaller and runs ten times faster. And of course now I have a project called “octant” that is no longer related to octrees. So that’s hilarious. I think I should make this into a game called “Shoot Guy Online”, which is a single-player game about a woman who fights with a sword.

I’m not really that sad to see the octree go. The octree made it super-fast to do things that were rare, and super-slow to do things that were common. This is like a sprinter buying shoes that make her walk three times as fast and run ten times slower.

I’d wanted to use it so that I could fit a really huge data set into memory, and see how far out I could get my draw distance. It became clear a few days ago that memory wasn’t going to be my chief hurdle. I was going to run into throughput problems long before I ran into memory problems, even though every doubling of distance results in eight times as much memory usage. A kilometer view distance is just not going to happen within the context of this project. (Oh, it’s possible, but it would require super-long loading times, or multi-pass block generation where the map would fill with increasing levels of detail, or some other trick to manage the sheer volume of stuff I have to process. That could take days to investigate and it might not pan out.)

Moving on…

People have been asking about the surface normals. If you remember, those are vectors (think of them like arrows) that say which way a vertex is pointed.


Now that normals are working again, I can use lighting.


I’ve also got a super-cheap shadowing system that will darken a block if there is another block anywhere above it. It’s strictly pass/fail – the “sunlight” doesn’t spread. I’m not sure I want to change it. This looks pretty good as-is.
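Since the "sunlight" is strictly pass/fail and never spreads sideways, the whole test reduces to one scan up the column. Here is a minimal sketch of that idea, with the column layout and `solid` convention assumed for illustration:

```cpp
#include <vector>

// Pass/fail shadowing: a cell is darkened if ANY solid cell sits
// anywhere above it in the same vertical column.
struct Column {
  std::vector<bool> solid;  // solid[y] == true if the cell at height y is filled
  explicit Column(int height) : solid(height, false) {}

  // One upward scan decides it; light does not spread around obstacles.
  bool lit(int y) const {
    for (int above = y + 1; above < (int)solid.size(); ++above)
      if (solid[above]) return false;
    return true;
  }
};
```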

Now that we have some free CPU cycles, let’s see if we can’t make more interesting terrain. Right now I’m just making rolling hills with a maze of random tunnels underneath. The above screenshot is from a rare location where the terrain dips below the tunnels and exposes them. I’m not even using the noise system very much yet. No overhangs, no mountains. Cliffs are massive boring walls of smeared pixels. Yawn.

So let’s just generate some random pockets of noise and use it to remove volumes of terrain. How does that look?


That’s a lot more interesting. This more complex topography is really exposing the shortcomings of my texture mapping system. We’ll deal with that in a bit.
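The pocket-carving idea can be sketched as a density modification: sample noise at a point, and where it exceeds a threshold, force the terrain density negative (empty). The `noise3` function below is a cheap deterministic placeholder, not the noise system the project actually uses, and the threshold is an assumption:

```cpp
#include <cmath>

// Placeholder pseudo-noise in [0, 1) -- stands in for a real noise
// function (Perlin, simplex, etc.).
float noise3(float x, float y, float z) {
  float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
  return n - std::floor(n);
}

// Carve pockets out of the terrain: where the noise spikes above the
// threshold, the density goes negative, which reads as empty space.
float carve(float density, float x, float y, float z) {
  const float threshold = 0.8f;  // assumed: higher = fewer, smaller pockets
  if (noise3(x, y, z) > threshold)
    return -1.0f;
  return density;
}
```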

Let’s add some different types of terrain. Maybe some dirt would look good.


I like that. And the tunnels look good too. They looked very mechanical in the cube world, but here in blob world they look like natural caves. It’s fun to run through them to go from one open area to the next.

You know what we need next? Let me grab the grass-scattering code I used in Part 3 of Project Frontier.


Yeah. Okay, enough fun. Now that we have a shader, let’s fix this texture mapping problem.

The problem: I don’t just want to smear texture around randomly. If I put a brick texture down, I want to be able to map it all over these lumpy hills and still have the bricks align properly. But how can we do this? We’ve got a 2D square that we’re trying to project onto complex 3D shapes.

Well, there are three ways to apply patterned textures to a scene so that you can be sure they’ll line up. First, I could project the texture along the x-axis, so that walls facing east / west look good:


You can see this makes the x-facing walls look great, at the expense of making everything else look like a big smear.

Also: I should note that these square trenches are part of my testing grounds, not my terrain system. I created these for exactly this sort of situation.

Another way to map things is to project the texture along the y-axis.


I don’t think you’d call this an improvement, unless you happened to be looking at a north-facing wall and you didn’t have any plans to turn your head.

Finally, I could project from above:


But again, this just changes who gets screwed. What we need is to switch between these systems. If only there was some way to know which one we needed…


Oh right! Those surface normals we just got done fixing. To make the above image, I just had it color the walls according to which way the normals were pointing. (Programmer talk: Actually, I’m using the absolute values of the normals and not (vector / 2) + 0.5 like you do when converting normal maps to color data, because this is closer to how I’ll be using them.)

The more something points along the x axis, the more red it has. The more along y, the more green. The more up or down, the more blue. Because our normals are normalized (remember from above when we resized them?) the components can’t all be large at once: the squares of red, green, and blue always sum to 1. So, red=1, green=1 is impossible. This means we have a nice, clear indication of which way a particular bit of wall is pointing.

So all I need to do is pick my projection method based on which axis is the predominant one. (Which color has the highest value.)

Perhaps you’re wondering why we needed shaders for this. Consider:


That vertical edge would be a bad spot. On the left edge of those polygons they’re using one projection system, and on the opposite edge they’re using another system. The texture would go crazy at this point. Maybe it would repeat sixteen times to try to reconcile the discrepancy. Maybe it wouldn’t repeat at all, leaving a smear. The corner above would be even worse: Every vertex in that triangle would be using a different projection system. That triangle is doomed.

(We could also solve this if we didn’t have the triangles share verts. If I wanted to pump three times as much data through our graphics pipeline I could give each and every triangle its very own copy of the verts to use its own projection system so that it didn’t have to share. Also, I could do ten times as much programming and make it only duplicate verts when it’s required according to the shape of who am I kidding it’s not even worth typing the rest of this sentence.)

This happens because texture mapping values are usually set at the corners of a triangle and then averaged across the face. But with a pixel shader we can use whatever projection system we like, at any time. The projection can change on a per-pixel level, instead of per-vertex.
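The per-pixel decision can be sketched on the CPU side like this. The post doesn’t show the actual shader, so treat this as an illustration of the idea rather than Shamus’s implementation; the `Vec3` type and axis conventions (z is up) are assumptions:

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Pick texture coordinates for a point by the dominant axis of its
// surface normal: project along x for east/west walls, along y for
// north/south walls, and from above for floors and ceilings.
std::array<float, 2> project_uv(Vec3 pos, Vec3 normal) {
  float ax = std::fabs(normal.x);
  float ay = std::fabs(normal.y);
  float az = std::fabs(normal.z);
  if (ax >= ay && ax >= az) return {pos.y, pos.z};  // x-facing wall
  if (ay >= ax && ay >= az) return {pos.x, pos.z};  // y-facing wall
  return {pos.x, pos.y};                            // floor / ceiling
}
```

In a pixel shader the same branch runs per-fragment on the interpolated normal, which is exactly what sidesteps the doomed-triangle problem: no vertex ever has to commit to a single projection.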

So let’s see how it looks. I’ll put a brick texture on the world to make sure the corners all match…


That texture is pretty much our worst-case scenario, and it seems to hold up pretty well, even on those round edges.

So that’s the week. Going forward: there’s an infuriating bug I can’t pin down that’s resulting in seams between nodes. (You might see really bright lines in some of the above screenshots.) It’s a bit random, but I need to nail it down before it drives me nuts.


Also, I’ve got speed issues again. Dropping the octree freed up a ton of horsepower, and then this new terrain system ate it all. At a full sprint, the program can just barely generate stuff fast enough to keep polygons under my feet. It takes a good twenty seconds to fill in a scene like this. I need to climb down into the guts of this thing and see where the cycles are going and if there’s anything I can do to help that. I’m going to be working more on terrain, and I don’t want to have to wait twenty seconds every time I make a change.


161 comments? This post wasn't even all that interesting.


  1. X2Eliah says:

    Hmnnnnng brickwall texture on curved surfaces.. MADNESS. Sorry, just my hobby-modeller side kicking in..

    Anyway, for only having basic planar mapping, it’s looking pretty decent.

    What did you mean by sunlight spreading? I mean.. sunlight is rays, it doesn’t really go around obstacles. Did you mean diffused/reflected ambient light? That might be cheaper to do by smearing out shadows at their borders, I suspect.

    Also.. Are day/night cycles, or any sort of angled lighting looking possible so far?

    • Adam says:

…diffraction. But yes, he was probably mostly referring to the lack of radiosity, such as procworld radiosity

      • Matt says:

        Man that project’s looking good… can’t wait to see what comes out of it!

        • MichaelG says:

          It does look fantastic. My big worry is that because of his approach, nothing can change in the scene! He can’t have waving grass or leaves or anything like that, since all his geometry is expensively precalculated. I’m not even sure he can change the light angle in real time. I think the shadows and lighting are all baked into the scene as well.

          It’s going to be beautiful and sort of dead when it’s done.

          • Eathanu says:

            That worked for Myst, didn’t it? =)

          • MCeperoG says:

            You should not worry. Tree leaves, grass and other plants are instanced. They are not precalculated. It is no problem to animate these, make them sway in the wind, break if you shoot them. You will get pretty much the same behavior you get in commercial games like Skyrim, Crysis, etc.

            The ambient light is precomputed, but direct light and its shadowing is realtime. The angle and color can change significantly, as long it does not invalidate the precomputed ambient. There is a lot of room for changes in lighting, especially if you bake two ambient solutions, one for day and one for night.

            What you won’t have is a fully destructible environment, or the ability to build new things. I am leaving that for a future iteration of the engine.

(Sorry Shamus for hijacking this comment, but I thought I should reply.)

    • Exetera says:

      Curved brick is a thing. One of the schools I’ve attended was in a round brick building, in fact…

      • Adam Fuller says:

        I happen to be sitting in a round brick building at this very moment! (well technically I guess it is some sort of polygon).

      • Robyrt says:

        There’s even precedent for weird brick shapes like Shamus demonstrates. The garden walls at the University of Virginia are a sine wave of bricks. It’s a cheap way to build a stable brick wall that’s only one brick wide, although obviously it doesn’t make for great houses.

      • X2Eliah says:

There is a difference between a flat brick placed in a pattern respective to surrounding bricks, an intentionally curved brick, and a flat brick texture pulled over a randomly angled surface, causing resolution deterioration and stretching.

        Likewise there are people with grossly uneven faces, but you cannot apply a symmetric face texture on a deformed random geosphere and call it adequate.

    • Submersible Scout says:

      Just one nitpick. I wouldn’t want non-opticians to get the wrong idea. Most people model light as a ray, but it’s actually a wave (and a particle). The effects of light being a wave (and a particle! again!) are so tiny that they don’t matter much, but it’s good to remember why we see diffraction and interference.

  2. DGM says:

    >> “I think I should make this into a game called “Shoot Guy Online”, which is a single-player game about a woman who fights with a sword.”

    The subtitle HAS to be “Unironic.” You and Alanis Morissette can compare notes.

    EDIT: Also, you might want to put in a break. You’ve got the entire post showing on the front page.

  3. Carlos Castillo says:

    Shamus, I think you learned AGAIN, the same lesson you learned at the end of your terrain project

    Agonizing over polygon counts with modern graphics hardware is often time poorly spent. It’s better to give the CPU a break, even if it means being sloppy and letting the GPU (your graphics card) pick up the slack.

Although it involves slightly different concepts (data size vs. polygon counts), you have been using the octree to speed up rendering of your smallest elements (chunks of terrain), when you could have used it to quickly answer a much more important question for your GPU: which chunks to render. It drove me mad a few posts ago, when you arranged your terrain cells into an essentially unaccelerated pile, but put a great deal of effort into (not actually) speeding up the rendering of the chunks.

    You did hit the nail on the head with regards to the costs/benefits of the octree, the main benefit it gives you is the ability to group a set of related elements together spatially which can be used to reduce the amount of redundant information in the set, but unfortunately you don’t see much of that benefit, since you have to unpack it when you send to the GPU, and you don’t seem to be generating it that way either. So your octree is essentially a complicated structure which you have to do more work to put stuff in (generate terrain) and take it out (send it to the GPU).

    Although your claim from 6 years ago has remained strong to the present day (GPU’s aren’t gonna get slower…), one thing that CPUs do that GPUs still can’t do, or at least find very hard to do, is to use complicated structures that require essentially random access to memory.

    • Mephane says:

      Maybe the octree is still useful for persistence? I don’t know how large a file would become that held your entire world, but given that everything is probably going to be destructible/modifiable by the player, you might have to save the entire world state into a file.

On the other hand, you could just zip the file instead, which might save even more hard disk space. Or store only the world’s seed and all subsequent changes, and recreate them when loading (which would make saving incredibly fast, but loading possibly extremely long).

      Ah, so many trade-offs to consider…

      • Carlos Castillo says:

If I’ve read the documentation correctly, Minecraft goes the zip route for storing its chunks, which makes sense: a lot of that data is redundant, both in empty space and underground, so any decent compression method (even straight-up run-length encoding) will group together the identical blocks. Since the DEFLATE family of compression algorithms (Deflate, zlib, gzip, zip) is built into Java (Jar files are essentially Zip files), it probably made coding that aspect of the file format easy.

If you think about it and squint a lot, you could almost consider Minecraft data files to be 3D PNG files.
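The run-length idea Carlos mentions is simple enough to sketch. This toy encoder is an illustration of why voxel data compresses so well (long runs of air or stone collapse to a single pair), not Minecraft’s actual format, which uses DEFLATE:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Naive run-length encoding: each (block id, run length) pair replaces
// a run of identical blocks. A chunk that is mostly air or mostly stone
// shrinks dramatically even with this simplest-possible scheme.
std::vector<std::pair<uint8_t, uint32_t>>
rle_encode(const std::vector<uint8_t>& blocks) {
  std::vector<std::pair<uint8_t, uint32_t>> runs;
  for (uint8_t b : blocks) {
    if (!runs.empty() && runs.back().first == b)
      ++runs.back().second;      // extend the current run
    else
      runs.push_back({b, 1});    // start a new run
  }
  return runs;
}
```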

    • Dasick says:

      Another lesson Shamus seems to have forgotten:

      In the future, I need to set more specific goals for projects like this. Without any guiding design, I felt a lot of my choices were arbitrary when confronted with various tradeoffs. For example, I never decided ahead of time how fast the user should be able to move around the landscape, how high or low their viewing angle was allowed to be, how far they should be allowed to see, or even what scale I was dealing with. Many games put limits on stuff like this, and deciding those limits ahead of time (or at least having an idea of how I wanted the interface to work) would have given me more to go on.

Or do you already know your limits/capabilities (courtesy of Minecraft)?

      • Shamus says:

        I stated my goals at the start of the series. I also had some un-stated ones, like huge draw distance.

        I suppose minecraft shapes expectations in the sense of “I should be able to roam around and change things without encountering loading screens”. Which seems pretty obvious.

        • Kdansky says:

          I remember that most reviews mentioned how WoW didn’t have any loading screens, or in some special cases (teleports and instances) really short ones. It was a huge achievement at the time.

Annoyingly, it’s still an achievement, and most games can’t do it. I blame the consoles with their 512 MB of RAM, which means you have to be super-conservative with it, and don’t you dare spend it on reducing loading times!

          Example: Deus Ex HR could essentially keep two or three instances of itself in RAM at all times (the last two quick-saves, for instance), and when you press F9, swap a single pointer and reload in the space of a millisecond. PC-optimized games feel a bit like that. Diablo 3 has such short loading times I don’t even know what the loading screens look like.

          • Mephane says:

I so much agree. Some games still have awkwardly long loading screens (SWTOR comes to mind lately), while Blizzard really pulls off the extra-short or no-loading-screens effort, not just in WoW but, as you mentioned, also in Diablo 3. Even transitioning between chapters, which likely requires replacing a lot of art assets (the chapters vary in landscape types), takes not much more than a second, and entering and leaving buildings, dungeons, etc. is almost instantaneous. I suppose they are doing some heavy streaming in the background, and it works pretty well.

  4. Mephane says:

    Heh, Shamus, that post title immediately reminded me of that Clean all the things picture here:

    (Btw, hilarious blog over there once you get used to the fact that the poor quality of the images is deliberate and for comedy effect.)

    • Tharwen says:

      That post is the origin of the ‘all the things’ meme. Every instance of ‘all the things’ stems from it originally, which is probably why this one reminded you of it.

  5. Mr Guy says:

    Oh project who…until recently said “Octree.”

  6. Zukhramm says:

    I know nothing about rendering 3D but just looking at it mathematically I do not understand why you need to know the normals of the vertices. They’ll neither have textures nor light reflecting off of them since that’s actually on the surface between them which you already have the normals of.

    • Stardidi says:

But how do you think he got those normals in between?
Right: by interpolating between the normals of the vertices of that triangle / surface.

    • Shamus says:

      “the surface between them which you already have the normals of.”

      I actually don’t have a surface at all. The surface comes from the verts. When you draw, you don’t describe the pixels, you describe the 3 points and tell it to draw a polygon between them. When you describe the points you provide information regarding texture mapping, color, normals, and position. The surfaces ARE the points.

      • Zukhramm says:

        Regardless, you still have the normals of the surface.

        • bigben says:

If I understand correctly, you have three points.

        • decius says:

          I think he only has the normals of the points, since each point on the surface has a different normal, interpolated from the normals of the points.

        • Andy L says:

          Having the normal value of the triangle isn’t as useful as you might think.

          If you render the entire triangle with the same normal value it will appear flat, and the joints between triangles will appear as facets or corners.

          To create a surface without apparent facets you need to interpolate a smooth set of normals. If you have the normals of the three points, it’s easy to blend between them, and each triangle can be rendered without knowledge of adjoining triangles.

          Trying to calculate the normal of an individual pixel with only the normals of the faces as given would be a challenge, and would not allow you to render a triangle without knowledge of the adjoining triangles.

          (Besides the much more complicated math, a triangle-normal-only system would also have the hidden cost of requiring you to store information about which triangles were connected as opposed to merely tangent.)

          I’m not sure I’ve explained this well, but the idea is that when the goal is to smoothly interpolate a surface, calculating vertex normals and then blending between them is by far the easiest solution.

    • JTippetts says:

Shading using vertex normals is commonly called “smooth shading”, whereas shading using only the surface normal is commonly called “flat shading”. Smooth shading is used to approximate a smoothly curved surface. 3D rendering uses low-detail representations of high-detail objects, but if you do flat shading then the actual low resolution of the thing becomes obvious. Consider this image:

      The object on the left is using smooth shading, the object on the right flat. They are the same exact object (the one is a duplicate of the other). You can see how using surface normals only results in a blocky look.

      • Zukhramm says:

        Is there anything else other than shading they’re useful for?

        • JTippetts says:

          Tri-planar texturing (basically what Shamus is doing), normal mapping, environment mapping, etc… Vertex normals are key components of the graphics pipeline.

          Note that having the surface normal means nothing. Even with flat shading, you have to have vertex normals. The way the rendering pipeline works is that you feed it a stream of vertices, and it will assemble a triangle once it gets 3 vertices. It doesn’t have any per-triangle data; only per-vertex data. So in order to do flat shading, you still have to supply vertex normals; it’s just that all 3 vertex normals will point in the same direction.

          For each fragment (pixel) the pipeline will interpolate the 3 vertices of the triangle that “owns” the fragment. If the normals are identical, then the interpolated normal will also be identical to the 3 vertex normals, and the triangle will be flat shaded. If the vertex normals point in different directions, then the interpolated normal will point in some direction that is somewhere between the three; a weighted average of them, if you will.

          This normal can be further manipulated in the fragment shader; normal mapping is a technique that lets you modify this interpolated normal based on the value of a texture look-up into a normal map texture. The texel sampled from the texture is used to “move” the normal around, allowing the essentially flat triangle to represent a smoothly curved surface through manipulating the normal.

          For environment mapping, the normal is used to index a special kind of texture called a cube map. A cube map is actually a set of 6 textures forming the faces of a cube that conceptually encloses a 3D object. The x, y and z components of a vector are used to index the texels of the cube map in such a fashion that the cube map is sampled from the location that the vector “points at”. This is used, for example, to create the illusion of reflection on shiny metallic surfaces. The cube map is a texture representing the surrounding environment, and in the fragment shader it is sampled using a vector that is generated based on the view direction and the interpolated and normal-map-modified fragment normal, to obtain a color value for the reflection, and this color is blended with the surface texture sample for the given fragment. The view direction vector is reflected based on the interpolated normal vector.
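The interpolation step described above can be sketched directly: the pipeline blends the three vertex normals with per-fragment weights, then re-normalizes. This is an illustration of the concept, with the vector type and function names invented for the example; identical vertex normals fall out as flat shading, differing ones as smooth shading:

```cpp
#include <cmath>

struct N3 { float x, y, z; };

// Weighted blend of the three vertex normals (weights are the fragment's
// barycentric coordinates, summing to 1), re-normalized so the result is
// a unit vector again.
N3 interp_normal(N3 a, N3 b, N3 c, float wa, float wb, float wc) {
  N3 n = { a.x * wa + b.x * wb + c.x * wc,
           a.y * wa + b.y * wb + c.y * wc,
           a.z * wa + b.z * wb + c.z * wc };
  float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
  n.x /= len; n.y /= len; n.z /= len;
  return n;
}
```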

        • Oleyo says:

          Yeah, they pretty much determine everything:

Which way the face being drawn… er, faces (aka facing the camera or not).

          How much light the vertex receives, and therefore how the face constructed is lit/shaded (even if it is facet shaded, the normals are required to determine the “face” normal direction).

          So the normals must exist at some stage before rendering. Essentially without normals you could only have wireframe renders, like Vectrex (if it were rasterized)

  7. Alex says:

    I think the graphics card image needs a pair of sunglasses in this context.

  8. MCeperoG says:

I am a bit surprised about the issues you and Michael found in octrees. It is not clear to me why you need to traverse up to locate neighbor cells. You can traverse the octree in a way that each cell, face, edge or vertex is visited only once. For faces, edges and vertices you can get all the neighbors at no expense, even if they are in very different branches of the octree. This paper has a good explanation of the algorithm:

    This is about Dual Marching Cubes, but the octree traversal stuff is generic. You can find a description around pages 22,23 I think.

    I am currently using this kind of approach. I have nodes with millions of cells inside. They take less than a second to generate. Most of the time is spent evaluating the implicit functions for the content. Octree processing is nowhere in the time charts. They really helped in my case.

    • Shamus says:

      Interesting stuff. I only just skimmed it, but it looks like you’re using an octree to access existing data. You’ve got a big grid of data in memory, and you’re using an octree over top of that. That’s a bit different from what Goodfellow and I are (were) doing, which is allocating dynamically using the octree. The data is stored in pointers at the end of the branches, which themselves are pointed to by the trunk.

      • MCeperoG says:

        I do not have a regular grid. Just like you were, I am using the octree to store and generate the data. As more data is discovered the octree is refined down until I hit a maximum resolution (which is about one centimeter for some materials)

        Anyway I am not sure if any of this will help with your problem. Octrees are proven acceleration techniques, maybe you can transform your problem so it aligns with what octrees do well and avoid their pitfalls. There is a lot to gain: Project Octant is a cool name, would be a shame to lose it.

        • Shamus says:

          Looks like I have some reading to do.

          • void says:

            I mentioned it on a previous one of your Project Octant postings a little after the fact, so I’ll bring it up again: Sauerbraten is a high end 3d engine that runs off of octree based geometry. In my opinion it’s unfortunate it’s wasted on their Quake 3 clone, but it’s impressive tech nonetheless.

            (I sometimes toy with trying to hammer the graphics engine into working with a more competent environment, but … megh. So much work for a very haphazard codebase.)

            Point is, it’s Really Freaking Fast, and does real-time mesh deformation. People cooperatively edit maps that have fairly highly refined details.

      • Rod Spade says:

        Dynamic memory allocation is often expensive. Perhaps a custom allocator would help. You know, if you want to bang your head on lots more arcane stuff.

        • MCeperoG says:

          Right, dynamic allocation is a killer.

          It is faster and simpler just to assume worst case scenarios are the normal operating conditions for your code and work your way down from there. For adaptive techniques like octrees this makes a lot of sense. You can reserve a memory budget and the octree will refine down as long as memory allows.

          It gets a little freaky if you need to alter the octree after it is generated, but even then it still beats dynamically allocated nodes.

        • Jamie Pate says:

My approach to avoiding allocation was to preallocate a queue of objects in an array (they must all be the same size, so you lose some OOP-ness), then pile them into a queue and pull them from it as needed.

Of course this requires a bunch of preallocated space, but sometimes it makes sense to do it that way.
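The preallocate-and-reuse pattern described above can be sketched as a fixed pool with a free list. This is an illustration of the general technique, with the object type and pool size invented for the example; the allocator is never touched after construction:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Node { int data[16]; };  // placeholder payload, fixed size

// All objects live in one array allocated up front; acquire/release
// just push and pop indices on a free list, so no per-frame allocation.
class Pool {
  std::vector<Node> storage;
  std::vector<size_t> free_list;  // indices of unused slots
public:
  explicit Pool(size_t n) : storage(n) {
    for (size_t i = 0; i < n; ++i) free_list.push_back(n - 1 - i);
  }
  // Returns an index into the pool, or SIZE_MAX if the pool is exhausted.
  size_t acquire() {
    if (free_list.empty()) return SIZE_MAX;
    size_t i = free_list.back();
    free_list.pop_back();
    return i;
  }
  void release(size_t i) { free_list.push_back(i); }
  Node& get(size_t i) { return storage[i]; }
};
```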

    • MichaelG says:

      If you are working from the top down, the overhead of the Octree is negligible. I got stuck in two places. One, I was trying to trace rays from a block through the grid, so I had to come in from the top on each step of the ray.

      Two, I was generating the Octree from heightmap data, and did it the naive way, using my setLeaf method. If I had regenerated the tree from the top, testing above or below the height level at each node, this would have gone faster.

      What finally decided me was the realization that I wasn’t saving much storage or time with an Octree. I work in 32 by 32 by 32 chunks of cells, and 32K memory isn’t a lot. For the things I was doing (looking for adjacent blocks), the Octree didn’t save much time either.

  9. swenson says:

    Well, we all knew this day would come eventually, or suspected it (at least after I started reading Sea of Memes after someone linked it in these comments a while back).

    Anyway, I think your solution for how to project textures looks pretty fantastic. I’m seriously impressed by how good the brick texture looks, especially because in the “real” world you probably would want to make brick blocks be all “hard” anyway, so you wouldn’t get rounded edges. (Actually, I don’t know if you’d want to do this, but it would make sense, as bricks would never be naturally occurring, so it wouldn’t be a problem if they looked somewhat unnatural.)

    I do have a question. If you remove/place a block (which I assume you can do?), how long does it take to update and reshape the affected blocks around it?

    • Shamus says:

      It’s been a little while since I clocked it, but I think the node-building takes ~ a tenth of a second. It’s interactive, but when I alter a block at the very corner of a node (where eight nodes all share a point) it’s a bit slow to update the neighbor nodes. I haven’t figured out if this is because node-building takes too long, or it’s just a scheduling problem.

  10. Thomas says:

    The stuff you create in these things always looks so cool, it makes me jealous :D

  11. Dasick says:

    Wow Shamus, those brick textures looking good. I don’t know if it’s just the broken terrain, but I don’t see any repeats.

  12. Piflik says:

    That kind of texture projecting would have been exactly what I would have recommended. At least a variation of it. I use it in 3ds Max extensively for noisy stuff (rocks and the like). I had the idea from Neil Blevins, he called it Blended Box Mapping.

    I’d love to have a look at the shader…doing that in realtime would be interesting (and maybe I’d be able to translate it to Unity ShaderLab)

  13. decius says:

    What does that brick texture look like on a flat surface?

    It does break down where it ‘blends’ into the grass, but only because artificial textures should not typically blend, but have sharp edges or custom blending textures. Grass blends with dirt, but either stops at the edge of bricks or grows up between them.

  14. Paul Spooner says:

    Needs a “more” tag, or “(read the rest of this stuff…)” or whatever you kids use these days. Currently this post is monopolizing your front page.

Lots of work on aesthetics here. The texture blending looks like a solved problem, along with the surface blending. Just be careful not to over-optimize while you’re developing. Unless you enjoy that kind of thing. It’s just looking like you’re polishing the casting, and there’s still a lot of machining left to do.

  15. Jamie Pate says:

    you might want to keep the octree, but perhaps use 16x16x16 blocks (or larger) of data as the smallest octree node instead of 1x1x1

    could save you a ton of iteration, which is actually the best reason to go with octrees (imagine iterating over every empty sky block… is this empty? yes, is this empty? yes, is this empty? yes etc)

    If you wanted to go whole hog you could also do your block => smooth calculations in the gpu with a geometry shader. That way you would only have to send a massive 3d texture of data instead of all those messy vertices :)

  16. Bryan says:

    Seams between nodes, as in normals pointing in different directions on the polygons that join at that seam? Or actual space between the polygons?

    If the former, I wouldn’t be surprised if it was the floating-point equality thing I mentioned a few days back, unless you’ve pulled in a fix for that…

    See also if you’ve never read that particular doc before (though I’d be surprised). And it does seem to imply that my fix isn’t really correct either (it doesn’t correctly handle very small numbers). Hmm. :-/

  17. WJS says:

    That blending shader… OK, now I’m really curious how that compares to the HL2 one mentioned in Part 6. I was curious how such a thing might work when it was first mentioned, but now I really want to know.
