Let’s Code, Part 4

By Shamus Posted Sunday Dec 19, 2010

Filed under: Programming 13 comments

Craft of Craftcraft is done!

Actually, that was a lie. It is, in fact, not remotely done. Which is good, since you can read about the steps being taken to correct the fact that it is not done.

In this entry Michael explains the z-buffer. Let me take a crack at the same thing, because I enjoy doing it…

So you’re there, rendering polygons, and you want to make sure that stuff in the background is properly obscured by the things in front of it. In the old days, it was a lot like painting with oil-based paints. You had to paint the far away details first, and gradually layer on things closer and closer to the viewer, covering up the stuff you painted before.

Of course, we’re talking about computer rendering here, so we’re trying to make a new painting 30 times a second. (Eat your heart out, Bob Ross.) The problem with drawing the world back-to-front is that players insist on moving around all the time. This means that you have to keep re-calculating the distances to all of the stuff to be rendered and then re-sorting it. That’s expensive even today, and back in the (very) early 90s, computers had even less CPU time to waste on this sort of business.
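The back-to-front approach can be sketched in a few lines of Python. (The scene objects and the draw callback here are invented for illustration; a real renderer would be sorting polygons, not labeled dictionaries.)

```python
import math

def paint_scene(camera, objects, draw):
    """Painter's algorithm: re-sort by distance every frame and draw
    far-to-near, letting nearer objects paint over farther ones."""
    far_to_near = sorted(objects,
                         key=lambda o: math.dist(camera, o["pos"]),
                         reverse=True)
    for obj in far_to_near:
        draw(obj)

# Example scene: record the order things get painted in.
order = []
paint_scene((0.0, 0.0, 0.0),
            [{"name": "tree", "pos": (0.0, 0.0, 5.0)},
             {"name": "mountain", "pos": (0.0, 0.0, 50.0)}],
            lambda o: order.append(o["name"]))
# The mountain gets painted first, then the tree covers part of it.
```

That `sorted()` call is the expensive part: it has to happen again every single frame, because the camera moved.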

Enter the z-buffer. (Sometimes called the depth buffer, because programmers are never happy until they have eleven different names for everything.) Imagine that we have two canvases. One is the rendering buffer, where we will paint our picture.


The other is the z-buffer, where we will record how far away things are. Now, the z-buffer is never shown to the user, but you can think of it as a grayscale image.


Every time we draw a pixel in the render buffer, we draw a pixel in the z-buffer as well, recording how far away that pixel is. In our example, farther = brighter. If we ever run into a situation where we would be drawing a brighter pixel over a darker one in the z-buffer, then we skip drawing it, and neither the z-buffer nor the render buffer is changed.

Once the painting is done, you wipe the z-buffer clean and start over.
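In code, that per-pixel rule is tiny. (A toy Python sketch; here a smaller depth value means nearer, which is “darker” in the grayscale analogy above, and the buffers are plain lists rather than real image memory.)

```python
def plot(x, y, color, depth, render_buffer, z_buffer):
    """Draw one pixel, but only if nothing nearer is already there."""
    if depth < z_buffer[y][x]:          # nearer than what's recorded?
        z_buffer[y][x] = depth          # record the new depth
        render_buffer[y][x] = color     # and paint the pixel
    # otherwise: skip it; neither buffer changes

W, H = 4, 3
render_buffer = [[None] * W for _ in range(H)]
z_buffer = [[float("inf")] * W for _ in range(H)]   # wiped clean each frame

plot(1, 1, "red", 10.0, render_buffer, z_buffer)    # far pixel: drawn
plot(1, 1, "blue", 2.0, render_buffer, z_buffer)    # nearer: overwrites it
plot(1, 1, "green", 5.0, render_buffer, z_buffer)   # behind: rejected
```

No sorting anywhere, which is the whole appeal: polygons can arrive in any order.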

But that only solves half of your problems. If you want to draw transparent stuff (like windows) then you need both a z-buffer and sorting to make it look right. Check out the Let’s Code entry to read the rest of that puzzle.



13 thoughts on “Let’s Code, Part 4”

  1. Zagzag says:

    Thanks for the link! These have been really interesting, and without the link here I’d never have found it! Your addition is just as interesting.

  2. mewse says:

    Back in the old days, we had to implement z-buffers in software, while of course, with the advent of 3D-accelerated video cards they’re now implemented on the video card itself.

    The interesting thing, though, was that z-buffers weren’t the only way to accomplish this “draw only the nearest polygon in each pixel” thing; there were lots of other approaches which worked in very different ways, each with strengths and weaknesses (z-buffers were commonly looked down upon at the time, as they were the “brute force” approach, and used far too much memory and CPU time on the computers of the day).

At about the same time that the first 3D-accelerated video cards were coming out, a serious contender to the z-buffer was proposed; it worked just as well as the z-buffer, but required far less memory and CPU time. And if that wasn’t enough, it also sped up your rendering code, by working out visibility for every pixel on the screen before you ever drew a single pixel, which meant that you’d never draw to a single pixel twice in one frame. (Drawing pixels was by far the slowest part of rendering back then; it wasn’t until the 3D-accelerated cards were released and popular that ‘number of polygons’ became an important performance concern.)

    This new contender was called an “s-buffer” (short for “span-buffer”).

Where the z-buffer stores a depth value for each pixel, the s-buffer stored information for each horizontal scanline of pixels, showing which polygon was visible for each “span” of pixels within that scanline. So if a single polygon was visible in a horizontal span of forty pixels, the s-buffer would contain just two pieces of information (the depth at the two edges of the polygon), whereas the z-buffer would contain forty pieces of information (the depth value at each of those forty pixels within the polygon).

    In practice, the s-buffer approach was vastly more complicated than the z-buffer. But it had a lot of advantages: It used a lot less memory. It meant that you never had to draw to the same pixel twice; you’d fill out the s-buffer completely before you even began to draw the scene, so by the time you were actually rendering, you’d already know which polygon was visible for which pixel. And it meant that you could render your whole screen line by line, with each line drawing from left to right; this made your rendering code vastly simpler than the z-buffer.
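A heavily simplified sketch of that span bookkeeping, in Python. (This assumes one flat depth per span instead of interpolated depths, and it ignores the case where existing nearer spans should clip the incoming one; a real s-buffer handles both, which is exactly where the complexity mewse mentions comes from.)

```python
def insert_span(scanline, new):
    """Insert (start, end, depth, poly) into a scanline's span list.
    End is exclusive; a smaller depth means nearer. Existing spans that
    the new span occludes get clipped; untouched spans are kept whole."""
    result = []
    for old in scanline:
        s, e, d, p = old
        ns, ne, nd, _ = new
        if e <= ns or s >= ne or d <= nd:
            result.append(old)              # no overlap, or old is nearer
            continue
        if s < ns:
            result.append((s, ns, d, p))    # piece left of the new span
        if e > ne:
            result.append((ne, e, d, p))    # piece right of the new span
    result.append(new)
    return sorted(result)

line = insert_span([], (0, 40, 9.0, "wall"))        # one forty-pixel span
line = insert_span(line, (10, 20, 2.0, "pillar"))   # nearer span splits it
# "line" now holds three spans where a z-buffer would hold forty depths.
```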

    I occasionally wonder whether, if 3D-accelerated video cards had first come out after s-buffers had become a little more well-known, whether all our video cards would be using s-buffers now instead of z-buffers. Probably not, since memory is cheap now and z-buffers are fantastic for our modern shaders in a way that s-buffers simply wouldn’t be.

    But it’s fun to think about what might have been. :)

    1. Simon Buchan says:

      This technique lives on in CGI as “scanline rendering”, used for most of what you see on screen that doesn’t involve light distortion, like reflections, refractions, heat waves, etc….

      Unfortunately, this still doesn’t solve the *real* problem with depth, which is transparency. Depth-peeling still seems to be the suggested way to do this, but I’m uncomfortable about it with a lot of very transparent effect polys flying everywhere in modern scenes. Oct-treeing all your objects lets you quickly sort them, so you could peel an object at a time to however many layers it needs, I guess. This wouldn’t handle object intersections, but that can be designed around.

  3. MichaelG says:

    Thanks Shamus! Pretty pictures…

  4. Jarenth says:

    Shamus you crafty liar you.

    I’ve never even noticed before that Minecraft glass and leaf blocks don’t draw the ‘unseen’ faces. But then again, the fact that I’ve never been bothered by it probably indicates it wasn’t a bad choice.

    1. Slothful says:

      I kinda thought of it more as a feature, since if Glass blocks drew the unseen faces, then a sizeable chunk of glass would be rendered totally opaque.

      1. Will says:

        Glass blocks not drawing ‘far’ faces is indeed deliberate, for exactly that reason.

      2. Primogenitor says:

There are two kinds of “unseen”: interior (between glass and glass) and exterior (between glass and air on the other side of the object). I guess in Minecraft a solid glass object and a hollow glass object (with air inside) would look exactly the same, because you would never see the inside exterior, since you’d be looking through some glass. Hmm, I wonder what you see if you look through multiple separate pieces of glass separated by air…

  5. decius says:

    For simple cases of either complete opacity or complete transparency, could you simply have transparent pixels not update the z-buffer? Render polygons with partial transparency last, respecting the z-buffer, or figure a way to draw behind a drawn partially transparent object.

I can’t figure an easy way that would properly handle the complex case of shuffling a stack of playing cards of different colors and transparency.

    1. Simon Buchan says:

      Yep, and this is called an “alpha-test”, and is handy when you can get away with it, but without high AA this can look incredibly bad in a lot of cases :(. For example, you would think text (for example, on a signpost) would be a good option for this, since it’s either there or not, but if you try it the complete lack of an edge is glaring beside the blurred texture it’s over. This is also why fences “shimmered” in pretty much every game between ’98 and ’04.
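In code, the alpha test amounts to one extra branch in the pixel-plotting routine. (A toy Python sketch; the 0.5 cutoff and the buffer layout are made up for illustration.)

```python
ALPHA_CUTOFF = 0.5   # made-up threshold: below this, the pixel is "not there"

def plot_alpha_tested(x, y, color, alpha, depth, render_buffer, z_buffer):
    """Alpha test: fully skip low-alpha pixels (they never touch either
    buffer), and treat everything else as opaque with a normal depth test."""
    if alpha < ALPHA_CUTOFF:
        return                            # hole in the texture: draw nothing
    if depth < z_buffer[y][x]:            # ordinary z-buffer comparison
        z_buffer[y][x] = depth
        render_buffer[y][x] = color

W, H = 2, 2
render_buffer = [[None] * W for _ in range(H)]
z_buffer = [[float("inf")] * W for _ in range(H)]

plot_alpha_tested(0, 0, "leaf", 0.9, 3.0, render_buffer, z_buffer)  # drawn
plot_alpha_tested(0, 0, "gap", 0.1, 1.0, render_buffer, z_buffer)   # skipped
```

The hard on/off cutoff is also why the edges look so harsh without antialiasing: there is no in-between pixel, ever.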

      You can draw behind a transparent object (you just flip the color contribution ratio), so long as you sort all your transparent objects from nearest to furthest :) (This sounds pretty useless, but it’s the basis for the fancy-pants depth peeling technique). The *real* holy grail of rendering would be depth-*independent* rendering, being able to render a transparent object without having to know what it has to be in front of or behind.
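The sorted back-to-front blending (the “over” operator) looks roughly like this; the colors and distances are invented example values, and a real renderer would do this per pixel after the opaque pass.

```python
def over(src, src_alpha, dst):
    """The 'over' operator: src*alpha + dst*(1 - alpha), per channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src, dst))

# Two half-transparent windows in front of an opaque white background.
surfaces = [
    (5.0, (0.0, 0.0, 1.0), 0.5),   # (distance, RGB, alpha): blue, nearer
    (9.0, (1.0, 0.0, 0.0), 0.5),   # red, farther away
]
pixel = (1.0, 1.0, 1.0)            # the opaque pass already drew white here
for _, color, alpha in sorted(surfaces, reverse=True):  # far-to-near
    pixel = over(color, alpha, pixel)
```

Swap the order of the two windows and you get a different (wrong) color, which is exactly why the sort is mandatory.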

      1. Zak McKracken says:

Say, would that not be solved by having an expanded z-buffer?
For each pixel, not only a single depth value is stored, but all depth and alpha values in front of the closest opaque pixel. If a new transparent pixel arrives, it is added if it is in front of the closest opaque one. If an opaque pixel arrives, it is added under the same condition and everything behind it is thrown away. At the same time, you’d need to store the corresponding colour information for all pixels in the z-buffer. Then, when you’re through with everything, you go through the z-buffer, back to front, and add up the values.

        Maybe just doing raytracing would be quicker still if you used the appropriate (not really existing …) hardware.
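Zak’s expanded z-buffer is essentially what offline renderers call an A-buffer: a list of fragments per pixel. A single-pixel Python sketch of the idea, with made-up depths and colors (smaller depth = nearer, alpha 1.0 = opaque):

```python
def add_fragment(fragments, depth, color, alpha):
    """Keep a fragment only if it's in front of the nearest opaque one;
    a new opaque fragment throws away everything behind it."""
    opaque_depth = min((d for d, _, a in fragments if a >= 1.0),
                       default=float("inf"))
    if depth >= opaque_depth:
        return fragments                 # hidden behind an opaque fragment
    fragments = fragments + [(depth, color, alpha)]
    if alpha >= 1.0:                     # new opaque: discard what's behind
        fragments = [f for f in fragments if f[0] <= depth]
    return fragments

def resolve(fragments, background):
    """Blend the surviving fragments back-to-front into a final color."""
    pixel = background
    for _, color, alpha in sorted(fragments, reverse=True):  # far-to-near
        pixel = tuple(c * alpha + p * (1.0 - alpha)
                      for c, p in zip(color, pixel))
    return pixel

frags = []
frags = add_fragment(frags, 10.0, (1.0, 1.0, 1.0), 1.0)  # opaque white wall
frags = add_fragment(frags, 5.0, (1.0, 0.0, 0.0), 0.5)   # red glass in front
frags = add_fragment(frags, 12.0, (0.0, 1.0, 0.0), 0.5)  # hidden: rejected
```

The catch is exactly what Simon describes below: the list per pixel is unbounded, which is an awkward fit for how GPU memory wants to work.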

        1. Simon Buchan says:

          John Carmack thinks raytracing hardware is the way, he’s claimed his next next engine (after Rage) is a raytracing based engine, so you’re in good company with that opinion :).

Multiple output buffers isn’t a bad idea, after all, brute forcing is what video cards are good at :) – but current hardware is much more ‘planar’ than deep, meaning when you are drawing a pixel, you can only really read and write to one set of color, alpha, and depth buffers at once (as far as I am aware, at least?). Combining has to be done all at once, by using the output as a texture in a later drawing step, meaning it *seems* that you can’t have a sorted list per pixel, since you have no idea what the other buffers have and therefore whether you should be drawing this pixel on this triangle on this buffer. Sucks, huh? But these annoying restrictions on what can be read are why they can draw so fast, so take what you can get, I guess.

Depth peeling is actually kinda close in concept, but done serially rather than in parallel: it’s essentially using two depth buffers, a ‘near’ and ‘far’ depth, to mask out progressively further layers of transparency – ‘peeling’ the scene. But it comes with its own problems, of course….

  6. Pete Zaitcev says:

    You graphics guys are monsters. And here I thought that consistent distributed storage in the cloud was challenging.
