Project Unearth Part 2: Skimming Hazzard

By Shamus Posted Thursday Jun 26, 2014

Filed under: Programming 37 comments

You know what they say about when you assume something? When you assume, you might save time by using extrapolated information and pattern recognition to quickly zero in on the solution to the problem. No? That’s not what people say about making assumptions? I just kind of assumed people would… oh.

Well, my assumptions did in fact make an ass of me[1] this time. I’ve read about various shadow techniques and I thought I had a decent handle on what I wanted to do. So when I read up on the theory I got to the halfway point and was like, “Yeah, yeah. I got this,” and started skimming.

Like many 3D concepts, this one begins with a crappy diagram:

unearth_volume1.jpg

Here we’ve mooshed the problem down to two dimensions to make it easier to depict. (And spoiler warning: Also easier to misinterpret if you’re skimming because you think you know what you’re doing already.) On the left we have a light source. The green box is an occluder. On the right we have a wall that, if our technology works right, will have a big ol’ shadow right in the middle thanks to the occluder.

I’ve looked at a half dozen diagrams just like this one in the past couple of weeks. Light, occluder, and a big object to receive shadows. It seems silly to make ANOTHER version of the same stupid diagram, but that’s what we’re doing.

As I’ve mentioned in the past, objects are made up of vertices, and vertices have surface normals. Here are the normals for this scene:

unearth_volume2.jpg

So what we do is set up a special rendering pass. Before we draw the lighting, we’re going to mask out the areas where the light can’t go. The concept is that you look at the direction of the normal. If it’s facing away from the light, then we calculate a new direction, one going from the light to the vertex:

unearth_volume3.jpg

We take those points and give them a great big shove along this new vector. And by “big shove” I mean “all the way to infinity”. Now we draw our extruded shapes.

unearth_volume4.jpg

Note that we would do the exact same thing to the yellow box as well. We do this for any and all shapes in the scene, and they will all shadow themselves and each other without us needing to figure out the relationships between the objects or calculate which objects are occluding others. I’m just showing the green box to keep this simple.
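
In code terms, that “big shove” is a homogeneous-coordinate trick: emit the light-to-vertex direction with a w of zero, and the perspective divide pushes the point out to the far plane. Here’s a quick sketch of the per-vertex idea as described so far (using glm; the function and variable names are my own):

#include <glm/glm.hpp>

// Per-vertex extrusion, as described above. If the vertex normal faces away
// from the light, shove the vertex "to infinity" along the light-to-vertex
// direction by emitting that direction with a homogeneous w of zero.
glm::vec4 extrude_vertex(const glm::vec3& vertex,
                         const glm::vec3& normal,
                         const glm::vec3& light_pos)
{
    glm::vec3 to_vertex = glm::normalize(vertex - light_pos);
    if (glm::dot(normal, to_vertex) > 0.0f)
        return glm::vec4(to_vertex, 0.0f);   // facing away: project to infinity
    return glm::vec4(vertex, 1.0f);          // facing the light: leave it alone
}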

So anyway. Everywhere that the extruded green shape is further from the camera than the normal yellow shape is a spot where the light can’t reach. We use this rendering pass to stencil out the next pass, where we draw the lights themselves:

unearth_volume5.jpg

Awesome, right?

Well, it would be nice if this worked, but everything I just showed you is a clumsy oversimplification of the actual technique. Don’t implement this. It won’t work.

unearth_volume6.jpg

The frustrating thing about this is that it nearly works. It works for the various illustrations that people have made to describe how you do this kind of lighting, including mine. It’s so close to working that I spent a lot of time looking for bugs in my shadowing system instead of questioning the design I was using.

The problem becomes obvious if you don’t use this platonic ideal illustration. Consider a slightly different arrangement of objects:

unearth_volume7.jpg

In this case, only one vertex is facing away from the light. Which means our shadow ends up shaped like this:

unearth_volume8.jpg

When it should be this:

unearth_volume9.jpg

It’s sad. What I implemented produces “pretty good results” a lot of the time. From a lot of lighting angles, it can look correct. And even when it’s wrong, it often looks “right enough” that the eye doesn’t question it. But from other angles it’s horribly and obviously wrong and dumb and bad. It makes all these thin little sliver shadows that aren’t shaped at all like the thing casting the shadow. On block terrain, you end up with shadows that look like they’re being projected by a jagged wall of pikes.

unearth_volume10.jpg

This is sad-making. The easy way is really easy, and almost works. It looks right (or good enough) 80% of the time. But that last 20% looks really wrong. And to do things right – to fix that last 20% – is about four times the work. The pipeline is more complex, you need another shader, and it requires a lot more graphical horsepower. (My way is super-cheap.)

It sucks when you have a job where a majority of the work goes into a minority of the benefit, but I suppose I can’t call myself an engineer if I’m not willing to put up with stuff like this.

The actual solution – which I finally discovered once I went back and read the theory with a little more care – is that you can’t get away with simply shoving single points. You have to shove entire polygons. Specifically, you need to find polygons that are right on the edge, having some points towards the light and others away from the light.

unearth_volume11.jpg
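
Boiled down, “right on the edge” means an edge shared by one triangle that faces the light and one that doesn’t. A rough sketch of that test (face normals from cross products, counter-clockwise winding assumed, names mine):

#include <glm/glm.hpp>

// Does the triangle (a, b, c) face the light? Assumes counter-clockwise winding.
bool faces_light(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                 const glm::vec3& light_pos)
{
    glm::vec3 normal = glm::cross(b - a, c - a);
    return glm::dot(normal, light_pos - a) > 0.0f;
}

// An edge shared by two triangles is a silhouette edge (and gets extruded into
// a shadow quad) when exactly one of the two triangles faces the light.
bool is_silhouette_edge(bool this_triangle_faces_light, bool neighbor_faces_light)
{
    return this_triangle_faces_light != neighbor_faces_light;
}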

Once I’m done bellyaching, I realize this is actually a boon for me. Sure, it’s more work. And it’s a lot more complicated. But doing this requires the use of a geometry shader, and learning about those is one of the goals of the project!

I’ve described shaders before. Here’s a diagram I made a couple of years ago:

octant11_1.png

Geometry shaders go between these two steps. So after the vertex leaves the vertex shader it’s available to the geometry shader. (If you’re using one.) The geometry shader is a crazy thing. It takes primitives as inputs. (Triangles, lines, or even just lone vertices.) In our above diagram, the geometry shader would sit between steps 3 and 4.

The other two shaders produce the same type of output every time, but the geometry shader can generate all kinds of different stuff. It can mind its own damn business and pass the values along without doing anything. Or it can make additional changes. Or it can generate entirely new primitives, creating new triangles based on whatever cockamamie system the programmer has devised.
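
On the CPU side, adding one to the pipeline is just a third shader object compiled and attached before the program is linked. A minimal sketch (error checking omitted; I’m assuming a loader like GLEW for the OpenGL 3.2 entry points):

#include <GL/glew.h>

// Build a program with a geometry shader sitting between the vertex and
// fragment shaders. The three source strings are supplied by the caller.
GLuint build_program(const char* vs_src, const char* gs_src, const char* fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);

    glShaderSource(vs, 1, &vs_src, NULL); glCompileShader(vs);
    glShaderSource(gs, 1, &gs_src, NULL); glCompileShader(gs);
    glShaderSource(fs, 1, &fs_src, NULL); glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, gs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    return program;
}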

In our case, it looks for edge triangles and turns them into groups of triangles like so:

unearth_shadow_volume.png

I used this tutorial as a guide. It turns out that shader has a bug in it. It doesn’t cap the end of the shadow (the bright magenta triangle in the above diagram), which can leave odd holes in your shadows. If you’re here via Google and just want the goods, then here is a fix I came up with:

/*-----------------------------------------------------------------------------
This shader takes a triangle of type GL_TRIANGLES_ADJACENCY. It takes the
following form:
 
                 1-----2-----3
                  \   / \   /
                   \ /   \ /
                    0-----4
                     \   /
                      \ /
                       5
Points 0, 2, and 4 are the points of the triangle actually being drawn. 
Points 1, 3, and 5 are corners of adjacent triangles, provided by OpenGL 
for the purposes of being able to analyze the topology here in a geometry
shader.
 
We're looking for triangles that fall along the edge between surfaces facing
towards the light, and others facing away from it.
When we find such a triangle, we extrude it, turning the 0, 2, 4 triangle 
into a tube that reaches the far clip plane. This makes the shadow volume 
for this triangle, and is used to stencil out regions that the light can't 
reach.
-----------------------------------------------------------------------------*/
 
#version 150 compatibility
 
#define gVP           gl_ModelViewProjectionMatrix
#define gLightPos     uni_light_position
 
layout (triangles_adjacency) in;
layout (triangle_strip, max_vertices = 18) out;
 
in vec3 WorldPos[];
 
uniform vec3        uni_light_position;
uniform mat4        gl_ModelViewProjectionMatrix;
 
float EPSILON = 0.01;
 
void EmitQuad(int StartIndex, vec3 StartVertex, int EndIndex, vec3 EndVertex)
{
    vec3 LightDir = normalize(StartVertex - gLightPos);
    vec3 l = LightDir * EPSILON;
    gl_Position = gVP * vec4((StartVertex + l), 1.0);
    EmitVertex();
 
    gl_Position = gVP * vec4(LightDir, 0.0);
    EmitVertex();
 
    LightDir = normalize(EndVertex - gLightPos);
    l = LightDir * EPSILON;
    gl_Position = gVP * vec4((EndVertex + l), 1.0);
    EmitVertex();
 
    gl_Position = gVP * vec4(LightDir, 0.0);
    EmitVertex();
 
    EndPrimitive(); 
}
 
void main()
{
  //We take our 6 points and create six edges.
  vec3 e1 = WorldPos[2] - WorldPos[0];
  vec3 e2 = WorldPos[4] - WorldPos[0];
  vec3 e3 = WorldPos[1] - WorldPos[0];
  vec3 e4 = WorldPos[3] - WorldPos[2];
  vec3 e5 = WorldPos[4] - WorldPos[2];
  vec3 e6 = WorldPos[5] - WorldPos[0];
 
  //This gives us the normal of this triangle.
  vec3 Normal = cross(e1,e2);
  vec3 LightDir = gLightPos - WorldPos[0];
 
  if (dot(Normal, LightDir) > 0.000001) {
 
    Normal = cross(e3,e1);
 
    if (dot(Normal, LightDir) <= 0) {
        vec3 StartVertex = WorldPos[0];
        vec3 EndVertex = WorldPos[2];
        EmitQuad(0, StartVertex, 2, EndVertex);
    }
 
    Normal = cross(e4,e5);
    LightDir = gLightPos - WorldPos[2];
 
    if (dot(Normal, LightDir) <= 0) {
      vec3 StartVertex = WorldPos[2];
      vec3 EndVertex = WorldPos[4];
      EmitQuad(2, StartVertex, 4, EndVertex);
    }
 
    Normal = cross(e2,e6);
    LightDir = gLightPos - WorldPos[4];
 
    if (dot(Normal, LightDir) <= 0) {
      vec3 StartVertex = WorldPos[4];
      vec3 EndVertex = WorldPos[0];
      EmitQuad(4, StartVertex, 0, EndVertex);
    }
 
    vec3 LightDir;
    //Front cap. (Original triangle.)
    LightDir = (normalize(WorldPos[0] - gLightPos)) * EPSILON;
    gl_Position = gVP * vec4((WorldPos[0] + LightDir), 1.0);
    EmitVertex();
    LightDir = (normalize(WorldPos[2] - gLightPos)) * EPSILON;
    gl_Position = gVP * vec4((WorldPos[2] + LightDir), 1.0);
    EmitVertex();
    LightDir = (normalize(WorldPos[4] - gLightPos)) * EPSILON;
    gl_Position = gVP * vec4((WorldPos[4] + LightDir), 1.0);
    EmitVertex();
    EndPrimitive();
 
    //Back cap. (Original triangle, projected to infinity.)
    LightDir = (normalize(WorldPos[4] - gLightPos));
    gl_Position = gVP * vec4(LightDir, 0.0);
    EmitVertex();
    LightDir = (normalize(WorldPos[2] - gLightPos));
    gl_Position = gVP * vec4(LightDir, 0.0);
    EmitVertex();
    LightDir = (normalize(WorldPos[0] - gLightPos));
    gl_Position = gVP * vec4(LightDir, 0.0);
    EmitVertex();
    EndPrimitive();
  }
}

I don’t promise the above is correct, optimal, portable, sensible, efficient, or clear. This is the effort of a novice trying to correct the work of his betters. Good luck!

I’ll add that I love having an excuse to draw elaborate ASCII diagrams directly into code like this. I’ve done it many times in the past, and I’ve never regretted it. Comments are nice, but when you’re fooling around with geometry and complex spatial relationships, nothing beats having a picture of the idea right there in the code.
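
One practical detail the comment block above implies but doesn’t spell out: for the geometry shader to see those six vertices, the shadow pass has to draw the mesh with adjacency information. That means six indices per triangle instead of three, and a draw call like this (index-buffer construction omitted; the names are placeholders of mine):

#include <GL/glew.h>

// Hypothetical helper for the shadow-volume pass. The VAO and its index buffer
// (six indices per triangle, in the 0-5 layout from the shader's diagram) are
// built elsewhere by the application.
void draw_shadow_volumes(GLuint shadow_mesh_vao, GLsizei adjacency_index_count)
{
    glBindVertexArray(shadow_mesh_vao);
    glDrawElements(GL_TRIANGLES_ADJACENCY, adjacency_index_count,
                   GL_UNSIGNED_INT, 0);
}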

The result?

unearth_volume12.jpg

The glitches are gone and we’re now casting correct shadows.

The great thing is that now that it all works, it scales up nicely. I can add more lights and more objects and all will continue to work as expected. All you need is more power. I don’t need to add special-case checking for situations where shadows stack, I don’t need to eat extra texture memory, and I don’t need to analyze the positions of objects relative to each other. Even better, adding more objects and lights will require more GPU power, but not significantly more CPU power.
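
Roughly speaking, every extra light is just one more trip through the same passes; nothing on the CPU side grows with the number of objects. A sketch of the loop (the two pass functions stand in for my actual rendering code, and the Light struct is a stand-in too):

#include <vector>
#include <glm/glm.hpp>
#include <GL/glew.h>

struct Light { glm::vec3 position; glm::vec3 color; };  // placeholder

// Placeholders for the real passes: one fills the stencil buffer with the
// shadow volumes, the other draws the light additively wherever the stencil
// says the light can reach.
void render_shadow_volumes(const Light& light);
void render_light_pass(const Light& light);

void render_lights(const std::vector<Light>& lights)
{
    for (const Light& light : lights) {
        glClear(GL_STENCIL_BUFFER_BIT);   // fresh shadow mask for this light
        render_shadow_volumes(light);     // GPU does the heavy lifting here
        render_light_pass(light);
    }
}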

Onward.

 

Footnotes:

[1] But not you. You’re still cool.



37 thoughts on “Project Unearth Part 2: Skimming Hazzard”

  1. Daemian Lucifer says:

    “When you assume, you might save time by using extrapolated information and pattern recognition to quickly zero in on the solution to the problem.”

    And this is what people say about extrapolating.

    1. Taellosse says:

      Oh gods, I’d forgotten that one. It made me giggle until I read the alt-text, which made me laugh out loud. See, my wife is pregnant – with twins. Which we were NOT expecting, at ALL, and has thrown us for a serious loop. We planned on one, and only one, child. No intention of more. So we got involuntarily supersized. She took the home pregnancy test after missing her period, we were all excited, then about 3 weeks later, we go in for her first ultrasound and find out there’s two of the little blighters. We’re still adjusting to the concept, a month later.

      I must never show this comic to my wife. She would kill me dead.

    2. Joe Cool says:

      That XKCD title text has always bugged me. There’s no way you could ever extrapolate to hundreds of babies by the third trimester. The problem is you can’t know you’re pregnant until about two to three weeks from the date of conception, so you could never say for certain “yesterday I wasn’t pregnant, today I am with one baby, tomorrow with two,” etc. The best you could do would be “two weeks ago, I wasn’t pregnant, today I’m pregnant with one baby, in two weeks, I’ll have two babies,” which would mean you would have no more than 19-20 babies by the time you deliver.

  2. Color me (erm, shadow me?) impressed, this seems simpler than I had envisioned it (or you just are brilliant at making complex things simple).

    I’m definitely bookmarking this post in my now growing OpenGL bookmark folder.

    How much does the computational load increase per light source?
    (Using a later version of Process Explorer you can see the GPU load for each process/program. No idea how accurate that is though.)

    Normally one would have one global light (like in the last image),
    and then various small localized lights, now if this truly scales (and the placement of the localized lights are smart) then the performance impact should be minimal.

    It would also be possible to provide a user option to limit the max number of local lights I guess.

    And am I correct in assuming that a simple terrain will have less shadows (thus higher performance) ?

    Also, is it hard/easy to add a smoothing/blur to the edges of a shadow?
    Ideally the more it stretched (thus larger the shadow) the more blurred it gets.

    1. Abnaxis says:

      Bearing in mind that I haven’t actually done this, it seems like you could generate the shadow geometry with different shading values at each of the vertices, so there’s a gradient from a dark shadow at the object to a light shadow at the end of the shadow polygon, and let the pixel shader do the fading around the edges for you.

      That might get you some artifacts without a lot of fiddly bits of code added for edge detection, though.

    2. Richard says:

      Blurred shadows do not work that way.

      Blurred shadows happen for three reasons:

      1) Ambient Light Scatter.
      Light scatters off all the surfaces and the sky in the world, and that bounce does three major things.
      – Firstly, it enables you to see objects at all.
      – Secondly, it fills out shadows so they are not entirely black and slightly softens their edges.
      – Thirdly, it makes the sky blue. (Well, gives the sky a colour at all)

      2) Real lights are not point sources. (Though LED chips and short-arc discharge lamps get really close)

      Real lights have a relatively large physical size, and most luminaire designs include lenses, diffusers (eg light shades) and reflectors intended to greatly increase the ‘apparent’ size of the light source.

      A fluorescent tube is the easiest way to visualise this, as it emits light along a very long length.
      It’s clear that more or less of the total tube is obscured by the occluder as you move around the shadow volume.

      3) Refraction, interference and other quantum bending.

      The sky is easy to simulate as it is very uniform.

      The other types of ambient scatter are much harder to do.

      “Soft” lights can be simulated by doing many passes and smoothing the results – eg, you might simulate a fluorescent tube as five point lights, then smooth the five shadows together.

      1. I’m not sure but I’m assuming that Shamus won’t be simulating sky light scattering nor will he be simulating LED or tube lights.

        Also, what I said was “Also, is it hard/easy to add a smoothing/blur to the edges of a shadow?
        Ideally the more it stretched (thus larger the shadow) the more blurred it gets.”

        I never said shadows behave that way (in real life), I simply asked if it was hard or easy to do this and that ideally (for what I had in mind) the more stretched the more blurred it would get, although other methods of doing the same effect is obviously welcome.

        This would allow my shadows to fade away and not have to render shadows of a huge building if the light (sun) is low across several kilometers of terrain.

        1. I imagine to do this you’d need to render shadow volumes (which Shamus has rendered into the stencil buffer I think) into a separate buffer and blur it a different amount depending on the values, in a post-process (not sure how doable this is without GPGPU, i.e. compute shader). Multiple shadows on top of each other might not work properly without lots of effort.

    3. Tizzy says:

      Naive question from someone who obviously doesn’t know anything about the topic: I think I can see how the method will cause shadows of different objects to blend.

      But if we have several light sources, where in the proceedings is the part where a light source can dispel shadows? Probably a dumb question, I know…

      1. Turgid says:

        It’s not a dumb question. The answer is “nowhere.” These shadows are completely solid, meaning hard-edged and dark. Multiple shadows will overlap, but they’ll all be the same darkness. Changing that would take more processing afterwards. But this method works great for a single source!

        (This is a late reply, sorry.)

  3. Jamey says:

    Thanks for that fix. Worked for something I was having a problem with!

  4. Alexi says:

    > I used this tutorial as a guide. It turns out that shader has a bug in it. It doesn't cap the end of the shadow (the bright magenta triangle in the above diagram) which can leave odd holes in your shadows.

    I notice the tutorial you linked to had the following to say at the very end of the page concerning the AWOL end-cap:

    > Last but not least – we enable depth clamp. This means that when the far cap is extended to infinity the result is not clipped away but remains on the far clip plane. Without this call we would loose the far cap.

    Does your fix correct a different problem? Did the author of the linked tutorial update their page?

    1. Shamus says:

      Yeah, depth clamp will prevent the end cap from being clipped away. But in the example given, it wasn’t being drawn at all. In my code, that’s the “Back cap” block near the end of the listing. It’s not present in the original.

  5. Jan says:

    OK, I’m not a programmer, but a mathematician, and I got confused by your second diagram, the one where you draw your surface normal. To me, these arrows would not be surface normals, because a surface normal would be “normal” to the surface, in this case a line which is perpendicular to the line (which I assume the surface is).

    What I mean is: why are you basing your surface normals on the vertices? It does not make sense to me (any vector based at a point would be perpendicular to the point), and why are they not based on the direction of the lines? Again: I’m not a programmer, so I’m probably missing something obvious here.

    1. Shamus says:

      You can do it either way. In rendering, normals are assigned to verts, not polygons. So when you’ve got two adjacent surfaces that share a common vertex you can handle it one of two ways:

      1) Give each polygon a unique copy of the vertex. This can greatly inflate the number of verts you’re shoving into GPU memory. This can also result in a “faceted” look, which is not what most modern games want. They want to smooth those normals out as much as possible.

      2) Have them share a common vert, and give the vert the average of the normals of the two polygons. This smooths across the faces of the polygons. Good for smooth shading.

      In graphics, you can think of normals as normal to the idealized surface – the one we would draw if we had infinitely small polygons. In the above diagram, you could sort of think of the green square as a super low-resolution circle.
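
      In code form, option 2 is just summing the face normals that touch each vert and normalizing the result. Something like this (using glm; the layout and names are my own, three indices per triangle):

      #include <vector>
      #include <glm/glm.hpp>

      // Option 2: each shared vertex gets the normalized sum of the normals of
      // the faces that use it. 'indices' holds three entries per triangle.
      std::vector<glm::vec3> smooth_vertex_normals(
          const std::vector<glm::vec3>& positions,
          const std::vector<unsigned>& indices)
      {
          std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
          for (size_t i = 0; i + 2 < indices.size(); i += 3) {
              unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
              glm::vec3 face_normal = glm::cross(positions[b] - positions[a],
                                                 positions[c] - positions[a]);
              normals[a] += face_normal;   // accumulate un-normalized, so bigger
              normals[b] += face_normal;   // faces pull the average a bit harder
              normals[c] += face_normal;
          }
          for (glm::vec3& n : normals)
              if (glm::dot(n, n) > 0.0f)   // skip verts no triangle touched
                  n = glm::normalize(n);
          return normals;
      }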

      1. Felblood says:

        I get it now! These are the surface normals of the imaginary object we are attempting to depict.

        Vertex normals have always been something of a blackbox to me. I knew what formulas to use to get them to do what I want, but it kinda bugged me that I didn’t know why, and thus could not create my own formula if I fell into a situation where I needed to.

        It’s nice to get these little A-HA! moments, when a question you’ve wondered about but never bothered to research just falls into your lap.

      2. Naota says:

        I may be mistaken, but isn’t this precisely the process that 3D modelling applications use when assigning smoothing groups?

        Generally hard edges get different groups to make that faceted look, but they’re discouraged when not required because they effectively increase the number of verts being sent to the engine at runtime. Meanwhile, you get lighting errors going on when you try to smooth out the lighting around complex geometry without impeccable topology to your 3D model, because the idealized normal is so often some Frankenstein’s monster of a hundred different vertices sharing the same smoothing group.

        I’ve always wondered which one is more costly in the end – regular use of smoothing groups to outline your hard edges, or outputting a much higher-poly model in an attempt to get the lighting to look just right.

        Of course… normal mapping seems to have solved this problem already, but let’s not let that discourage us. It wouldn’t be a Shamus problem if it wasn’t finding creative and efficient uses for outdated graphics techniques.

        1. Piflik says:

          Yes, Smoothing Groups (or however you want to call them) do that. You can have split normals at a Vertex, one per triangle that uses this Vertex, or one unified normal (3D Modeling Applications usually allow for more than one normal per vertex). When you export the model to use it in a realtime engine, these split normals would be converted into split vertices (these splits will also happen at boundaries of Material IDs and Texture Coordinates…which is important to know for normal mapping, which didn’t actually solve the problem, but introduces additional rules on hard edges and UV Layouts to work properly).

          You can also change the normals of a Vertex to arbitrary values to fool the observers’ eyes (the length should be 1, or you will get unnatural lighting behavior…this can, of course, also be used on purpose). This is commonly used for foliage, where you pretend to have curvature on flat polygons to make a tree or bush look more realistic while keeping the polycount down (example).

          Regarding polycount: there is virtually no difference between a 90° corner with two smoothing groups and two 45° corners with unified normals. The latter usually gives the better visuals, though (with or without normal maps).

    2. Naota says:

      I actually had the same thought from looking at the diagram. The normal of that face should be pointing straight at us, since we’re clearly looking at it straight on. But then it occurred to me that the normal of a vertex in 3D space is the combination of the intersecting lines which make it up.

      If you average out the vectors of the edges on that box where they meet at each corner vertex, you do indeed get a line pointing straight away from the corner at a 45-degree angle. If this were a cube and you moved all of the vertices outward along their normals, you’d end up with a bigger cube.

  6. The Snide Sniper says:

    Have you considered shadow mapping? It takes more memory, but it is still good enough to be the standard approach to shadows in modern video games. It’ll also work a bit better with VR (because you won’t be completely recomputing the shadows for each eye) and with high-resolution screens.

    1. Geebs says:

      The major issue with shadow mapping is that it doesn’t look great in a block world – you would need very large shadow maps to avoid getting horrible crawling pixel artefacts all over the block edges, and “Peter panning” would be very obvious.

      That said, shadow mapping is conceptually much simpler. I am too dumb to even really understand how stencil shadows are implemented, and I have tried a heck of a lot of times :(

      1. The Snide Sniper says:

        Many, if not all, of the problems you mentioned either won’t apply or can be solved easily. Peter Panning, for example, will not occur because the block world doesn’t have any objects sufficiently thin to cause it. The crawling artifacts can be removed by adjusting sample depth based on slope (basically, a surface that is “steep” relative to the incoming light should be more tolerant of nearby depths). Here’s a helpful guide.
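
        A common version of that slope-scaled bias looks something like this (just a sketch; the constants are arbitrary starting points, not gospel):

        #include <algorithm>

        // The more the surface slopes away from the light (small n_dot_l), the
        // bigger the depth tolerance used when sampling the shadow map.
        float shadow_bias(float n_dot_l)
        {
            return std::max(0.05f * (1.0f - n_dot_l), 0.005f);
        }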

        The basic idea of stencil mapping is if a ray of light enters a shadow more than it exits it, then whatever that ray hits is in shadow.

        If every object in your world was sufficiently simple (I.E. the shadow from one object can’t overlap with any other), then you can check this by simply toggling every time the ray touches a shadow border.
        If not (as in every scene you’d want to render), keep track of how many times the ray goes into a shadow, and how many it goes out, or (more importantly) the difference between those.

        In practice, what you do is:
        – Draw the scene. This fills the depth buffer with all the data you want.
        – Draw shadows (extrude edges like what Shamus did). Set the stencil operation to increment by 1 on front-faces, and decrement on back-faces (see the sketch after this list).
        – Finally, render the shadows. If the number in the stencil buffer is positive (entered more times than exited), the pixel is shadowed. Otherwise, not shadowed.
        – There are also some special cases when the camera’s near clipping plane intersects with a shadow.
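
        In raw OpenGL, the volume-drawing step boils down to stencil operations set up something like this (a sketch of the depth-pass variant; the actual draw calls and the near-plane special cases are omitted):

        #include <GL/glew.h>

        // Stencil pass for the shadow volumes: front faces count the ray "into"
        // a shadow, back faces count it "out", without touching color or depth.
        void begin_shadow_volume_pass()
        {
            glEnable(GL_STENCIL_TEST);
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glDepthMask(GL_FALSE);                 // keep the scene's depth as-is
            glStencilFunc(GL_ALWAYS, 0, 0xFF);
            glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR_WRAP);
            glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_DECR_WRAP);
            // ...draw the shadow volumes here...
        }

        // Lighting pass: only shade pixels whose in/out counter ended at zero.
        void begin_light_pass()
        {
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glStencilFunc(GL_EQUAL, 0, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
            // ...draw the light here...
        }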

        1. Geebs says:

          Thanks for the explanation; I think my brain just wasn’t designed to accept the concept, though!

          The peter panning you get with blocky scenes is kind of inverted: big flat surfaces require you to be pretty generous with the bias to prevent artefacts, which means that the shadow doesn’t reach all of the way to the edge of the shadowed face of the block casting that shadow.

          The other thing is that if you have a block face which is partially shadowed, and oriented very nearly parallel to the incident light rays, (and a moving light source) you get horrible crawling smeared lines of shadow running up and down which no amount of bias can remove. Like this:

          -------------
          | |
          | |
          | |
          -------------

          Then you try to remove those artefacts by only casting from back faces, and that looks even worse…

          Personally, I find variance shadow maps provide the best look for the least hassle and performance hit, whereas PCF always looks horrible unless you’ve got a gigantic kernel, in which case it looks horrible AND runs really slowly. My case is really quite pathological for shadowmaps though – big draw distance, angle of the main (sun)light changing constantly, first person perspective, camera can fly about above the ground.

          1. The Snide Sniper says:

            Okay. I see the problem you’re having with peter panning. Shadow mapping is rarely used by itself. It’s usually used with some other diffuse lighting calculations, in particular Lambert’s cosine law.

            With these calculations, Peter Panning doesn’t matter on the object that’s casting the shadow, because the Lambertian term will remove all light from the back faces anyway.

            The artifacts that you described with a moving light source are actually easier to remove on blocks (or otherwise planar surfaces) than on curved surfaces. The link I provided earlier explains how to choose a depth bias based on the slope of your surface relative to the light.

            So the sunlight’s angle changing shouldn’t really matter too much.
            As for the large draw distance, that actually is a good reason to use stencil shadows. The alternative is to use multiple shadow maps for the different parts of the scene you can see. For example, one shadow map might cover everything within 8 feet of you, the next everything between 8 and 64, and so on. This technique is known as cascaded shadow maps.

            1. Geebs says:

              Yeah, I settled on cascaded variance shadow maps as being the best option. Believe me, though, I have tried to deal with shadow acne through using biasing on a scene which consisted of a bunch of cubes sitting on a plane (i.e. like a bunch of skyscrapers). When the sun was positioned so that light rays were nearly parallel to the faces of a given cube, the artefacting was horrible.

              Thanks for the link, I’ve seen that one but it is a great resource to point to re: how much PCF sucks, if somebody doesn’t already think that having seen the implementation which shadowed faces in the original Mass Effect. Ugh.

      2. Volfram says:

        I’ve messed around a little with both stencil shadows and shadow mapping. When I first encountered stencil shadowing, I thought it was “clearly” better than shadow mapping, since it eliminated the “shadow acne” and “peter panning” (that’s a new term for me!) problems that I’d dealt with previously.

        Now, though, I’ve noticed that while the Sega Dreamcast used stencil shadows almost exclusively for shadow calculations (I still remember being awestruck when Ryo’s shadow played across the leaves of a potted plant in Shenmue), modern games (Portal 2 and Space Engineers, for example) seem to have gone the entire opposite direction. There’s obviously some reason for this change.

        Naturally there are a lot of trade-offs to make and both techniques will produce unacceptable results (visual or calculation) under particular circumstances. From what I’ve seen, Shadow Mapping seems to generally work better and faster these days.

        Generally.

        1. Geebs says:

          Shadow maps probably became popular for the same reasons why normal mapping etc. did – it’s been much cheaper to simulate geometric complexity than draw the extra polygons for the last few console generations. Newer hardware and the arrival of geometry shaders mean that these days you can brute-force geometry to a much greater extent.

          However, people have figured out clever ways to do soft shadows using shadow maps, whereas there isn’t a good implementation of soft shadows via stencilling that I’m aware of.

  7. The funny part is that the “wrong” way is actually closer to how shadows work in real life–if there were tails of medium-dark on either side of the super-dark area in the middle.

    1. Felblood says:

      We could blend the “Wrong” shadow with the “Right” shadow to produce a decent approximation of diffuse light, but it would only work in the 80% of cases where the “Wrong” shadows don’t break.

      In the broken cases, you’d end up with lots of small dark shadows inside the shadow of one large object, since the shadows are cast from its individual cubes.

    2. Paul Spooner says:

      You’re talking about modeling the umbra right? I think this would require a different method. You’d need to calculate “connected” objects all at once (otherwise the light would leak through the cracks between the polygons). Plus the umbra only extends to infinity if the object is larger than the light source. If it is smaller than the light source, it will come to a point (like you said). Finally, you’d need a way to blend the penumbra to get the soft shading.

      I suspect it could be done with some fancy tricks, but it might be faster to just use multiple samples. Replace the one light with, say, seven (one in the middle and one along each primary axis), and let the shadow engine sort it out from there.

  8. The Rocketeer says:

    This is the second time I can recall seeing Shamus do the, “If you have arrived from google, here is an answer to what has been tearing your mind apart” thing.

    This is a saintly endeavor.

    1. Blake says:

      Agreed, there is definitely one or more present or future people who have just been saved from an afternoon of confusion and anger.

      On behalf of the internet, thank you for this.

  9. Paul Spooner says:

    That’s some neat work on the lighting engine! Geometry-based shadow generation… open source lighting code… procedural terrain…
    But…
    Have you looked at the texture you’re using? It’s… what IS that? It’s horrifying!

  10. Rick says:

    I’m not a graphics developer, but is it possible to use the first solution but instead of using all surface normals that face away from the light source, use those plus the adjacent surface normal (in the direction closer to the light source if possible, otherwise both sides I guess)?

    I thought of this before I even got to you mentioning the problem… I’d noticed it wouldn’t have been shadowing from the front corners of the box, even when in the perfect alignment.

    1. Volfram says:

      I likewise noticed that something was horribly off about the initially described system. I would have grabbed all the surfaces with normals facing the light and extruded that into a volume, personally. The system he found is probably slightly faster than mine, though.

  11. Zak McKracken says:

    Two small nitpicks:
    1: In the second image, shouldn’t you copy the two vertices closer to the light and project them to infinity? The way it is in the image, only the rear face of the square would cast a shadow.

    2: Wouldn’t it be more economic* to generally not use the entire back half of an object but only the vertices on the border? I.e. (in 2D): get the normals for all lines (on the lines, not the vertices), then find all vertices joining a backward-facing to a forward-facing line (relative to the light source). The shadow stencil then becomes the forward-facing half with the boundary points copied and projected to infinity.
    In 3D that would mean finding all lines connecting faces that face towards or away from a light and extruding those to infinity to make a shadow volume

    * “economic” in terms of having a simpler shadow stencil. It may or may not be more difficult to find all of those points…

  12. Zak McKracken says:

    How exactly do you determine whether a normal at a vertex is facing towards or away from a light source? Since that normal is just an average between the adjacent faces’ normals, I’d think that would sometimes be unreliable.

    Test: have an irregular polygon with strongly varying edge lengths rotate slowly and watch the stencil — does it always use the outermost points or will it sometimes jump to the next one just a little too early/late?
