Last time I promised we’d be dealing with rendering performance problems. Lies. We’re going to backtrack instead. I skimmed a lot of concepts back in part 2, and I think we need to talk about stencil shadows in more detail before we can proceed.
I’ve gone over this before, but polygons have front and back faces. Typically, the back face isn’t drawn. If I’m staring at a wall in (say) Doom 3:
In the real world, if I pass through the wall and turn around, I’d find myself looking at the back side of this blue wall. But in a videogame…
…I just look back into the level. The back side of the blue wall (and indeed, of all the walls) isn’t drawn. It would be a waste to do so, since you’d never see it during the normal course of play. Similarly, if you stuck your head inside a Minecraft cube, you wouldn’t find your head in a box. Instead, you would find yourself outside the world looking in, as with the Doom image above.
Let’s go back to the horrible MS Paint diagram I was using before:
As I said before, we take the thing that should cast a shadow (the green cube) and extrude it away from the light. Everywhere this new object pierces the existing scenery is a place where the light can’t reach. What I neglected to mention is that for this to work we also have to draw the back faces and not the front ones. So, the object is effectively inside-out. Normally, that green cube would only draw the polygons that face the viewer. (The blue eye.) So the downward facing line and the leftward facing line would get drawn, and the other two would be left out. When we turn the cube inside-out, the opposite is true.
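To make the extrusion step concrete, here’s a minimal 2D sketch in Python of the “push it away from the light” idea. This is illustrative pseudo-engine code, not anything from the actual project; the function name and the large “effectively infinity” distance are made up for the example.

```python
# Hypothetical 2D sketch of shadow-volume extrusion: each vertex on the
# shadow caster gets pushed directly away from the light by a large
# distance, approximating "extruded to infinity".
import math

def extrude_from_light(vertex, light, distance=1e6):
    """Push a vertex away from the light along the light-to-vertex ray."""
    dx, dy = vertex[0] - light[0], vertex[1] - light[1]
    length = math.hypot(dx, dy)
    return (vertex[0] + dx / length * distance,
            vertex[1] + dy / length * distance)

# A vertex to the right of the light gets pushed further right:
print(extrude_from_light((2.0, 0.0), (0.0, 0.0), distance=10.0))  # (12.0, 0.0)
```

The region swept out between the original cube and these pushed-out copies of its vertices is the shadow volume.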
This also means that everything that casts a shadow must be an enclosed solid. This is sometimes called making an object “airtight”. Here’s why:
Let’s imagine our clever level designer realizes that, for whatever gameplay reason, the player will never, ever be able to reach the space to the right of the green cube. Since this is true, then we can just leave off that polygon:
Now the green cube is no longer an enclosed solid. Sure, it leaves a hole in the level, but if the player can never get in a position to see it then it won’t do any harm, right? This is fine, and you can see this sort of polygon-trimming done frequently in older games. (It’s generally not worth optimizing these one-poly cases on modern machines, but you can see it in titles like Half-Life 2. Use cheat codes to fly around and you’ll see how aggressive the team was at removing stuff like this.) But it becomes a problem when you use these objects to project shadows.
Without the right-side face, there’s nothing to project a shadow onto the wall.
EDIT: If you’re one of the many people annoyed or confused by this and are wondering why the LEFT side of the cube isn’t blocking the light, then reading part 2 might help you out. I know this is tricky stuff. Good luck!
You can also think of it this way:
The little regions between the dotted lines could (depending on viewing angle) have proper shadows, but the space between them would be fully bright because the back face of the cube isn’t there to project shadows. And while this is an extreme example, it’s actually easy for tiny flaws to produce scene-altering results. Since the polygons are projected away from the light to infinity, any leak ends up expanding to cover large areas of the screen. In practice this looks like a bright stripe of light, visible through all the walls, standing at a great distance away.
The requirement that all shadow-making shapes be enclosed solids puts some interesting additional challenges on those of us working with large dynamic worlds. If you’re looking at a mountain in Minecraft, then in theory the mountain should be a fully enclosed solid. (Otherwise there would be somewhere you could go to “fall out of the world”.) But in practice this isn’t the case. Maybe the mountain is so huge that we don’t have the far side of the mountain in memory. Or maybe that part of the mountain hasn’t been generated yet. And even when the far side of the mountain is loaded, there are still more hills and caves behind that, and more beyond those. You never get to a hard edge where all the space is closed off once and for all. The scenery on the far edge of your view always has a gaping void facing the horizon, and those missing faces – like our broken green cube above – will project malfunctioning shadows. (You may say, “Who cares if the far side of a mountain isn’t shadowed right, we can’t see it!” But imagine if you’re looking into the sunrise over that mountain. Those broken shadows will be projected RIGHT INTO YOUR FACE. The area around you will be shadowed all wrong.)
Again, reducing the problem to 2d, the typical Minecraft world looks like this-ish:
Think of this as a sort of cutaway view. When turned into polygons, it looks like this:
Those open gaps at the sides and bottom are the problem. That’s where our bad shadows will originate. The solution I’ve settled on is to just cut up the world into chunks. Actually, you have to do this anyway so you can load and render the world piecemeal. The world is sectioned off into 8x8x8 chunks, and so all we need to do is cap off every chunk.
This is really wasteful, but as far as I can tell it’s unavoidable if we want to use this shadow technique. (I might noodle around with other shadowing systems. I might not. We’ll see what sounds interesting when the time comes.) This also means we’re now working with two different sets of polygons: There are the polygons that you see, and there are the polygons that project shadows. The visible version of the chunk can be open (not enclosed) but needs to have information for normal mapping and surface texture included. The other version just needs to be a flat-color enclosed solid.
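Here’s a rough Python sketch of what “capping off every chunk” means for the shadow geometry. This is a hypothetical mesher, not the project’s actual code: it emits a face wherever a solid cell borders either empty space or the edge of the chunk, so the chunk-boundary faces become the caps that keep each chunk airtight on its own.

```python
# Hypothetical sketch of building the "shadow" version of a chunk as an
# enclosed solid. A face is emitted wherever a solid cell borders empty
# space OR the chunk boundary, so every chunk is capped (airtight).
SIZE = 8
NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def shadow_faces(solid):
    """solid: set of (x, y, z) filled cells inside one SIZE^3 chunk."""
    faces = []
    for cell in solid:
        for d in NEIGHBORS:
            n = (cell[0]+d[0], cell[1]+d[1], cell[2]+d[2])
            outside = not all(0 <= c < SIZE for c in n)
            if outside or n not in solid:   # boundary faces become caps
                faces.append((cell, d))
    return faces

# A single cube in the chunk corner gets all six faces, three of which
# are caps against the chunk boundary:
print(len(shadow_faces({(0, 0, 0)})))  # 6
```

The visible mesh can skip the chunk-boundary faces that have solid neighbors in the adjacent chunk; the shadow mesh can’t, which is exactly where the waste comes from.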
Now we’re ready to look into this ridiculous low framerate. And by “now” I mean “next time”.
64 thoughts on “Project Unearth Part 4: Enclosed Solids”
Of course, the real problem is that, in a Minecraft-like game, you can have hundreds of torches lying around everywhere, and using this method to render shadows for every one of them is completely infeasible.
That’s why Minecraft only uses CPU-calculated light values stored in the air blocks to light the world.
So this is Part 3 – Chapter 2? :P
Whoops. Fixed. (Counting is hard.)
I’m a bit confused here. In the drawing you’ve supplied, the shadow is being cast by the left side* of the green cube, not the right side. So, by definition, removing the right polygon from the green cube would not affect the shadows, with the current placement of the light source and the green cube. The only way the right side of the green cube would project a shadow would be if it’s closer to the light source than the left side – so either the green cube needs to move, or the light source. As-is, the right side is occluded by the left. What am I missing here?
* And also the top and bottom of the green cube. So, the thing you should be extruding is the left, top, and bottom faces of the green cube, not the right face.
This is a difference between “how we approximate light in computer graphics” and “how light actually works”. Yes, in real life the left side would indeed be the surface that stops the photons, but in our stencil shadow technique we use the cube to build a volume shaped like the desired shadow, and that means the right side is the bit that gets projected into the wall to mark our shadowed region.
Why not just do real-time raytracing? The Nintendo 64 could do that, for crying out loud! The things you graphics programmers will do to get out of work.
Edit: This was sarcasm. OK? Don’t bite my head off, Internet.
What Nintendo 64 game used ray tracing?
Though I must say this technique seems really weird. Surely there’s a way to create a correct volume based only on the front-facing faces. It seems to me that should give the same shape in all cases.
Either way shadow mapping seems like a much more intuitive way to do things, at least it’s closer to how I think about light (though I guess it has its drawbacks due to discretization). Haven’t done it yet though, got to figure out how these cube maps work first…
At one point prior to the N64’s release, Nintendo Japan claimed the upcoming Nintendo 64 would be capable of “real time raytracing.” This obviously never happened – what’s more I am embarrassed to admit that Google has no memory of this statement, but I distinctly remember reading it in a trade magazine back when Chrono Trigger permanently occupied my SNES.
Of course, since no record of that claim seems to exist now, I just look like a prick. Sorry, Shamus.
How many years are left before information that isn’t uploaded to “The Net” simply never existed?
Serious question. It is almost like, we as a society, have subscribed to the philosophy of “pics or it didn’t happen”.
Well I’m a pro googler and I found this:
The graphics are a clear step up from anything you have seen on the console, with alleged real-time raytracing and lighting effects and incredibly detailed textures.
Rayman of course! The game traced the path he traveled.
(Dodges thrown objects)
For mostly symmetric shapes, like the cubes you’ve been showing so far, your method works great, but non-symmetric shapes are going to cast the wrong shadows. I don’t know how bad the average case would be, but here’s a quick image showing a bad scenario. Depending on the geometry, the shadow could be off by enough degrees to be noticed, although maybe only if they lined up their camera with the objects and the shadows they cast.
The more I think about this, the more discomfort I get. I can’t remember the reasoning behind* using this approximation (with the back-side polygons), but now the approximation is needing more and more stuff added to it, to compensate for its shortcomings. If you just did shadows the right way, then you wouldn’t have to worry about having all your polygons be double-sided, and you wouldn’t need to worry about your objects all being completely closed with polygons on all sides! :D
* Two posts ago? Three? I’m definitely too lazy to look it up. :P
Hm. This is indeed how the shadows worked if I had stopped part way through Part 2. But later on I talk about using the geometry shader to extend faces. It chooses faces that have some points facing the light and some points away. With this, it ought to properly extrude the bottom-most face in your diagram.
That’s why you titled that post “skimming hazard.” Apparently ET didn’t read the warning.
” If you just did shadows the right way”
I’m using Depth Pass: http://en.wikipedia.org/wiki/Shadow_volume#Depth_pass
Which is a pretty standard solution, so I’m not sure why you’re suggesting I’m doing it wrong.
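For anyone following along, the Depth Pass idea linked above boils down to counting volume crossings. Here’s a hypothetical 1D Python sketch (not the engine’s code): walking from the eye toward a surface point, increment a stencil count at each shadow-volume front face and decrement at each back face; a nonzero count at the surface means the point is inside a volume, i.e. in shadow.

```python
# Hypothetical 1D sketch of the Depth Pass stencil technique:
# +1 when the eye-to-surface ray enters a shadow volume (front face),
# -1 when it exits (back face). Nonzero at the surface = shadowed.
def in_shadow(surface_depth, volumes):
    """volumes: list of (enter_depth, exit_depth) shadow-volume spans."""
    count = 0
    for enter, exit_ in volumes:
        if enter < surface_depth:   # passed a front face on the way in
            count += 1
        if exit_ < surface_depth:   # passed a back face on the way out
            count -= 1
    return count > 0

print(in_shadow(5.0, [(2.0, 8.0)]))   # True: surface is inside the volume
print(in_shadow(10.0, [(2.0, 8.0)]))  # False: ray exits before the surface
```

This is also why a missing back face is fatal: without the exit crossing, the count never returns to zero and the “shadow” extends forever.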
I wrote my original text and the edit before your response, saying that your solution gets the side-facing polygons too. So, back before you showed me that you were actually getting the correct shadows. :)
I still have a hunch, that you can get these shadows using the front faces and side faces, instead of the back and side faces. If so, then it would side-step the problems, of needing the double-sided polygons, and needing objects to be fully enclosed in polygons. So…now I need to spend an hour or so, re-reading your other posts on this subject. :)
OK, now I know that:
1. It is possible to do the shadows by using the front faces, and
2. Shamus is already doing this.
The basics of what’s going on, as best as I can tell are -> The front faces stay near the light source(s), then the rear faces get stretched all the way to infinity. Objects (or portions thereof) which are intersected by the stretched-out fake-objects are marked as being in shadow. Repeat as necessary for each light source.
Also, even if a change would make it so the back-face doesn’t matter for the shadows, if you then moved the light source to the red wall you’d again have a problem*. Even then enclosing objects would save a lot of headaches as you don’t have to worry where the light is coming from.
* In a more practical situation: Imagine a small window showing a room you can’t enter that has a light source inside. The wall or door the window was attached to wouldn’t have the other side drawn so the room’s light would act like there is no wall/door at all.
Having reread that post and having actually studied shadows in a computer graphics class once I must admit that I still am baffled by this.
I mean I get that you can create shadow volumes by projecting the back faces out to infinity and create a volume, but I don’t really get why that is preferable to projecting out a copy of the front faces and use that as the back of your shadow volume.
That seems so much easier to handle since front faces must always exist and I don’t see the big performance drawback, so I am essentially asking why the natural way is infeasible and the hack is used instead.
Well, if this many people are confused then it means I failed at my job. :( I tried.
The most immediate problem is that in order to do the extrusion, you need to be able to determine if any given triangle is on the edge between the light and dark sides of the object. In order to figure that out, you MUST pass along three neighbors for every triangle. You and I can look at the diagram of the square with the missing face and mentally fill in, “Ah, there’s a face missing there.” But the geometry shader is stupid and has no sense of the thing being drawn, which means that without adjacent faces, it would have nothing to go on. So it’s actually impossible to pass along an object where each triangle doesn’t have three neighbors. (Which would mean you can’t just remove one face without also removing all of its neighbors.)
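The neighbor test itself is simple once you have the adjacency. Here’s a hypothetical sketch (illustrative names, not the actual shader, and in Python rather than GLSL): an edge is on the silhouette, and gets extruded, exactly when the triangle on one side faces the light and its neighbor across that edge does not.

```python
# Hypothetical sketch of the silhouette test the geometry shader needs
# adjacency for: an edge is a silhouette edge when the two triangles
# sharing it disagree about facing the light. With a neighbor missing,
# this test is impossible - hence the "airtight" requirement.
def faces_light(normal, to_light):
    return (normal[0]*to_light[0] +
            normal[1]*to_light[1] +
            normal[2]*to_light[2]) > 0.0

def is_silhouette_edge(tri_normal, neighbor_normal, to_light):
    return faces_light(tri_normal, to_light) != faces_light(neighbor_normal, to_light)

# Light from +x: a +x-facing triangle next to a -x-facing one shares a
# silhouette edge; two +x-facing triangles do not.
print(is_silhouette_edge((1, 0, 0), (-1, 0, 0), (1, 0, 0)))  # True
print(is_silhouette_edge((1, 0, 0), (1, 0, 0), (1, 0, 0)))   # False
```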
There’s an even LONGER explanation that gets into how we use the stencil buffer, incrementing and decrementing values depending on front-and-back faces, but I was trying to avoid getting into that because it would be very long, technical, and require many more illustrations. Maybe I got lazy here.
Maybe I’ll have another run at this explanation once I’ve had more time to think about it.
He does have a point, though. I only ever tried to implement shadowmaps, so the following is quite speculative, but using the front faces should work and be easier. You just can’t extrude them in the direction of the vertex-normals(like you did in Part 2), but the light’s direction. With point- or spotlights this would be the normalized vector between the object’s vertex and the lightsource, with a directional light you can simply pass the direction to the shader instead of the position.
No matter if the lightsource is behind the viewer, or the object is between the light and the viewer, this should create accurate shadows by using the faces the viewer is looking at (while extruding the back-faces along the vertex normals can only ever be an approximation). The shadow-objects would still have to be rendered double-sided, since they can be inside-out, depending on the geometry between viewer, object and lightsource.
“Well, if this many people are confused then it means I failed at my job.”
Perhaps you could make a video, grab footage from your game engine with the camera moving a little, this should show folks easily what happens when you do not close the back vs when you close the back and how the shadows behave.
This also gives you an excuse to add video grabbing/saving code (with maybe max eyecandy enabled like AA since it doesn’t have to be in realtime.)
agreed. I’m also confused by this.
edit: of course, Shamus answered while I was writing that.
now I need to re-read the shadow post, because I totally didn’t get that.
I had the same problem. I re-read the shadow post and now I think I see what’s going on.
The irony is that the title of the shadow post is “skimming hazard” and it’s about the problems you run into when you don’t read things closely enough.
I’m still totally baffled here too. The shadow volume is made by extruding the left face of the small cube, because that’s the only one the light touches. I don’t understand why the rest of the cube is relevant to shadows. The right face of the cube should be playing no part at all.
Just a note: what minecraft calls a “chunk” is a 16x16x256 region divided into sections (what you’re calling “chunks”) of 16x16x16, not 8x8x8. See http://minecraft.gamepedia.com/Chunk_format
Wow. Rather than try to correct ALL of that, I just eliminated that footnote. This is actually a good topic for a full-blown post in the future.
I look forward to the possibility of a nice chunky post in the future.
That’s the current “double world height” chunks, BTW. Originally they were 16x16x128 with the maximum mountain height being a little under a dozen less than the absolute world ceiling, and the top of the Nether being right at (at the time) maximum height.
Then, when figuring out how to double absolute world height without slowing everything down, they discovered the trick that they could simply tell the program “most upper halves of chunks (129 to 255) are empty by default” and the default world generator values would generate worlds just as fast as before. And then you could then build far, far up into the sky, and separate “super tall” settings were possible, though slower of course, and so on.
EDIT: Oh never mind, the wiki explains it better anyway.
I also didn’t go into the existence of the “Cubic Chunks” mod, which (IIRC) changed the chunk format to something more like Shamus’ example in order to remove the height limit on the world (or at least extend it to an impractical to reach distance). I’m not sure if that mod is still around for post-Anvil MC versions.
Are you sure Minecraft uses 8x8x8 ? In Minecraft parlance, a “chunk” is 16x16xworldheight (height being configurable at the server level, but generally 128 or 256). This is how MC chunks are stored on disk, and loaded/unloaded. Graphically, I suppose they could be subdivided into 8^3 sections, but if so, “chunk” is a confusing word choice there.
Edit: Beat to the punch.
Want to talk about cutting things up into too small chunks, this article was done as soon as I started reading >.<
All jokes aside, interesting article as always, though I do some stuff in my free time here and there I use a completed engine, so these articles help me think about how the engine works.
Thankfully I am not particularly patient or talented when it comes to 3d modeling (so they are generally low count) and have done no work in procedural content (though I'd like to) so I generally don't have to deal with much performance issues; but still looking forward to the next article.
Yeah, airtight solids have been a conundrum for me in the past. When I’m making 3d scenes, I used to always make everything “real” in the sense that all sides were modeled and textured. Then, after a while, this became too much work, so I started making open objects just to save time. Even later, when I started doing 3d printing (which also requires airtight solids), my style shifted again. Now-a-days I basically do whatever is the least amount of work required. I’d say duplicating geometry to make a closed volume isn’t really that hard, and is probably worth the effort. At worst, you’re… what, doubling your vertex count? Maybe a bit more? Anyway, the spatial partitioning solution you arrived at seems like a good one.
Also, props on sneaking some marching cubes texture shenanigans into the last screenshot as if it’s no big deal. Looks good! I like how it makes the contours visible!
Even if you switch to using shadowmaps instead of shadow volumes, you will still need to make “chunks” of landscape rather than leaving back faces open, if you’re going to allow sunsets and sunrises.
Basically if you’re going to allow the light rays to get perpendicular to the terrain, you’re going to find that a) you get the “looking into the level from outside” problem because the shadowmap camera’s frustum sees the back faces of the landscape polygons and b) you have to draw a lot of terrain when the light source is near the horizon because a much larger number of chunks of landscape will fall between the near and far planes of the shadowmap camera.
This is exacerbated by the fact that you have to position the shadowmap camera reasonably close to the player to get decent depth precision, so you often end up with the camera inside a mountain or something. One way to deal with this is to give your landscape chunks “skirts” which extend down to minus-a-lot, which works but eats a lot of fillrate.
(I could write about how badly broken cascaded shadowmaps become in this situation but we’d be here all night :-) )
*argh, I meant parallel to the terrain, not perpendicular.
If “enclosed solids” is a term of art, I’m really disappointed that the coding classes didn’t call them something relevant to their experience, like maybe “Peanut M&Ms.”
I think “manifold” is the technical term for “airtight” or “enclosed”, but if you talk that way the cool kids call you names.
Would calling it a “plasma manifold” or “intake manifold” be too Star Trek?
I thought manifold just meant that no edge ever connected more than 2 faces.
No more, but also no less. If each edge connects exactly two faces, then the surface is manifold.
Even if it doesn’t have many folds?
I don’t think that’s true. A manifold is just a space that looks euclidean if you look close enough.
I’m showing my ignorance here, but for distant geometry couldn’t you have a pass that seals off the open ends at the edge of your drawing area? Sort of a “The line must be drawn here! This far, no further!” algorithm that would fill in any open triangles at the back of the z-buffer (if that’s what it’s called) to make a temporary airtight solid. It’s distant so any errors in shadows could be hand-waved away, moreso than errors in beams of light sneaking through where they shouldn’t be.
If you’d change the illustrations to match the second case from Part 2, where the light source is located not directly in front of the cube but at an angle, it might make the article less baffling.
I think a lot of people have a hard time understanding this whole shadow thing because it wasn’t explained using a Terrible Car Analogy™.
Well, it’s like if you’ve loaded up the pickup with crates of Fireball Whisky and boatloads of processed American Cheese, but you forgot to secure the tailgate. It might be OK to begin with, but if you hit the wrong angle – e.g. going up a hill – there’s going to be stuff thrown out all over the place.
(Sorry – that was just a car analogy which was terrible, not a Terrible Car Analogy™!)
If you use an El Camino instead of a generic pickup, it would be a Terrible Car Analogy.
I sneeze every time I get into a Trabant. Err, am I doing this right?
I’m the same whenever I visit a particular small village in the west of Ireland. A terrible Carran allergy.
(By the time I was done with this pun it had admitted there were five lights, betrayed Julia, and was giving very prompt & definite answers to the question ‘is it safe?’)
I know you expressed your disdain for shadow maps at the start, and I realise that this is a project to learn geometry and tessellation shaders, not implement shadows… but…
You should have just used shadow maps! Everybody else does! :-P
There was a weird period in time around Doom 3 coming out where these stencil buffer techniques were best, but in the period since, shaders and graphics memory bandwidth have come a long way. It’s more efficient these days to just do shadow mapping.
For even better juju, light volumes.
Don’t cast shadows, cast light.
Shadowcasting makes it hard to do anything other than omni-directional (point) and infinite-size parallel (sunlight approximation) lights, as you run into special cases very quickly.
Light volumes let you have spotlights, beamlights and other interesting lighting using the exact same algorithm.
It also avoids the “stacked shadowmaps” problem mentioned as an important reason to avoid using shadowmapping.
Shamus already described why shadowmaps are only really suitable for small geometry, like corridors etc.
They don’t work in a world-sized model as you end up needing 2048 or 4096 or even larger textures just for the shadowmap to avoid the rounded hill on the horizon casting a square block on the cliff beside the player avatar.
I’m interested in seeing how this changes for distant light sources (sun, moon) instead of close light sources. It seems like those will be too far away to do a projection exactly the same as for nearby light sources, as you’re either doing the math from literally astronomical distances away or hand-waving it and saying “No matter what, the scale of the projection of shadow is going to have a scaling factor of 1.”
Usually when you calculate lighting for things like the sun, which is very far away and very bright, you don’t bother to use the position of the light. Just use the same direction and brightness for everything, since the difference would be too small to notice anyway (this is usually known as a “directional light”). This makes it easier and faster to calculate.
Of course, if you are working on a planetary scale or bigger, then you would take its position into account (and it would be considered a “point light”).
So yea, usually you hand-wave it and just go with fixed direction.
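A quick hypothetical sketch of the distinction (illustrative names, not engine code): a point light needs a per-vertex direction, while a directional light like the sun uses one fixed direction everywhere, so its “position” never enters the math.

```python
# Hypothetical sketch: point light vs. directional light.
import math

def light_dir_point(vertex, light_pos):
    """Per-vertex normalized direction toward a point light."""
    d = [l - v for v, l in zip(vertex, light_pos)]
    n = math.sqrt(sum(c*c for c in d))
    return tuple(c / n for c in d)

def light_dir_directional(_vertex, sun_dir):
    """A directional light: identical direction for every vertex."""
    return sun_dir

a = light_dir_directional((0, 0, 0), (0.0, -1.0, 0.0))
b = light_dir_directional((50, 9, -3), (0.0, -1.0, 0.0))
print(a == b)  # True: same direction regardless of position
```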
An interesting side-effect of Directional lights is that their position is irrelevant. I’ve often made scenes featuring multiple Direction lights, all of which were located in the center of the scene just because they didn’t need to be anywhere else.
Why would you even give them a position in the first place?
If you’re using a generalized lighting solution then it probably has parameters for direction, position, radius, and cone angle, even if some types of lights don’t use all of them. I tend to set values like that because of all the years I spent in situations where uninitialized variables=bad.
It can be helpful for organization and manipulation. For example, 7 years ago(when I was talking about in the previous post) I would usually light a scene by using one full-brightness direction light with shadow casting turned on, and another low-brightness direction light with shadow casting turned off, facing the opposite direction as an ambient/fill light.(say what you want about physical accuracy, scenes I lit this way at least looked pretty.)
It’s a lot easier to click the lights in the edit window and move them around than finding them in long list of scene items. You can also group lights spatially by color, brightness, direction, purpose, etc…
The sun can be approximated as infinitely far away, meaning you can just use a parallel projection instead.
I’m no programmer by any means, but after reading this I now have a small idea as to why things look like they do after clipping through walls in almost every game I’ve played. This also explains why my Gordon Freeman will never cast a shadow (or have feet for that matter). Programmers are wizards – much respect.
What about Occlusion? If you took your 2d example scene, and placed another one (light and all) right behind it, wouldn’t the shadows just pass through?
I would like to try and answer your question, but unfortunately I can not decipher what you are asking.
If you are suggesting stacking the scenes on top of each other like projector transparencies, that visualization is meaningless in the game-world, since the 2D examples are only representations of the 3D world because it’s easier to present information in that manner.
“The world is sectioned off into 8x8x8 chunks and so all we need to do is have it cap off every chunk. This is really wasteful, but as far as I can tell it's unavoidable if we want to use this shadow technique.”
Here’s a fix:
When calculating geometry for each chunk, cap it off only in the directions which don’t have chunk-neighbors that are also rendered.
This is exactly the same as the way we eliminate internal faces of blocks inside a chunk, so it should be robust. The cost is that when you add or remove a chunk from visibility you have to recompute the geometry of its neighbors. (You could store six-way-capped chunks and zero-way-capped chunks and switch between them to be less optimal in geometry but save time.)
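A hypothetical sketch of that fix (made-up names, not the project’s code): when rebuilding a chunk’s shadow geometry, only cap the sides whose neighboring chunk isn’t currently loaded and rendered; sides with live neighbors get sealed by the neighbor’s own geometry.

```python
# Hypothetical sketch of conditional chunk capping: cap only the sides
# that don't have a rendered neighbor chunk to seal them.
SIDES = ["+x", "-x", "+y", "-y", "+z", "-z"]

def sides_to_cap(loaded_neighbors):
    """loaded_neighbors: set of side labels with a rendered neighbor."""
    return [s for s in SIDES if s not in loaded_neighbors]

# A chunk with neighbors loaded east and west only needs four caps:
print(sides_to_cap({"+x", "-x"}))  # ['+y', '-y', '+z', '-z']
```

As the comment notes, the cost is that loading or unloading a chunk forces its neighbors’ geometry to be rebuilt.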
What if you made the backside of polygons solid/visible? Could this work? I mean, normally they are see-through as you showed before, but if they were not, could they be used for this technique? Technically you wouldn’t be adding any more vertices to the scene, just turning on one side of the polygon.
I realize this is an old article, but from what I have seen online (and I have not done lighting, part of why I am checking this out), I have seen where the depth buffer is used to determine shadows. You view the scene from the light’s point of view, then the depth buffer generated from that view is used to determine shadow. If the point you wish to light is farther away than the depth buffer for that point, then it is in shadow, so don’t light it.
Seemed fairly straight forward to me. (but as I said, I am new to this, I have just used ugly opengl default lighting so far)
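The technique described above (shadow mapping) can be sketched in 1D like this. This is a hypothetical toy, not real GPU code: render depth from the light’s point of view, then a point is shadowed if it lies farther from the light than the depth recorded for its direction.

```python
# Hypothetical 1D sketch of the shadow-map test: record the nearest
# occluder depth per "texel" as seen from the light, then compare each
# surface point's depth against that record.
def build_shadow_map(occluders, num_texels):
    """occluders: list of (texel_index, depth). Keep nearest per texel."""
    depth_map = [float("inf")] * num_texels
    for texel, depth in occluders:
        depth_map[texel] = min(depth_map[texel], depth)
    return depth_map

def is_lit(depth_map, texel, point_depth, bias=1e-3):
    # A small bias avoids "shadow acne" when a surface tests against
    # its own recorded depth.
    return point_depth <= depth_map[texel] + bias

shadow_map = build_shadow_map([(0, 3.0), (1, 7.0)], num_texels=2)
print(is_lit(shadow_map, 0, 5.0))  # False: an occluder at depth 3 blocks it
print(is_lit(shadow_map, 1, 7.0))  # True: the point IS the nearest surface
```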
And I just noticed I posted about this in the past, but this new technique is one I just learned about.