This article has a companion video. I try to make the article and the video roughly equal in usefulness, but like the old saying: One second of footage is worth 60,000 words. This video shows off some footage from Minecraft with a raytracing mod, and footage from Control with raytracing enabled. There’s something amazing about seeing shadows and reflections update in realtime, and static screenshots can’t really do it justice. As always, you can watch the video or read on for the article…
Before we talk about raytracing, let’s talk about graphics cards. It probably won’t shock you to learn that the people who make graphics cards are always looking for a way to sell more of them. They have two ways of doing this.
The first way is to sell us prettier pixels. New functionality will be added to cards that can handle some new rendering trick. They get game developers to use the new stuff to make their games look more awesome, and then they sell us cards so we can play the game.
The problem for them is that they can’t always sell us prettier pixels. Game developers only want to modify their engines so often, and consumers need to get tired of the old pixels before the new tricks can generate hundreds of dollars worth of excitement in the heart of the consumer.
So when they can’t sell us prettier pixels, they try to sell us more pixels. Maybe that means higher resolutions for bigger monitors, or maybe it means higher framerates. Sometimes a new generation of cards will come out and it’s basically just the last generation, except faster (and probably with more fans on it).
So for the last 20 years the pendulum has swung between these two. Sometimes they sell us prettier pixels, sometimes they sell us more pixels.
But for the last few years we’ve been stuck in more pixels mode. VR needs more pixels. Huge displays need more pixels. Everyone decided they wanted 60fps, and that requires more pixels. It’s been so long since I saw something genuinely new that I was starting to worry that the industry was running out of tricks.
But a new trick is coming, and it’s a big one. In fact, calling raytracing a “trick” is kind of selling it short. This isn’t just a new effect to add to our games, it’s basically a whole new way of lighting a scene.
I should make it clear that this is a very high-level view of a very complex topic. There are actually a few different things being tried out right now. The big one is raytracing, which is mostly limited to special tech demos right now. But then there’s path tracing, which is similar to raytracing. And then we have games that use traditional rendering techniques but with raytracing used to create accurate reflections. For the purposes of this article, I’m just going to smear these ideas together into a big pile of new stuff called “raytracing”, and I’m not going to draw a distinction between the various techniques. I’m not trying to confuse you. It’s just complicated and I’m trying to keep this short.
The Problem We Have Now
The problem we have right now is that all of our current rendering techniques are basically a big pile of hacks and shortcuts and workarounds. The behavior of light is complicated. In the past, getting a computer to replicate that behavior on consumer hardware was completely infeasible. So we had to resort to shortcuts.
I’d call it smoke and mirrors, but we haven’t had working mirrors in years. Think about how many games you’ve seen recently where restroom mirrors were replaced with an inexplicable panel of stainless steel. We’ll come back to mirrors in a few minutes, but the important thing for now is that the scene of your average AAA video game is built up from dozens of shortcuts and tricks, trying to make a scene look real without requiring the processing power of a Disney render farm. We could never cover all the tricks, but let’s just talk about one hack: Shadows.
In the 90s, we didn’t have the processing power to do accurate shadows. It looks weird to have no shadow at all, so developers would stick a little blob of darkness around the feet of characters. Mario 64 did it. Unreal Tournament did it. Deus Ex did it.
It was a simple effect, but it was enough to get the idea across and make the character look grounded.
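If you’re wondering how cheap that trick really is, here’s a toy sketch in Python (all the names and numbers are mine, not from any actual engine). We ignore the light sources entirely and just fade a dark circle under the character’s feet:

```python
# A toy blob shadow: no light sources involved, just a dark circle
# faded toward its edge, stamped on the ground under the character.

def blob_shadow_alpha(dx, dz, radius=0.5):
    """Opacity of the shadow decal at offset (dx, dz) from the feet."""
    dist = (dx * dx + dz * dz) ** 0.5
    if dist >= radius:
        return 0.0
    # Darkest at the center, fading to nothing at the edge.
    return 0.6 * (1.0 - dist / radius)

# Sample the decal on a small grid and print it as ASCII art.
for z in range(-5, 6):
    row = ""
    for x in range(-5, 6):
        alpha = blob_shadow_alpha(x * 0.1, z * 0.1)
        row += "#" if alpha > 0.3 else "+" if alpha > 0.0 else "."
    print(row)
```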
But then we wanted to have more realistic shadows. I hope you don’t mind, but I wasn’t about to write my own late-90s style graphics engine just to show off how this technology evolved over the years. Instead, I’m going to illustrate the process by photoshopping this Deus Ex screenshot:
Let’s say we want the UNATCO agent to cast a shadow. The way to do this is to get a silhouette of the character, from the perspective of the projecting light source. We save that silhouette in a texture.
Then we project that texture into the world, away from the light source.
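If you want to see the geometry, here’s a rough sketch. This isn’t code from Deus Ex or any real engine; instead of rasterizing a full silhouette texture, it just projects a few sample points of a stick-figure character along the light direction until they land on a flat floor at y = 0:

```python
# A stick-figure "character" as a handful of 3D occluder points
# (x, y, z), standing at the origin on a floor at y = 0.
character = [(0.0, h, 0.0) for h in (0.2, 0.6, 1.0, 1.4, 1.8)] + \
            [(w, 1.2, 0.0) for w in (-0.4, -0.2, 0.2, 0.4)]  # the arms

light_dir = (0.5, -1.0, 0.25)  # the direction the light is traveling

def project_to_floor(point, d):
    """Slide a point along the light direction until it reaches y = 0."""
    t = point[1] / -d[1]  # how far along the ray the floor is
    return (point[0] + d[0] * t, point[2] + d[2] * t)

# Every occluder point lands somewhere on the floor; darkening those
# spots gives the shadow, stretched away from the light.
for sx, sz in (project_to_floor(p, light_dir) for p in character):
    print(f"darken floor near ({sx:+.2f}, {sz:+.2f})")
```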
This gives us a shadow. That’s nice for a little while, but then people start looking at those jaggy pixels on the edge of the shadow and thinking those are pretty ugly. Also, shadows from nearby light sources shouldn’t have a hard edge like this. Nearby lights should create a penumbra, a soft edge where we transition from light to shadow. (Actually, the size of the penumbra varies based on the size of the light source AND the distance to the target. A large soft-white globe will give soft-edged shadows, and a piercing bright point will create sharp-edged shadows.)
We can fix both of these problems by just blurring the shadow texture to smooth the edges out and hide those pixel jaggies.
Oh! Except sometimes that blur will crash into the boundaries of the texture and make horrible clipped edges, so we’ll need special code to handle that.
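Something like this, roughly. This toy sketch blurs a tiny shadow texture with a box filter, clamping the sample coordinates so the kernel can never read past the edge of the texture:

```python
# Blur a tiny shadow texture with a 3x3 box filter. The min/max calls
# clamp each sample to the texture bounds, so the kernel never reads
# past the edge and creates those clipped artifacts.

def blur_shadow(tex):
    size = len(tex)
    out = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    sx = min(max(x + dx, 0), size - 1)  # clamp to edges
                    sy = min(max(y + dy, 0), size - 1)
                    total += tex[sy][sx]
            out[y][x] = total / 9.0
    return out

# A hard-edged square shadow...
hard = [[1.0 if 3 <= x <= 6 and 3 <= y <= 6 else 0.0 for x in range(10)]
        for y in range(10)]
soft = blur_shadow(hard)
print(soft[3][2], soft[3][3])  # ...now has a gradient edge, not a step
```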
And hang on, shouldn’t all of the other things in this scene be casting shadows? So we need to make a shadow texture for all of these crates and railings and everything, right? Actually, I guess we should project all of this stuff into one texture. Except, then we won’t have enough resolution to cover such a large area and the shadows will be horribly blurry. So we have to take multiple textures at different distances and resolutions and stitch them together to hide the seams.
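This stitching trick is what modern engines call cascaded shadow maps. Here’s a toy sketch of the core decision (the distances and resolutions are completely made up): pick which shadow texture to sample based on how far the surface is from the camera, so nearby shadows get the most detail:

```python
# Each cascade covers a bigger slice of the world with the same number
# of texels, so nearby shadows get the most resolution. All of these
# numbers are invented for the example.
CASCADES = [
    {"max_distance": 10.0,  "texels_per_meter": 100.0},
    {"max_distance": 40.0,  "texels_per_meter": 25.0},
    {"max_distance": 160.0, "texels_per_meter": 6.0},
]

def pick_cascade(distance_from_camera):
    """Choose the highest-detail shadow map that still covers this spot."""
    for i, cascade in enumerate(CASCADES):
        if distance_from_camera <= cascade["max_distance"]:
            return i
    return len(CASCADES) - 1  # beyond the last cascade: use the coarsest

for d in (2.0, 25.0, 120.0):
    i = pick_cascade(d)
    print(f"{d:6.1f}m away -> cascade {i} "
          f"({CASCADES[i]['texels_per_meter']} texels per meter)")
```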
Everything is now insanely complicated, and we’re only handling one light source! This all gets even more complicated if we have multiple light sources. And this isn’t even a particularly accurate simulation of light. In the real world, shadows aren’t a zone of darkness added to a lit surface, they’re an area where light was prevented from reaching the surface in the first place. (This technique looks really odd under saturated lights. A bright red light will wash the entire scene in red, and then the shadow just makes it darker red, because it’s not really blocking the light. Maybe you could fix this by using negative color shadows, but I’ve never seen that done.)
What a mess. And this is actually one of the older, simpler ways of doing things. Modern AAA games don’t use the above technique anymore as it’s been replaced with a different set of more complicated tricks with a different set of tradeoffs. And that’s just one trick out of dozens. On top of that we have crepuscular rays, water caustics, volumetric fog, ambient occlusion, depth of field, motion blur, bloom lighting, full-screen antialiasing, and so much more. All of these effects come from separate systems with their own drawbacks, pitfalls, and limitations.
As an example of the kinds of tradeoffs we have to worry about:
Detail vs. Dynamism
Above is a screenshot from a demonstration level I made in the Source Engine, which is what was used in Half-Life 2 back in 2004. You can see I’m standing in a hallway that leads to two rooms. Both rooms have an overpowering light in them. The light source is pure white in both rooms. But in the room on the right, the walls are orange. White light is reflecting off those walls and spilling out into the hallway as orange light. This is called radiosity lighting. Light can spill into a room and bounce around corners, taking into account the color of the stuff it’s being reflected off of. It produces very accurate results.

The downside is that it was enormously expensive to calculate on the computers of 2004. The computer might need to work for hours to create the lighting, and once it was done the lighting was fixed. You can’t move the light source around while the game is running, and changing the orange walls to blue wouldn’t change the color of the light in the hallway. We call this having the lighting “baked in”. Moving objects in the scene couldn’t create high-detail shadows like this, which means that opening and closing a door won’t change how much light is coming through the doorway. The lighting is great, but it can’t change.
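The rule behind that orange spill is actually tiny: bounced light is the source color filtered by the color of the surface it reflects off of. A quick sketch with made-up RGB values:

```python
# One bounce of radiosity: light reflecting off a surface picks up the
# surface's color. RGB values here are made up, in the range 0..1.

def bounce(light_rgb, surface_albedo):
    return tuple(l * a for l, a in zip(light_rgb, surface_albedo))

white_light = (1.0, 1.0, 1.0)
orange_wall = (1.0, 0.55, 0.15)
gray_wall   = (0.7, 0.7, 0.7)

print("spill from the orange room:", bounce(white_light, orange_wall))
print("spill from a gray room:    ", bounce(white_light, gray_wall))
```

The multiply itself is trivial. The expensive part in 2004 was, roughly speaking, doing it between millions of pairs of surfaces over multiple bounces, which is why the results had to be computed ahead of time and baked into the level.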
That same year, we also got Doom 3. That game was focused on dynamic lighting. Lights could move. They could change color. Everything – even moving objects – could cast pixel-perfect shadows. The downside is that this couldn’t use any radiosity lighting. Light literally couldn’t bounce off of things to hit other things. Even slender railings would cast crushing pure black shadows. This is why the lighting in that game was so harsh. No matter how powerful a light was, the walls couldn’t scatter it to illuminate other surfaces.
Do you want beautiful lighting that can’t change, or brutal lighting that can? I guess it depends on the game. Eventually programmers came up with systems that could do both, at the expense of – you guessed it – being more complicated and having different tradeoffs. (I’m vague here because I don’t really understand these systems. This stuff is too new to fit in my dusty old-school bag of tricks.) Not just more complicated to code, but more complex for the artists who create the environments.
You’ll still see this tradeoff at work today. The more sophisticated the lighting is, the more it tends to be baked in, with a majority of the objects in a scene locked in place because those objects are casting shadows that can’t move. If you’ve ever played a game where you threw a grenade and wondered why it doesn’t blast a chair across the room or obliterate the curtains, it’s probably because the game is using baked lighting. If those objects moved, they’d leave behind nonsense shadows.
Raytracing is an attempt to do away with all those tricks and hacks we’ve been using to make our games look like the real world, and instead just simulate light as it really behaves.
As you read this article, billions of photons are pouring out of the screen. Some of them enter your eyes, allowing you to see the screen. Some of them reflect off the walls around you and then enter your eye, illuminating the room you’re in. As photons bounce around, their paths bend as they move through things like glass, and they change in wavelength depending on what sorts of things they run into.
To be clear, raytracing isn’t an exact 1-to-1 simulation of real light. We actually trace light rays going from the eye and into the scene. That’s backwards from how light travels in reality, but backwards is more efficient because we’re only simulating the 0.01% of photons that hit the camera and not the 99.99% of photons that hit something else. Also, we can’t simulate billions of photons, not even with the amazing graphics hardware available today. So we simulate a few thousand, and we can use the information they give us to extrapolate the rest of the scene. It takes massive horsepower to accomplish this and the cards that can pull it off aren’t cheap here in the back half of 2019, but it is finally possible and the results are amazing.
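To make that concrete, here’s about the smallest backwards raytracer I can sketch: one ray per pixel fired from the eye, a single sphere, a single light, and ASCII characters standing in for pixels. Everything here is illustrative; a real renderer adds bounces, materials, and many rays per pixel:

```python
# A minimal backwards raytracer: fire one ray per "pixel" from the eye
# at the origin, test it against a single sphere, and shade hits with
# one directional light. Output is ASCII art instead of an image.
import math

SPHERE_CENTER = (0.0, 0.0, 5.0)
SPHERE_RADIUS = 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)  # unit vector pointing toward the light

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere, or None on a miss."""
    oc = [origin[i] - SPHERE_CENTER[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

WIDTH = HEIGHT = 24
for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        # Turn the pixel coordinate into a ray direction from the eye.
        x = (px / WIDTH - 0.5) * 2.0
        y = (0.5 - py / HEIGHT) * 2.0
        length = math.sqrt(x * x + y * y + 1.0)
        ray = (x / length, y / length, 1.0 / length)
        t = hit_sphere((0.0, 0.0, 0.0), ray)
        if t is None:
            row += " "  # the ray sailed off into the void
        else:
            # Diffuse shading: brightness is how squarely the surface
            # normal faces the light.
            hit = [ray[i] * t for i in range(3)]
            normal = [(hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS
                      for i in range(3)]
            bright = max(0.0, sum(normal[i] * LIGHT_DIR[i] for i in range(3)))
            row += ".:-=+*#"[min(6, int(bright * 7))]
    print(row)
```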
Earlier I said I’d come back to the topic of mirrors. In the old days, mirrors were basically trick windows. The level designer would put a flipped copy of the room on the other side of the glass and the game would show copies of the characters over there. You weren’t looking into a reflection, you were looking into another space made to look like a reflection. Over the years, most video game mirrors have been variations on this same trick, because calculating a real reflection was too computationally expensive. And then we just gave up, because it wasn’t worth all the extra complexity and performance concerns for things that are basically just decorations with no gameplay value.
But with raytracing, everything is different. In the above screenshot, I set up a Minecraft world with a shader pack that uses raytracing. I placed glass blocks in front of black concrete and that created a mirror. This is basically how you build mirrors in real life. You put a reflective layer against a dark background. This isn’t a trick like in the old days. A programmer didn’t have to spend days writing code to create this specific effect and tweaking it to look just right. This mirror isn’t limited to this single location where the local parameters have been tweaked to make sure everything works. This mirror just emerged naturally from the rules of raytracing.
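The rule that produces this mirror is almost embarrassingly small: when a ray hits a reflective surface, bounce it off the surface normal and keep tracing. Here’s that one bounce as a sketch:

```python
# Reflecting a ray about a surface normal. This is the whole "mirror":
# hit a reflective surface, compute the bounced direction, keep tracing.

def reflect(d, n):
    """Reflect direction d about unit surface normal n."""
    dot = sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

incoming = (0.707, -0.707, 0.0)   # a ray heading down toward a mirror floor
floor_normal = (0.0, 1.0, 0.0)
print(reflect(incoming, floor_normal))  # (0.707, 0.707, 0.0): bounced up
```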
The same thing happens with radiosity lighting. That’s just the virtual photons bouncing off of stuff, doing their job. And real-time moving shadows? Same thing. A programmer doesn’t need special code to figure out where a shadow should be and then darken that part of the scene; they just send out a bunch of rays. If something blocks the light, then there will be a shadow. We don’t need to lock all the furniture in place because it’s casting baked-in shadows. Everything can be dynamic, and everything can be lit using the same rules.
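Here’s that shadow logic as a toy sketch. From a point on a surface, fire a ray toward the light; if anything gets in the way, the point is in shadow. The scene (one crate, one light) and the crude ray-march occlusion test are made up for illustration:

```python
# A shadow ray: walk from the surface point toward the light and see if
# anything is in the way. The box test here is a crude stand-in for a
# real ray/geometry intersection.

def segment_hits_box(start, end, box_min, box_max, steps=100):
    """March along the segment and check whether we ever enter the box."""
    for i in range(1, steps):  # skip the endpoints themselves
        t = i / steps
        p = [start[k] + (end[k] - start[k]) * t for k in range(3)]
        if all(box_min[k] <= p[k] <= box_max[k] for k in range(3)):
            return True
    return False

light_pos = (0.0, 3.0, -3.0)
crate = ((-1.0, 0.0, -1.0), (1.0, 2.0, 1.0))  # an axis-aligned crate

for point in [(0.0, 0.0, 3.0), (4.0, 0.0, 0.0)]:
    blocked = segment_hits_box(point, light_pos, *crate)
    print(point, "is in shadow" if blocked else "is lit")
```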
In traditional rendering, you have to limit the number of lights in a scene because every shadow-casting light has a cost to it. In Doom 3 the rule was that no wall should have more than 3 lights shining on it, and the fewer lights overlapped, the better. That explains why so much of that game was lit by isolated pools of light. But lights in a raytraced scene are basically free. Shadows are free. Reflections are free. Refraction is free. Radiosity lighting is free. Getting raytracing working requires a ton of processing power, but once the system is running, all of these expensive effects emerge for little or no additional cost. As someone who’s used to the old way of doing things based on rasterizing triangles, this idea feels so weird.
Raytracing is probably a couple of years away from widespread adoption. (Assuming it catches on, obviously.) This is such a big leap that I’m not sure how AAA games will react to it. This isn’t like earlier tech where people could muddle through on low-end hardware by turning graphics options down. As of right now, you need hardware specifically designed for raytracing to get an acceptable framerate. This creates the same chicken-and-egg trap that faces VR, where developers don’t want to fully commit to hardware with a limited install base, and people don’t want to commit to expensive hardware that doesn’t have a lot of games. On the other hand, the Playstation 5 will reportedly support raytracing, and that makes the technology a pretty safe investment for developers. I’m hoping it takes off. I haven’t been excited about graphics in ages, but I really dig raytracing.
I guess we’ll see in the next couple of years.