This Dumb Industry: Raytracing

By Shamus Posted Tuesday Sep 17, 2019

Filed under: Column

This article has a companion video. I try to make the article and the video roughly equal in usefulness, but like the old saying: One second of footage is worth 60,000 words. This video shows off some footage from Minecraft with a raytracing mod, and footage from Control with raytracing enabled. There’s something amazing about seeing shadows and reflections update in realtime, and static screenshots can’t really do it justice. As always, you can watch the video or read on for the article…

Raytracing


Link (YouTube)

Before we talk about raytracing, let’s talk about graphics cards.  It probably won’t shock you to learn that the people who make graphics cards are always looking for a way to sell more of them. They have two ways of doing this.

The first way is to sell us prettier pixels. New functionality will be added to cards that can handle some new rendering trick. They get game developers to use the new stuff to make their games look more awesome, and then they sell us cards so we can play the game.

The problem for them is that they can’t always sell us prettier pixels. Game developers only want to modify their engines so often, and consumers need to get tired of the old pixels before the new tricks can generate hundreds of dollars worth of excitement in the heart of the consumer.

So when they can’t sell us prettier pixels, they try to sell us more pixels. Maybe that means higher resolutions for bigger monitors, or maybe it means higher framerates. Sometimes a new generation of cards will come out and it’s basically just the last generation except faster[1].

So for the last 20 years the pendulum has swung between these two. Sometimes they sell us prettier pixels, sometimes they sell us more pixels.

That is a LOT of pixels!

But for the last few years we’ve been stuck in more pixels mode. VR needs more pixels. Huge displays need more pixels. Everyone decided they wanted 60fps, and that requires more pixels. It’s been so long since I saw something genuinely new that I was starting to worry that the industry was running out of tricks.

But a new trick is coming, and it’s a big one. In fact, calling raytracing a “trick” is kind of selling it short. This isn’t just a new effect to add to our games, it’s basically a whole new way of lighting a scene.

I should make it clear that this is a very high-level view of a very complex topic. There are actually two different things being tried out right now. The big one is raytracing, which is mostly limited to special tech demos. But then there’s path tracing, which is similar to raytracing. And then we have games that use traditional rendering techniques but with raytracing used to create accurate reflections. For the purposes of this article, I’m just going to smear these ideas together into a big pile of new stuff called “raytracing” and I’m not going to draw a distinction between the various techniques. I’m not trying to confuse you. It’s just complicated and I’m trying to keep this short.

The Problem We Have Now

Prey 2017: If you're a super-expert on graphics programming and you study this image very closely, you might notice that the mirror isn't 100% correct in terms of mirroring things.

The problem we have right now is that all of our current rendering techniques are basically a big pile of hacks and shortcuts and workarounds. The behavior of light is complicated. In the past, getting a computer to replicate that behavior on consumer hardware was completely infeasible. So we had to resort to shortcuts.

I’d call it smoke and mirrors, but we haven’t had working mirrors in years. Think about how many games you’ve seen recently where restroom mirrors were replaced with an inexplicable panel of stainless steel. We’ll come back to mirrors in a few minutes, but the important thing for now is that the scene of your average AAA video game is built up from dozens of shortcuts and tricks, trying to make a scene look real without requiring the processing power of a Disney render farm. We could never cover all the tricks, but let’s just talk about one hack: Shadows.

In the 90s, we didn’t have the processing power to do accurate shadows. It looks weird to have no shadow at all, so developers would stick a little blob of darkness around the feet of characters. Mario 64 did it. Unreal Tournament did it. Deus Ex did it.

Deus Ex: If you're a super-expert on graphics programming and you study this image very closely, you might notice that the shadows are circles and not shaped like the thing supposedly casting the shadow.

It was a simple effect, but it was enough to get the idea across and make the character look grounded.

But then we wanted to have more realistic shadows. I hope you don’t mind that I didn’t want to write my own late-90s style graphics engine to show off how this technology evolved over the years, and instead I’m just going to illustrate the process by photoshopping this Deus Ex screenshot:

This is not the most exciting moment that Deus Ex has to offer, but at least it's not from the sewer section.

Let’s say we want the UNATCO agent to cast a shadow. The way to do this is to get a silhouette of the character, from the perspective of the projecting light source. We save that silhouette in a texture.

The shadow texture on the right would be taken with the camera placed AT the light source. This trick only works for directional lights like spotlights, or the sun. You probably wouldn't use it on an area light like the one in this screenshot.

Then we project that texture into the world, away from the light source.

I'm simplifying this concept almost to the point of being misleading, but I'm just trying to drive home the general idea that our graphics tricks are ugly hacks that are trying to look real while not behaving anything like reality.
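To make the projection idea concrete, here’s the cheapest possible version of it: squashing each vertex of the shadow caster onto a flat ground plane along the ray from the light. This is a toy sketch in Python, not code from any real engine; the names and the y-up convention are my own.

```python
# Planar projected shadow: flatten each vertex of the caster onto the
# ground plane (y = ground_y) along the ray from the light source.
def project_onto_ground(light, vertex, ground_y=0.0):
    """Return where the shadow of `vertex` lands on the ground plane."""
    lx, ly, lz = light
    vx, vy, vz = vertex
    # Parameter t where the light->vertex ray crosses the ground plane.
    t = (ly - ground_y) / (ly - vy)
    return (lx + (vx - lx) * t, ground_y, lz + (vz - lz) * t)
```

Do that for every vertex of the character and you get a flattened copy of the model lying on the floor, which you darken to taste. It only works for flat ground, which is one reason real engines moved on to the texture-projection version described above.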

This gives us a shadow. That’s nice for a little while, but then people start looking at those jaggy pixels on the edge of the shadow and thinking those are pretty ugly. Also, shadows from nearby light sources shouldn’t have a hard edge like this. Nearby lights should create a penumbra, a soft edge where we transition from light to shadow[2].

We can fix both of these problems by just blurring the shadow texture to smooth the edges out and hide those pixel jaggies.

Making things blurry covers up a LOT of problems, although it creates the new problem that now things are blurry.
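That blur is nothing fancier than a box filter over the shadow texture. A toy version over a 2D grid of shadow intensities, assuming values from 0 to 1:

```python
def box_blur(shadow, radius=1):
    """Naive box blur: average each cell with its neighbors, clamping
    the window at the borders of the grid."""
    h, w = len(shadow), len(shadow[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += shadow[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

Note the bounds check: that’s the “special code” needed to keep the blur from misbehaving at the edges of the texture.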

Oh! Except sometimes that blur will crash into the boundaries of the texture and make horrible clipped edges, so we’ll need special code to handle that.

I never saw this happen in a production game, but I've had this sort of thing happen while I was working the kinks out of my shadow code.

And hang on, shouldn’t all of the other things in this scene be casting shadows? So we need to make a shadow texture for all of these crates and railings and everything, right? Actually, I guess we should project all of this stuff into one texture. Except, then we won’t have enough resolution to cover such a large area and the shadows will be horribly blurry. So we have to take multiple textures at different distances and resolutions and stitch them together to hide the seams.

Everything is now insanely complicated, and we’re only handling one light source! This all gets even more complicated if we have multiple light sources. And this isn’t even a particularly accurate simulation of light. In the real world, shadows aren’t a zone of darkness added to a lit surface, they’re an area where light was prevented from reaching the surface in the first place[3].
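The difference between darkening an already-lit surface and actually blocking the light can be shown with a toy shading function. Both functions here are illustrative sketches, not code from any real engine:

```python
def shaded_wrong(surface, light_color, shadowed):
    """The hack: light the surface fully, then darken shadowed areas.
    Under a saturated red light the shadow is just darker red."""
    lit = tuple(s * l for s, l in zip(surface, light_color))
    if shadowed:
        return tuple(0.3 * c for c in lit)  # shadow as a darkening pass
    return lit

def shaded_right(surface, light_color, shadowed):
    """Closer to reality: blocked light simply never arrives."""
    if shadowed:
        return (0.0, 0.0, 0.0)  # plus whatever ambient term, omitted here
    return tuple(s * l for s, l in zip(surface, light_color))
```

With a pure red light on a white surface, the first function leaves shadows as dark red pools, while the second leaves them properly unlit.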

What a mess. And this is actually one of the older, simpler ways of doing things. Modern AAA games don’t use the above technique anymore as it’s been replaced with a different set of more complicated tricks with a different set of tradeoffs. And that’s just one trick out of dozens. On top of that we have crepuscular rays, water caustics,  volumetric fog, ambient occlusion, depth of field, motion blur, bloom lighting, full-screen antialiasing, and so much more. All of these effects come from separate systems with their own drawbacks, pitfalls, and limitations.

As an example of the kinds of tradeoffs we have to worry about:

Detail vs. Dynamism

The radiosity lighting is great, but the shadows are vague and inaccurate. I don't know if this is a limitation of Valve's ancient Source Engine, or if I bungled this demo map.

Above is a screenshot from a demonstration level I made in the Source Engine, which is what was used in Half-Life 2 back in 2004. You can see I’m standing in a hallway that leads to two rooms. Both rooms have an overpowering light in them. The light source is pure white in both rooms. But in the room on the right, the walls are orange. White light is reflecting off those walls and spilling out into the hallway as orange light. This is called radiosity lighting. Light can spill into a room and bounce around corners, taking into account the color of the stuff it’s being reflected off of. It produces very accurate results. The downside is that it was enormously expensive to calculate on the computers of 2004. The computer might need to work for hours to create the lighting, and once it was done the lighting was fixed. You can’t move the light source around while the game is running, and changing the orange walls to blue wouldn’t change the color of the light in the hallway. We call this having the lighting “baked in”. Moving objects in the scene couldn’t create high-detail shadows like this, which means that opening and closing a door won’t change how much light is coming through the doorway. The lighting is great, but it can’t change.
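The core of that orange spill is a single multiply: each diffuse bounce tints the light by the color of the surface it hit. A toy sketch, with RGB channels as values from 0 to 1:

```python
def bounce_color(light_rgb, surface_rgb):
    """One diffuse bounce: the reflected light is the incoming light
    multiplied channel-by-channel by the surface color."""
    return tuple(l * s for l, s in zip(light_rgb, surface_rgb))
```

So a pure white light hitting an orange wall comes back as orange light, and that orange light is what washes into the hallway. Radiosity solvers repeat this bounce between every pair of surfaces until the light stops changing, which is why baking took hours on 2004 hardware.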

That same year, we also got Doom 3. That game was focused on dynamic lighting. Lights could move. They could change color. Everything – even moving objects – could cast pixel-perfect shadows. The downside is that this couldn’t use any radiosity lighting. Light literally couldn’t bounce off of things to hit other things. Even slender railings would cast crushing pure black shadows. This is why the lighting in that game was so harsh. No matter how powerful a light was, the walls couldn’t scatter it to illuminate other surfaces.

That slender railing is blocking all of the light coming from the ceiling, as if all the light was being emitted from an infinitely small point. (Which it is, in this case.)

Do you want beautiful lighting that can’t change, or brutal lighting that can? I guess it depends on the game. Eventually programmers came up with systems[4] that could do both at the expense of – you guessed it – being more complicated and having different tradeoffs. Not just more complicated to code, but more complex for the artists who create the environments.

You’ll still see this tradeoff at work today. The more sophisticated the lighting is, the more it tends to be baked in, with a majority of the objects in a scene locked in place because those objects are casting shadows that can’t move. If you’ve ever played a game where you threw a grenade and wondered why it doesn’t blast a chair across the room or obliterate the curtains, it’s probably because the game is using baked lighting. If those objects moved, they’d leave behind nonsense shadows.

Raytracing

Those windows are casting real reflections. Same for the floor. And for everything else. Like in real life, basically everything is kinda a mirror, but most surfaces are too rough for a legible reflection.

Raytracing is an attempt to do away with all those tricks and hacks we’ve been using to make our games look like the real world, and instead just simulate light as it really behaves.

As you read this article, billions of photons are pouring out of the screen. Some of them enter your eyes, allowing you to see the screen. Some of them reflect off the walls around you and then enter your eye, illuminating the room you’re in. As photons bounce around, their paths bend as they move through things like glass, and they change in wavelength depending on what sorts of things they run into.

To be clear, raytracing isn’t an exact 1-to-1 simulation of real light. We actually trace light rays going from the eye and into the scene. That’s backwards from how light travels in reality, but backwards is more efficient because we’re only simulating the 0.01% of photons that hit the camera and not the 99.99% of photons that hit something else. Also, we can’t simulate billions of photons, not even with the amazing graphics hardware available today. So we simulate a few thousand, and we can use the information they give us to extrapolate the rest of the scene. It takes massive horsepower to accomplish this and the cards that can pull it off aren’t cheap here in the back half of 2019, but it is finally possible and the results are amazing.
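A minimal backward tracer really does fit in a few lines: fire one ray per pixel from the eye and test it against the scene. This sketch renders a single sphere as ASCII art; everything here is illustrative, and a real tracer adds bounces, materials, and many rays per pixel.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Distance to the sphere along a normalized ray, or None on a miss."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant; a == 1 for a unit ray
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Fire one eye ray per pixel at a sphere sitting at z = -3."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map the pixel to a direction through an imaginary screen.
            x = (px + 0.5) / width * 2 - 1
            y = 1 - (py + 0.5) / height * 2
            length = math.sqrt(x * x + y * y + 1)
            d = (x / length, y / length, -1 / length)
            hit = ray_hits_sphere((0, 0, 0), d, (0, 0, -3), 1.0)
            row.append('#' if hit else '.')
        image.append(''.join(row))
    return image
```

Rays through the middle of the screen strike the sphere; rays near the corners sail past it. Everything interesting about raytracing – shadows, reflections, bounced light – is built by firing more rays from the points where these first rays land.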

If you're a super-expert on graphics programming and you study this technology very carefully, then maybe you can explain to me how to set it up.

Earlier I said I’d come back to the topic of mirrors. In the old days, mirrors were basically trick windows. The level designer would put a flipped copy of the room on the other side of the glass and the game would show copies of the characters over there. You weren’t looking into a reflection, you were looking into another space made to look like a reflection.  Over the years, most video game mirrors have been variations on this same trick, because calculating a real reflection was too computationally expensive. And then we just gave up, because it wasn’t worth all the extra complexity and performance concerns for things that are basically just decorations with no gameplay value.

You could make a better mirror by replacing the glass with something more reflective and less transparent, but these early raytracing demos are mostly crude hacks with a lot of hard-coded stuff. There currently aren't any chrome-like textures available.

But with raytracing, everything is different. In the above screenshot,  I set up a Minecraft world with a shader pack that uses raytracing. I placed glass blocks in front of black concrete and that created a mirror. This is basically how you build mirrors in real life. You put a reflective layer against a dark background. This isn’t a trick like in the old days. A programmer didn’t have to spend days writing code to create this specific effect and tweaking it to look just right. This mirror isn’t limited to this single location where the local parameters have been tweaked to make sure everything works. This mirror just emerged naturally from the rules of raytracing.

The same thing happens with radiosity lighting. That’s just the virtual photons bouncing off of stuff, doing their job. And real-time moving shadows? Same thing. A programmer doesn’t need special code to figure out where a shadow should be and then darken that part of the scene, they just send out a bunch of rays. If something blocks the light then there will be a shadow. We don’t need to lock all the furniture in place because it’s casting baked-in shadows. Everything can be dynamic and everything can be lit using the same rules.
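That “if something blocks the light” test is itself just one more ray. A hedged sketch, with the scene reduced to a list of spheres given as (center, radius) pairs:

```python
import math

def in_shadow(point, light, spheres):
    """Shadow ray: does any sphere block the segment from point to light?"""
    d = [light[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(v * v for v in d))
    d = [v / dist for v in d]  # normalize the direction toward the light
    for center, radius in spheres:
        oc = [point[i] - center[i] for i in range(3)]
        b = 2.0 * sum(d[i] * oc[i] for i in range(3))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-4 < t < dist:  # blocker strictly between point and light
                return True
    return False
```

There’s no shadow-texture resolution to worry about, no blurring pass, no stitching. If the ray reaches the light, the point is lit; if it doesn’t, it isn’t.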

In traditional rendering, you have to limit the number of lights in a scene because every shadow-casting light has a cost to it. In Doom 3 the rule was that no wall should have more than 3 lights shining on it, and the fewer lights overlapped, the better. That explains why so much stuff was mostly illuminated by isolated pools of light. But lights in a raytraced scene are basically free. Shadows are free. Reflections are free. Refraction is free. Radiosity lighting is free. Getting raytracing working requires a ton of processing power, but once the system is running all of these expensive effects emerge for little or no additional cost. As someone who’s used to the old way of doing things based on rasterizing triangles, this idea feels so weird.

Shadows are always realtime, which means we can have destructible walls and movable furniture! I mean, you still have to CODE those things, but now they don't create intractable lighting problems.

Raytracing is probably a couple of years away from widespread adoption. (Assuming it catches on, obviously.) This is such a big leap that I’m not sure how AAA games will react to it. This isn’t like earlier tech where people could muddle through on low-end hardware by turning graphics options down. As of right now, you need hardware specifically designed for raytracing to get an acceptable framerate. This creates the same chicken-and-egg trap that faces VR, where developers don’t want to fully commit to hardware with a limited install base, and people don’t want to commit to expensive hardware that doesn’t have a lot of games. On the other hand, the PlayStation 5 will reportedly support raytracing, and that makes the technology a pretty safe investment for developers. I’m hoping it takes off. I haven’t been excited about graphics in ages, but I really dig raytracing.

I guess we’ll see in the next couple of years.

 

Footnotes:

[1] And probably with more fans on it.

[2] Actually, the size of the penumbra varies based on the size of the light source AND distance to target. A large soft-white globe will give soft-edge shadows and a piercing bright point will create sharp-edged shadows.

[3] This technique really looks odd under saturated lights. A bright red light will wash the entire scene in red, and then the shadow just makes it darker red because it’s not really blocking the light. Maybe you could fix this by using negative color shadows, but I’ve never seen that done.

[4] I’m vague here because I don’t really understand them. This stuff is too new to fit in my dusty old-school bag of tricks.




118 thoughts on “This Dumb Industry: Raytracing”

  1. tomato says:

    You have shown Prey as an example of a modern game without mirrors. But that engine also has a “looking glass” tech which allows you to look through a glass wall into a different live rendered scene. It is used everywhere in the game for 3D video recordings and other stuff. Couldn’t the game have used that tech to create reflections in mirrors?

    1. Shamus says:

      Honestly, I wondered the same thing myself.

      My guess: Probably, but it would have been extra work to create a mirror that could present a flipped version of the world. As always, it’s do-able, but “not worth it”.

      1. Bloodsquirrel says:

        That, or it might create headaches if you can see one mirror in another, or a mirror in one of the recordings, or some other obnoxious edge case that’s easy to avoid if they’re using the tech just for video recordings.

        1. tomato says:

          The same can happen anyway because it’s not just used for video recordings. There are looking glass windows in many different places.

          1. Decius says:

            Can one looking glass window contain another and/or itself?

        2. Shamus says:

          Thinking about this more:

          The other problem is the player model. In a first-person game, you can’t REALLY let players see themselves in a mirror, because their “body” is actually a couple of floating arms, and maybe legs. In Prey, they do indeed have a full 3D model for the player character, suitable for realtime. (Actually, I guess they have two, since you can pick your gender.) But that model is only used with canned animations. (Like in-game cutscenes.) They probably don’t have the animations of Morgan you would need for a player-controlled model. Players love to strafe and hop around in front of mirrors, and there’s a ton of fussy work that you’d need to do to make that look good. You basically have to do 80% of the animation work you’d do in a third-person game, all so the player could see themselves in a bathroom mirror.

          This would also explain why Hitman has mirrors, as DeadlyDark pointed out elsewhere in this thread. That game is already 3rd person, so you can just use that in the reflection. Way easier.

          1. Matthew Downie says:

            So even with ray-tracing, we might not have working mirrors in first-person games.

            1. Decius says:

              No, we just can’t cheat the player model.

          2. EwgB says:

            Yeah, I think that is right. When I was doing computer graphics in college (that must be, what, 10 years ago by now? damn…), we made mirrors in the following way:
            1. Using the plane of the mirror surface, find the mirror position from the player position.
            2. Place a camera in that position.
            3. Render the scene from that position (omitting the mirror polygon itself) into a texture/buffer (not on the screen).
            4. Place the texture on the mirror polygon.
            5. PROFIT
            This basically means a complete render (which might mean multiple passes over the scene, depending on your shadow model) for each mirror. This gets expensive very fast.
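[Step 1 of the recipe above boils down to reflecting the camera position across the mirror plane. A minimal sketch, assuming the plane is given by a point on it and a unit normal:

```python
def reflect_point(point, plane_point, plane_normal):
    """Mirror a point across a plane (plane_normal must be unit length)."""
    # Signed distance from the point to the plane along the normal.
    d = sum((point[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    # Move the point twice that distance back through the plane.
    return tuple(point[i] - 2 * d * plane_normal[i] for i in range(3))
```

The virtual camera goes at the reflected position, and the render-to-texture steps proceed from there. –Ed.]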

            Same goes for the Doom 3 shadows. You basically need two render passes per light source in the scene, plus one (or two? I don’t remember) for the final assembly on the screen. And you only need two because of a nifty trick called Carmack’s Reverse (guess after whom it’s named), the original version of the algorithm needed three per light. And you need all that for every frame you render.

            1. Frank says:

              I use reflection planes for mirrors in my 3DWorld game engine. There are lots of optimizations that can be used to get the reflection pass to a fraction of the main pass’s render time. It can be a lower resolution (no need to have the reflection be higher resolution than the screen area of the mirror). You can use lower quality antialiasing. Many postprocessing effects can be skipped. Visible object culling/batching can be shared across the main and reflection pass, and dynamic data is only sent to the GPU once. Some data can be reused from previous frames. Small/distant objects that aren’t very visible can be skipped. In addition, mirrors are often in smaller rooms where the view distance is lower. I’m able to make the floor and walls have mirror reflections with less than a 2x increase in frame time.

              For projected shadows, the projected geometry is often much simpler than the normal rendered geometry because it can be lower detail, and only edges on the object’s silhouette need to be drawn. Basically, you just extrude a 3D object onto a 2D plane. It’s not very expensive to render unless you have a large number of light sources affecting the same object. Most modern games use shadow mapping now though, which allows for various forms of soft shadows.

              You can do both reflections and shadows without ray tracing, but it’s a different quality vs. performance trade-off. Ray traced scenes look different because reflections tend to be sharper, but shadows tend to be smoother. Ray tracing doesn’t technically add any new features, what it does is implement these features in a different way that gives these games a new look. The real power is in combining ray tracing with traditional techniques (or rather selecting one vs. the other based on the situation/environment) to get the best of both systems.

          3. Jabberwok says:

            I could see myself in Portal, and I was totally fine with the way it looked. And like every game prior to that as well. This is another issue I have with photorealism. The amount of extra work needed to make things look realistic is sort of ridiculous. A player model and some basic animations used to be enough, but I’m assuming the level of fidelity required is why it no longer is.

            1. Sleeping Dragon says:

              I’m four days late to the party so probably nobody but Shamus is going to see this but here’s a thought: light… actually passing through portals…

          4. Paul Spooner says:

            I like how Raytraced Minecraft fixed this. Player Character is a vampire, does not appear in mirrors. Problem solved.

    2. Will says:

      In the early versions of the Unreal engine, you could apply a reflective effect to basically any surface, but I’m not sure if this was how it was achieved. I do remember setting up real time CCTV cameras in UT using portal zones that allowed you to look in to other areas of the level geometry. You just had to remember to turn off the portal’s ability to transfer projectiles, or things got extra sporty when people shot through the monitors.

    3. Milo Christiansen says:

      For a second I thought you were talking about the original PREY, where one of its claims to fame was portal rendering. You would come up to a portal and you could see where you were going to end up, then you step through and you could look back at where you came from. Cool! (at the time anyway, lol)

      I remember one segment where you step through a portal and end up on the surface of a sphere in a small tank in the middle of the room you were just in, then you go through another portal and end up elsewhere. Portals you can look through, gravity that doesn’t pull in one direction over the whole map, AND changing your size! Cooooool… I’m sure a lot of it was trickery, but it was pretty awesome back then.

      1. Duoae says:

        And unfortunately, it was nowhere near as cool as the *original* original Prey which had its own engine developed to enable the portal tech. In fact, I don’t think that technology has been replicated in a game since…. even Portal is a “hack” by comparison.

        I couldn’t find all of the gameplay videos…. but this E3 demo shows most of the stuff. Portals, reflections and reflective surfaces, skeletal animation, radiosity lighting, real time destructive environments, dynamic shadows…. etc.

        https://www.youtube.com/watch?v=LbeBBUdxLqE

        https://www.youtube.com/watch?v=fmCsGFCV4u0

        It really was an amazing engine for the time (and even today) though they could never get it working quite right…

  2. Dreadjaws says:

    There are a couple of problems with Raytracing adoption. While from a developer’s standpoint it’s a very exciting technology, the tricks to simulate lighting conditions have become so prevalent that from a user standpoint the results are almost imperceptible. In most Raytracing tech demos you’ll find people wondering exactly what they’re supposed to be excited about. This is why the most effective show of differences comes from old games like Quake II, where the lighting differences are insanely evident.

    Couple this with the excessive price point of dedicated hardware. As you mentioned, for the price of a Raytracing-supporting card you can get three consoles and more, but you still need the extra hardware for your PC. Unless you already have a gigantic, expensive gaming PC chances are you’re gonna have to upgrade the rest of the components as well.

    If VR, which is genuinely an entirely different way to experience games, hasn’t been able to take off due to high price points (even when currently it’s comparatively cheaper than Raytracing hardware) then this is going to get an even harder time. It’s nice to have the technology ready, but I think it’s way too early to get excited about it becoming ubiquitous. Consider that developers can’t just use Raytracing now, they still have to program all their old tricks if they want their games to sell to the larger audience rather than to only the few users who can afford the new tech.

    I’m curious about the PS5. If it’s going to be able to support Raytracing, is it counting on the tech becoming cheaper by release time, is it using a somewhat lighter version of it or is it going to be an expensive console? Maybe they’ll just sell at a loss banking on the fact that supporting this tech from the get-go will make the PS5 the go-to platform for graphic junkies.

    1. Ancillary says:

      I’d argue that the widespread adoption of raytracing is inevitable because its effects are relatively underwhelming from a consumer’s perspective. VR adoption has many barriers—monetary, motion sickness, sensory deprivation, the need for a dedicated space to flail around, etc. On top of that, VR can’t easily be ported into existing games without a complete rethink of gameplay and balance.

      Meanwhile, raytracing’s only barrier is monetary. A console like the PS5 could adopt it without fearing accusations of adopting a Kinect-like gimmick. And once there’s console buy-in, developers and manufacturers no longer need to worry about a market base. Eventually, the tech will simply be ubiquitous.

    2. Bloodsquirrel says:

      There’s three things that I think separate VR from raytracing:

      1) VR has more than just monetary costs. It’s more complicated for the user, takes up a lot of space, and makes some people sick. Right now, raytracing is expensive, but only in the same sense that high-end graphics cards (which VR also requires) have always been expensive. VR has gotten cheaper, but there are still other issues holding it back. Raytracing won’t have the problem- once the graphics cards are cheap, it’ll be just plain better.

      2) Raytracing doesn’t require an extra install base. Comparatively speaking, it’s much easier to put out a traditional-engine game with raytracing being optional. Once the next generation of graphics engines comes along it might even be automatic- Unreal 6 or 7 or whatever they’ll be up to might just come with both paths. VR is more difficult- it affects the way the game is played more fundamentally. You can sort of hack VR onto a non-VR game, but you’re really just using the VR as a full-vision monitor at that point. You don’t get the motion controls for free.

      The truth is that VR still hasn’t had a killer app. Developers have yet to produce a VR game good enough to take over the public conversation on its own merits. Everything still has that “tech-demoy” feel to it. That’s something VR needs, since it’s such a different upgrade path than “faster graphics card, prettier graphics”.

      Raytracing won’t have that problem. All we need is to be able to make graphics cards faster for cheaper (which we’ve been doing… although we are hitting the limits of what tricks we can pull out of silicon) and it’ll be an easy switchover for the consumer. Hell, he won’t even have to know anything about Raytracing. Once the middle-of-the-road graphics cards all support it, it’ll just show up on his screen. He’ll just know that he has prettier graphics.

      1. Ancillary says:

        What was the third thing?

          1. The Rocketeer says:

            The third thing is “????????”. The fourth thing is Profit.

      2. Xeorm says:

        Third thing: Raytracing makes producing the game far easier. You can get very good lighting through ray tracing fairly simply, compared to traditional methods. Assuming it gets to the point where non-raytracing customers can safely be mostly ignored, that’s a lot of savings to be had.

        But really, I think the biggest thing holding back VR is that it’s a different experience. I’ve played it a bit and the games do have that feeling of being tech demos, but also just play really differently. Even without a monetary cost to VR games I don’t think I’d be very eager to play them much. Having to buy all the equipment too? No thanks.

        But I am eager to see the improvements from raytracing.

        1. Dreadjaws says:

          Third thing: Raytracing makes producing the game far easier.

          Well, this ties into the point I was making up there. Sure, it’s easier… if you commit to it exclusively. That means no emulation of light conditions through trickery. That means the games will only work on dedicated raytracing hardware. That means the great majority of users won’t be able to play them until this hardware becomes affordable. That means developers won’t bother with it for a while.

          1. Bubble181 says:

            Not if some console supports it. If the PS5 supports Raytracing, you can bet PS5 exclusives will use it. Which will make PC gamers call for it, etc. There’ll be a few years of games offering both, perhaps, but they’ll simply push through the idea that you need to upgrade.
            As a consumer, I’m not a fan – I don’t think raytracing is the end-all-be-all, honestly.

      3. Rack says:

        VR’s biggest problem is still reporting. Gaming media in general just stopped reporting on VR a few years ago, so the impression that there’s nothing but tech demos has persisted. The best titles in VR are still at the AA stage, with a couple of potentially low-tier AAAs apparently incoming. That’s still a really hard sell for people who are used to buying hardware to play the games rather than for what that hardware can do.

    3. Cubic says:

      I have to admit I’m not a great graphics detail appreciator myself, I was basically satisfied with the PS3.

      One question is whether ray tracing is an endpoint to graphics engine hacking (no idea, is it? how flexible is the ray tracing hardware w.r.t. materials and stuff?). If so, I guess technical efforts will over time shift to other topics. Just let the hardware improve.

      1. Nimrandir says:

        I go even further back on the graphical fidelity scale; I’ve been fine with things since Final Fantasy VIII, which most only remember now through cave paintings of the first Kingdom Hearts.

        I acknowledge my status as an outlier, but I really would like to get to the bottom of my lack of concern. As I said in a comment a while back (apologies for the repetition), when I picked up Morrowind earlier this year, multiple people started recommending graphical overhaul mods through a Slack chat. Words like ‘playable’ and ‘tolerable’ were used. I looked from my tablet to my game display, shrugged, closed the chat window, and hopped on the silt strider to Balmora.

        Maybe I need therapy.

        1. modus0 says:

          To me, graphics are the distant third most important feature of a game, behind story and gameplay.

          I love Thief II: The Metal Age, which has (IMO) great story and gameplay, but everything is super polygonal. In contrast, while Thief (2014) had great graphics, the gameplay was so-so, and the story was a patchwork disaster. I’m far more likely to jump into the former than the latter.

          I’m also far more likely to start up Morrowind again (without a bunch of graphical overhaul mods), than I am Fallout 4, despite the latter being far more visually impressive.

          1. John says:

            I’m the kind of person who tells himself that graphics are less important than story and gameplay. It’s possibly even mostly true. I mean, I don’t play a lot of graphically demanding games, partly because my PC couldn’t handle them anyway and partly because graphically demanding genres aren’t my particular favorites. But there are times–oh, there are times–when my inner graphics snob asserts himself. I can’t play Privateer anymore. I wish I could. I love that game. (Loved that game?) But I cannot go back to the blotchy, sprite-based graphics. I just can’t. And when I played Disgaea 2 PC last year the PS2-era polygons and textures were painful to look upon. Funnily enough, Disgaea 2 might have been okay if the 3D graphics hadn’t contrasted so unfavorably with the crisp, clear 2D graphics and interface. It’s a strange case where making parts of the game look better somehow made the game look worse overall.

            So, yeah, graphics are totally less important than story or gameplay. Just as long as the graphics are good enough. If the graphics aren’t good enough . . . well, that’s when the knives come out.

      2. Bloodsquirrel says:

        I’m fine with the graphical fidelity that we had with that generation, but as has been noted, to get that level of fidelity on the existing hardware required a lot of gameplay compromises (i.e., more static scenery, level size/layout restrictions, etc.). Even on the artistic side, one of the reasons that we went through the age of brown was because it was easier to hide the limitations of the lighting engines at the time.

        That’s the real advantage of better rendering techniques to me- removing those limitations so that we can get things like massive continuous play spaces, moving light sources, etc.

    4. Matthew Downie says:

      “the tricks to simulate lighting conditions have become so prevalent that from a user standpoint the results are almost imperceptible”. Don’t most of these tricks rely on the scenery and lighting being pretty fixed and limited? If the game is built with raytracing in mind, it will allow for more interactive environmental gameplay.

      Though in that case, you might have trouble producing a low-end version of your game for cheaper hardware.

      1. Dreadjaws says:

        Yeah, but, again, producing a game for only raytracing will require the hardware, and we’re not at a point where the great majority of users can afford it. We won’t be for several years if current standards are any indication.

        1. Cubic says:

          Assuming PS5 supports ray tracing, Sony already has a lineup of capable studios that produce Playstation-exclusive games.

    5. Duoae says:

      I’m curious about the PS5. If it’s going to be able to support Raytracing, is it counting on the tech becoming cheaper by release time, is it using a somewhat lighter version of it or is it going to be an expensive console? Maybe they’ll just sell at a loss banking on the fact that supporting this tech from the get-go will make the PS5 the go-to platform for graphic junkies.

      Maybe I’m incorrect in my understanding, but my thought wasn’t that it’s expensive to make the Raytracing parts of the RTX cards, at least not more so than the normal graphics processing parts, but that Nvidia think they can charge for the privilege. (And obviously the development of the tech probably wasn’t “cheap” either – Nvidia’s R&D budget is quite large, IIRC.)

      Since the chipset in the PS5 will be from AMD, and since AMD aren’t really able to compete with Nvidia in pure performance, it seems logical to assume that AMD will release cheap Navi-based cards with Raytracing as powerful as, or more powerful than, the 2060/2070. It would be a good way for them to compete. In which case, the cost to the PS5 of including such tech is no more than the cost of the silicon area it will be etched on. This isn’t like a disc drive, where adding one takes up more cost and space.

      1. Geebs says:

        The price hike was more to do with Ethereum and some of the other altcoins being mineable on nVidia hardware, and the consequent GPU shortage and price gouging, than die area – although I believe the RTX processors are pretty big.

        1. Duoae says:

          Erm, I don’t think that’s the case? The GPU mining craze was pretty much over by the time of the RTX 20XX line. They had surplus GTX 10XX series cards still in the channel before the 20 series was released, because people had stopped buying them, and this was linked with their slight delay in getting the newer generation out. Plus, that theory doesn’t work out in terms of increasing GPU cost: AMD cards were vastly preferred for hashing early on (they had the most favourable cost/return) until FPGA implementations took the lead.

          Let’s see about die size:
          16 nm process
          1060 – 4,400M transistors – 200 mm^2
          1070 – 7,200M transistors – 314 mm^2
          1080 Ti – 11,800M transistors – 471 mm^2
          12 nm process
          2060/2070 – 10,800M transistors – 445 mm^2
          2080 Ti – 18,600M transistors – 754 mm^2

          This image seems to indicate that approximately 1/4 of the die is for the ray tracing portion – leaving the remainder as approximately 565 mm^2.

          This is plausible to my eyes, because 0.75 × 18,600M transistors gives 13,950M transistors across the tensor and compute/shading sections. That’s approximately 118 % of the 1080 Ti’s transistor count (an 18 % increase), and looking at the relative performance of the cards, we see approximately a 22-28 % increase in performance for the 2080 Ti over the 1080 Ti.

          Considering that the tensor core section takes up approximately the same space, I’d say that the cost of having the RTX functionality is not proportional to the die size. Also, given Nvidia dropping the prices of the equivalent of the 2070/2070.5 in terms of TFLOPs with the two SUPER cards for competition with the 5700 series from AMD, it seems pretty clear that graphics card prices are quite divorced from their manufacturing costs.

          1. Duoae says:

            Forgot the link to ethereum mining hardware preferences:

            https://99bitcoins.com/ethereum/ethereum-mining/

            1. Geebs says:

              I agree with you re: price vs. cost of manufacture; I’m just saying that nVidia saw that people had got used to paying ludicrous prices for GPUs due to the crypto boom, and opportunistically hiked their prices to the “new normal”.

              1. Duoae says:

                Ah, okay. Then we’re in agreement! :D

    6. Agammamon says:

      Supposedly Scarlett will also have raytracing – *if* AMD can figure out how to implement it, since Scarlett will be using AMD GPUs.

      1. Duoae says:

        Looking at the coverage of AMD’s ray tracing patent, it seems they want an “in-line” solution as opposed to a dedicated ray tracing section of the chipset.

        In theory, this could mean that the ray tracing scaling could be more efficient than in Nvidia’s implementation (especially with the historically normal method of disabling certain parts of the die due to damage/binning). Of course, to my eyes, this could also mean more power and TDP than in Nvidia chips (which is already the case with AMD’s chips compared to the equivalent Nvidia process nodes and die sizes) since you have more silicon “active” at all times.

        https://www.tomshardware.com/news/amd-patents-hybrid-ray-tracing-solution,39761.html

        AMD’s implementation also appears to imply that the in-line operation could be more efficient than Nvidia’s “divide and process” route simply due to the need to have shader data calculated at the same time to be applied in parallel – which I imagine can be tricky to manage.

        Aside from the obvious answer of years of optimizing code, there are some specific new elements that help make RTX possible. First, to get maximum performance, shaders for all the objects in the scene need to be loaded into GPU memory and ready to go when intersections need to be calculated.

        https://www.extremetech.com/extreme/266600-nvidias-rtx-promises-real-time-ray-tracing

  3. DeadlyDark says:

    Mirrors are just decorations with no gameplay value? You should play Hitman 2.

    I’m still very impressed by how they still reflect stuff even after you break them.

      1. DeadlyDark says:

        Oh yeah, it’s a trick.

        What I meant is that mirrors are used by NPCs to spot Agent 47. It’s not just decoration.

  4. Asdasd says:

    I’ve never had a problem with abstractions and hacks. The imagination can do wonders at filling in the gaps. In fact it’s when a game tries to sell me on ‘face value’ realism that my brain starts invisibly noticing the differences.

    So I guess I’m much more of a ‘more pixels’ guy than a ‘shinier pixels’ guy. When I finally get around to my next big PC upgrade, I’ll be looking to sustain the current level of shiny at a higher resolution and a higher refresh rate. Of course, I have an idiot-level understanding of graphics technology, real-world physics and what not, so there’s also every chance that my standards have been unconsciously raised by graphical improvements over the past decade without me ever noticing.

    Case in point: I started playing WoW Classic recently (groan) and I think its graphics are perfectly acceptable. But I have no idea if this is how it looked when I first played it in 2004! For all I know there could be any number of graphical upgrades that Blizzard have applied. And my PC is obviously a lot better than it was back then. Just basic stuff like it running at 1080p60 obviously makes it a more 2019-proof experience.

    While I know when I stopped actively caring about tech advances in games (shortly after we got working ragdoll/object physics and independently poseable facial expressions/lip synching I was pretty much good), I have no idea to what extent I subconsciously stopped caring, such that I’d now miss modern conveniences that I don’t even know the names of.

    In summary: raytracing. It’s good, probably?

    1. KillerAngel says:

      Just FYI for Classic WoW in the graphics settings you can choose a preset on the slider. By default “3” is 2004 graphics and “10” is modern WoW graphics.

      So you can choose. I mostly play on the ’04 graphics but sometimes I’ll slide it over to see how things would look.

  5. Mephane says:

    Raytracing is probably a couple of years away from widespread adoption. (Assuming it catches on, obviously.)

    Oh, it will absolutely catch on. The reason is that raytracing scales better with scene complexity, so there exists a sweet spot beyond which raytracing performs better than rasterization. I can’t say exactly how far away that point is right now, but I expect it to be less than a decade. The goal is ultimately to switch entirely to raytracing, because beyond that point it wouldn’t make sense to use anything else.

    1. Droid says:

      They kind of dug themselves into a bit of a hole there, imho. They tried to push hard for 4K right before raytracing started to seem like it would finally be viable for real, and now that people who wanted to have bleeding-edge technology bought 4K monitors and equipment, they need to back out and go back to 1080p because while raytracing scales very well with scene complexity, it scales horribly with resolution.

      1. Simplex says:

        True, but then there are tricks like DLSS to work around this. I know that DLSS worked really bad in previous games, but in Control it works surprisingly well: https://www.youtube.com/watch?v=yG5NLl85pPo

        1. kdansky says:

          AFAIK, in all games so far DLSS is just worse than rendering at the same lower internal resolution and then using TAA in combination with a decent sharpening filter.

          DLSS is either slower or uglier (or sometimes both) than that much simpler approach, as its per-frame cost is massive, especially compared to how fast modern sharpening filters are. Hell, you can even use the way faster FXAA with sharpening and get a higher framerate than DLSS.

          Personally I run reshade on many modern games anyway to fix ridiculous colour grading (Vermintide and Monster Hunter World), and adding a sharpening filter costs about 1-2 fps at an average above 60 Hz.

          Of course that video never talked about that comparison. Here’s one: https://www.youtube.com/watch?v=dADD1I1ihSQ

      2. Mephane says:

        But does it scale with resolution worse than rasterization? Afaik they both scale linearly with the number of pixels.

        1. Droid says:

          I think due to the methods involved in processing the image, that is not entirely true for either rasterization or raytracing. Just think of Anti-Aliasing and Anisotropic Filtering (both things that you need with both techniques, right?): if you go from 1080p @ 72 PPI to 4K @ 144 PPI, you have the same screen size, but due to higher pixel density, a 4x AA and 4x AF will have a much more localised effect. So to get roughly the same effect, you’d have to increase to 8x AA and 8x AF (or even 16x, can’t remember what exactly the number does off the top of my head), increasing the processing cost.

          But to get back to the main point: while I admit that I know little about the actual cost of increasing resolution with raytracing, I’m pretty confident that raytracing currently still requires more powerful computers than rasterisation at the same resolution and graphical fidelity. So you have to knock down the resolution. You might be able to get acceptable framerates at 4K with rasterization, but it’s just ‘acceptable’ and not ‘great’ most of the time, and the increased cost of raytracing has a good chance of making it unplayable, forcing you to decrease the single most impactful setting: resolution.

    2. Exasperation says:

      It’s tough to say exactly how far away that is because it depends on the relative efficiency and sheer amount of computing resources you’re throwing at rasterization vs. RT (i.e. doing both on the CPU is much more favorable for RT than doing RT on CPU and rasterization on a dedicated GPU because the GPU both has more raw power and is designed to do rasterization very efficiently). A few years ago (before RTX, but after the advent of things like CUDA and OpenCL to allow most of the power of a GPU to be used – albeit inefficiently – for other types of computation) the crossover point was somewhere on the order of 100 million triangles (I’m getting this number from research into using RT to render highly-detailed CAD models for engineering purposes, where simplifying the model’s polygon count is unacceptable due to the resulting loss of accuracy). I would hazard a guess that dedicated RT hardware like RTX would knock an order of magnitude or so off of that, so maybe ~10 million triangle scenes?

  6. Dev Null says:

    “As you read this article, billions of photons are pouring out of the screen.”

    Actually, that’s the demo of raytracing I want to see. A person in a dark, cluttered room watching – and lit by – a dynamic video on a screen. Sit them next to a window, or a second dark monitor, just to throw some reflections in too.

    1. Geebs says:

      For that scene, it would be much cheaper just to project the dynamic video as a texture, though, like Control does. If you tried to trace rays instead, the temporal reconstruction techniques needed for performance, in combination with video compression, would probably turn everything into an Aphex Twin promo.

  7. Ninety-Three says:

    Wait, was that Control footage with raytracing on? It looked so much like my mid-tier graphics card experience that I assumed it was raytracing off and the tech was going to be switched on at some point for impressive contrast. I spent the first few minutes looking for the raytracing fanciness but all I noticed was that the lighting still did that thing where it gets weirdly bloomy depending on exactly where you’re viewing it from.

    I’ve seen demos like Quake or Minecraft where raytracing is crazy impressive, but boy was Control underwhelming.

    1. Janus says:

      You can see Shamus moving the character around some windows in a dark office, and they reflect light from the rest of the scene at certain angles.

      1. Hector says:

        I’m sure that is amazing to graphics geeks, but I don’t think that many people cared about it in the actual market.

        1. Nimrandir says:

          I’d agree that it doesn’t have an impact for the typical user during play, but the issue is the market doesn’t get exposed to these games without marketing. While I’m pretty insulated from advertising and hype culture, I presume this stuff comes up when marketers try to whip the public into a frenzy over their new releases.

          I’m not sure how to separate the variables here, apart from a well-funded developer stating overtly, “we’re happy with the graphical standards of 20XX, so we’ll use our budget primarily on gameplay development and quality narrative.”

    2. Dreadjaws says:

      It’s more or less the point I was trying to make up there. Developers have become so good at emulating light conditions bit by bit that this new technology entirely fails to impress in games with current graphics. I also didn’t know the footage of Control had raytracing enabled until I read it here.

    3. Simplex says:

      If you want to see how raytracing affects the look of Control, I recommend this article and accompanying videos: https://www.eurogamer.net/articles/digitalfoundry-2019-control-pc-a-vision-for-next-gen-rendering

    4. kdansky says:

      Prime example of bad art direction ruining otherwise impressive effects.

      The Minecraft example honestly looked so much better than Control.

  8. Hector says:

    Man, this has got to be really hard on Ray, getting all that tracing done in real time. Dude really puts in the hours y’know? The constant Crunch Time has got to be getting to him but he soldiers on.

    1. Joshua says:

      What did you do, Ray?

      1. Nimrandir says:

        Somewhere, Dan Aykroyd looked really guilty for a moment, and he had no idea why.

      2. The Rocketeer says:

        “I couldn’t help myself. It just… popped in there. I just tried to think of something novel but harmless, something that could never, ever result in an immense coding nightmare that’s counterintuitively disastrous for immersion.”

        *giant mirror with reflection of player strafing and crouch-hopping lumbers out of the obscuring skyline*

        1. Hector says:

          “We can do more damage that way.”

  9. Armstrong says:

    Holy crap, Minecraft looks a lot better than I remember!

    1. Matthew Downie says:

      Still looks kinda blocky to me. Haven’t they fixed that bug where everything looks like it’s made of cubes yet?

  10. Carlo T says:

    Slightly OT, but I really like the new post + video double format – it allows filling both a short and a long break, depending on the need. Hope it works and drives interest!

    Also, is it just me, or does Shamus sound a lot like Jim Sterling (only much more “well-mannered”)?

  11. Anorak says:

    Ray tracing actually doesn’t get you radiosity and soft shadows for free. Ray tracing traditionally calculates what an individual object (either a primitive, like a sphere, plane or cube, or a triangle) will look like by checking if it intersects with a ray, then looping through all of the light sources in the scene. Light sources in traditional ray tracing are all point light sources, and so cast hard shadows.
    Adding more light sources massively increases the number of calculations you have to do, and every mirrored surface will spawn a new ray, which in turn needs to check the visibility of all objects, and their visibility to a light source… and so on.
    The technique has been around for decades, but it’s much too expensive.
    Like I said, this only really works with point light sources. For soft shadows from larger light sources, you have to do an entire extra set of hacks, tricks, and workarounds.
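    To make that concrete, here’s a toy Python sketch (entirely my own illustration, not anyone’s real renderer) that just counts rays instead of shading anything, so you can see how lights and mirror bounces multiply the work:

```python
import math

def sphere_hit(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant, with a = 1 (unit ray)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def count_rays(origin, direction, spheres, lights, depth=0, max_depth=3):
    """Count rays cast for one camera ray: the ray itself, one shadow ray
    per light source at every hit, and one recursive ray per mirror bounce."""
    rays = 1  # the ray we were asked to trace
    hits = [t for s in spheres if (t := sphere_hit(origin, direction, *s))]
    if not hits or depth >= max_depth:
        return rays
    rays += len(lights)  # shadow rays: every extra light is extra work
    # every mirrored surface spawns a new ray, which repeats the whole process
    # (for counting purposes we just recurse with the same ray)
    rays += count_rays(origin, direction, spheres, lights, depth + 1, max_depth)
    return rays
```

    With one sphere and two point lights, a single camera ray that keeps hitting a mirror already costs ten rays at a recursion depth of three, and every extra light adds another shadow ray at every bounce.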

    Path tracing gets you much closer to what you want. This uses the same basic set of algorithms as Ray tracing, but instead of shooting a ray out of the camera and checking what it intersects and how it’s lit, what you do instead is:

    a) Shoot ray out of camera/through the pixel.
    b) check what it intersects
    c) check the material properties. “Pick up” modifiers about this material
    d) randomly shoot a new ray into the scene, from the intersection found at b)
    e) repeat b-d until either a set depth of recursion is reached, OR you reach a light source. Then you calculate the value of the pixel.
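    Boiled down to code, that loop looks something like this (a deliberately geometry-free Python toy of my own, with made-up scene values, just to show the control flow):

```python
import random

# Toy "scene": each surface is just an albedo and an emission value.
# Real path tracers do actual intersection tests; here a bounce simply
# "hits" a random surface, which is enough to show steps b) through e).
SCENE = [
    {"albedo": 0.8, "emission": 0.0},  # light grey wall
    {"albedo": 0.5, "emission": 0.0},  # darker wall
    {"albedo": 0.0, "emission": 5.0},  # a light source
]

def trace_path(rng, max_depth=8):
    """One sample for one pixel: steps b) to e) from the list above."""
    throughput = 1.0
    for _ in range(max_depth):
        surface = rng.choice(SCENE)      # b) "intersect" something
        if surface["emission"] > 0.0:    # e) reached a light: done
            return throughput * surface["emission"]
        throughput *= surface["albedo"]  # c) pick up the material modifier
        # d) shooting a new random ray is implied by the next rng.choice
    return 0.0                           # e) depth limit reached, no light

def render_pixel(samples=2000, seed=42):
    """Average many noisy samples; more samples means less noise."""
    rng = random.Random(seed)
    return sum(trace_path(rng) for _ in range(samples)) / samples
```

    The averaging in render_pixel is why path-traced images are noisy at low sample counts: each individual sample is a random walk that may or may not find a light.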

    I built one of these things a few years ago, just because.
    Path tracing:
    https://blog.daft-ideas.co.uk/?p=170
    Ray tracing:
    https://blog.daft-ideas.co.uk/?p=162

    1. Timothy Coish says:

      What I don’t understand is why Path Tracing doesn’t scale poorly with increased polygons. It’s clear why Raytracing does, after all you have to do more intersection tests, but why not Path Tracing as well?

      1. Anorak says:

        Oh, path tracing also scales poorly in the same way. It’s massively slower because you have to keep bouncing the ray around the scene until the end condition is met. Each bounce means retesting against every polygon or primitive all over again.

        1. Exasperation says:

          Neither ray tracing nor path tracing scales poorly with the number of polygons. In fact, they both scale better with polygon count than rasterization. In terms of complexity, the time needed to render a scene with N polygons is directly proportional to N for rasterization, but proportional to log(N) for ray/path tracing. The issue is with the base complexity for small values of N, where the difference between N and log(N) is small enough not to really matter. As N grows larger, the difference between N and log(N) also grows larger, until it eventually reaches a point where it overwhelms the starting advantage that rasterization has (ray tracing has been looked at as an alternative rendering method for CAD work that involves very large detailed models in part because the data sets involved are so large that ray tracing can actually provide more frames per second than rasterization on the same hardware).
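          The shape of that argument is easy to play with. Here’s a tiny Python sketch with completely invented cost constants (they stand in for the real per-polygon and per-ray costs, which I don’t know) that finds where the O(log N) curve undercuts the O(N) one:

```python
import math

# Invented constants purely to illustrate the argument: rasterization pays
# a small cost per polygon, while ray tracing pays a large constant factor
# but only log2(N) per polygon thanks to acceleration structures (BVHs).
RASTER_COST_PER_POLY = 1.0
RT_COST_FACTOR = 5000.0

def raster_cost(n):
    return RASTER_COST_PER_POLY * n       # O(N)

def rt_cost(n):
    return RT_COST_FACTOR * math.log2(n)  # O(log N)

def crossover():
    """Smallest polygon count (doubling upward) where ray tracing wins."""
    n = 2
    while rt_cost(n) >= raster_cost(n):
        n *= 2
    return n
```

          With these made-up numbers the crossover lands around 131 thousand polygons; the real constants are what decide whether the actual figure is thousands, millions, or the hundreds of millions seen in the CAD research.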

          1. Anorak says:

            I’ll have to take your word for it; I don’t know much about rasterization.

            It’s been a while since formal education, so I might get this wrong:
            The ray / scene intersection algorithm is O(log n) because you stop after finding the intersection.

            But each intersection might trigger a new search, so each reflective bounce, or refraction calculation, would square the time complexity, right? Which is why the path tracing / global illumination is so much slower.

            That’s for the naive version, anyway. Sensible data structures for polygons (bounding boxes, trees) help immensely.

            1. Exasperation says:

              As I understand it (simple version*), having b bounces increases the time taken by a factor of b. If your maximum allowed bounces are B, then you have O(B log n), and since constant multiples are ignored in big-O notation, this is just O(log n) (basically, allowing more bounces increases the starting constant, but not the scaling factor).
              Another way to look at it: you can reach your maximum allowed bounces for all cast rays very easily with just a few reflective polygons, placed in a box around the viewpoint; adding more polygons to the scene doesn’t make the worst-case scenario (which is all that big-O cares about) worse than this in terms of bounces (i.e. in terms of the number of rays you have to cast), it just makes it harder (that is, more computationally expensive) to figure out where each individual ray intersects the geometry. This also demonstrates just why you need to limit your maximum bounces; in the degenerate-mirror-box case (or even just a viewpoint between two parallel mirrors), having no maximum bounce limit would lead to a non-termination condition.

              * not-so-simple version: you probably want to cast more than one ray per bounce, so having b bounces increases the amount of time by a factor of roughly (rays cast per bounce)^(number of bounces), but to big-O, this still just gets absorbed into the (admittedly giant) constant factor, leaving O(log n) in number of polygons.

    2. DGM says:

      >> “Adding more light sources massively increases the number of calculations you have to do, and every mirrored surface will spawn a new ray, which in turn needs to check the visibility of all objects, and their visibility to a light source… and so on.”

      I was about to point this out but you beat me to it. But you missed something: you not only have to cast a reflection ray when you hit a surface that’s even slightly reflective, you also have to cast a refraction ray when you hit anything even slightly transparent. So reflection, transparency AND more lights are all expensive.

      If you assume that ALL objects will be both reflective and refractive you could then optimize a bit for that (by eliminating a ton of if/then checks, if nothing else). You’d no doubt have a much higher overhead but reflection and refraction would be free afterwards. But I don’t know that you can do anything about additional light sources.
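      For what it’s worth, the cost of that “everything reflects and refracts” assumption is easy to put a number on with a back-of-envelope one-liner (my own sketch, not from any real renderer):

```python
def worst_case_rays(max_depth):
    """Worst-case ray count for one primary ray if every hit spawns BOTH a
    reflection ray and a refraction ray: the rays form a full binary tree,
    1 + 2 + 4 + ... + 2^max_depth = 2^(max_depth + 1) - 1."""
    return 2 ** (max_depth + 1) - 1
```

      Ten bounces is already over two thousand rays per pixel in the worst case, before you even count the per-light shadow rays at every hit.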

      1. Anorak says:

        Correct, you have to check for the transparency of the object and what its refractive index is, if the program you’re building is meant to support transparent objects. It’s not really that much more expensive than reflection, though, if I recall correctly.
        I did some of that as well: https://daft-ideas.co.uk/ipatbak/public_html/c++/Raytracer/Images/rabbit_and_glass_sphere.png

        The nice thing about these is really that it’s just physics, so implementing something like refraction is really easy.

  12. Janus says:

    One point I’d assumed you’d make in the article/video, was coming back to the “more pixels vs shinier pixels” thing. Once general-purpose ray-tracing hardware becomes cheaply available and powerful, it’ll be the last time we need to swing the pendulum, since everything should just be “for free” with the ray-tracing, instead of needing more hacks for special cases. After that, it will just be more and more rays, so you can have multiple reflections off of other surfaces, higher resolution of the reflections, etc.

  13. ccesarano says:

    One minor bit of feedback regarding the video: if possible, have “without” and “with” comparison footage. I wasn’t sure if the Control footage had ray-tracing on or not because I haven’t spent enough time with the game, and even if I had I’ve spent it on a PS4 sitting in my bed across the room so it’s not going to feel the same as watching a YouTube video on a monitor right in front of me. In order to clearly demonstrate the differences the lighting makes, it would have been useful to see a particular room without ray-tracing with you pointing out some of the shortcuts (like how you noted them for Doom 3), and then view that same room with ray-tracing on and point out the differences. I began to get a feel for the ray-tracing impact from certain lighting effects visible on the pathway to and from the Hotline, but it wasn’t immediately apparent. Even when I realized what I was looking at, I wanted to see the “without” to help clearly demonstrate the difference.

    Kind of like when I was a teenager, I couldn’t see much improvement in DVD over VHS until I went back to watch an old VHS tape a year after watching DVDs regularly. It was then that the gap in picture and audio quality became obvious.

    As for the topic itself: I’m curious if some design choices will still be made in favor of breaking realism. One of the reasons I believe Nintendo does shadows as circles directly beneath the character, at least in Mario games, is to provide a visible marker of where the player will land. In real life, the shadow isn’t directly beneath the person as they fall. So you have to break the laws of reality and portray the shadow as a marker so that the player can more accurately position themselves in tricky sections.

    Now, theoretically you can just add some kind of marker that highlights where the player will land. However, the question is which is more immersion-breaking: a familiar concept like the shadow behaving unrealistically, or some kind of HUD element that only appears in certain moments? (And does that moment change based on how high the player has jumped, or some other factor?) You’d need something beneath the player to indicate where they land, and perhaps it becomes more opaque the higher they are above a surface, but it’s still an unfamiliar idea that’s clearly there for mechanical purposes.

    I’m not a fan of “let’s make things less video-gamey”, and there aren’t a lot of franchises more video-gamey than Mario, but I feel like the shadow is one of those concepts where you’re able to break reality just enough that the player’s willing to go along with it. You don’t question using the shadow to gauge a jump. It never feels like an outright game mechanic, a tool specifically placed and manipulated in such a way for a practical purpose. But you readily use it just as the designer intends.

    So even if Nintendo makes a device in 15 years that finally uses ray-tracing, will Mario still have that shadow beneath him to best convey where the player is jumping towards?
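The landing-marker idea discussed above can be sketched in a few lines. This is purely hypothetical (names and the fade rule are invented): drop a marker straight down from the player, and fade it with height as the comment suggests.

```python
# Hypothetical landing marker: positioned directly below the player,
# with opacity ramping from 0 at ground level up to 1 at fade_distance.

def landing_marker(player_pos, ground_height, fade_distance=10.0):
    """Return (marker_position, opacity) for a jump-landing marker."""
    x, y, z = player_pos
    height = z - ground_height
    opacity = max(0.0, min(1.0, height / fade_distance))
    return (x, y, ground_height), opacity

pos, alpha = landing_marker((3.0, 4.0, 5.0), 0.0)
print(pos, alpha)  # (3.0, 4.0, 0.0) 0.5
```

Mario’s circle shadow does exactly this job for free; a marker like the above has to be tuned so players trust it the same way.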

  14. Karma The Alligator says:

    Have to say, as a 3D artist, it’s confusing to hear about raytracing being new (I know it’s meant to apply to games only in this case, but still).

    Also, how do people manage with several monitors like that picture above? The lines in between (the monitor’s edges) would drive me mad.

    Anyway, great video, made me want to play Control even more!

    Oh, and the explanation of how lights and shadows work then and now is good, too.

    1. Simplex says:

      “it’s confusing to hear about raytracing being new (I know it’s meant to apply to games only in this case, but still).”

      What’s new is that this raytracing is rendered in real time, at (for example) 60 fps.

      1. Karma The Alligator says:

        Yes I know, hence my “apply to games”.

    2. Agammamon says:

      It’s easier to get used to than you might think.

  15. Alberek says:

    That minecraft looks very pretty… all those sharp colors…

  16. Jabberwok says:

    The funny thing is, I’ve been playing Control for the past few days; on the lowest graphics settings with ray tracing off, of course. And I genuinely cannot tell the difference between these screenshots and the game I’ve been playing. I think I would have to look at the scenes side by side to notice at all. The mirror-like reflections should be obvious (though I don’t know if there’s some sort of low-end replacement for that), but I haven’t noticed anything lacking.

    It’s also funny that a game that can show off ray tracing chose to not put any mirrors in the bathrooms. Having functional bathroom mirrors again is like 80% of what I want from this tech.

  17. Michael says:

    I placed glass blocks in front of black concrete and that created a mirror. This is basically how you build mirrors in real life. You put a reflective layer against a dark background.

    This is how you construct unintentional mirrors — your window with lighting inside and night outside will show a reflection of the inside. But it isn’t how you build mirrors that are meant to be mirrors. Those are just an opaque reflective layer; as I understand it, an ordinary bathroom mirror is generally a sheet of silver (to show the reflection) with a layer of transparent glass in front of it (for physical protection — although glass is reflective, that is a disadvantage, not an advantage, in its use as the outer layer of a mirror, for the same reason that the reflectivity of glass is a disadvantage in its use as the substance of a window). You can see the same effect without the unnecessary glass by looking at your reflection in a stainless steel knife or spoon.

    Something similar happens with gemstones. A gemstone setting provides the gem with a reflective background, so that more light will bounce back out of the top of the gem, making it look extra sparkly. You could use a dark background instead, but that would make the stone look muted.

    1. Philadelphus says:

      I was going to make the same comment, though I’d wager bathroom mirrors are aluminum rather than the much-more-expensive silver (but I’m not completely sure on this point). At least, the mirrors in telescopes are usually aluminum (for visible and near-infrared telescopes, anyway). And for them they actually layer the aluminum on top of the glass to remove any light losses from it. (You have to remove and reapply the aluminum every so often due to corrosion from the atmosphere over time, but that’s an acceptable trade-off.)

    2. Duoae says:

      I was actually going to make the same point but then I really took a look into how they construct mirrors and found that modern mirrors are still “transparent” through the silvering layer (whatever material it is) since it’s a deposition of metal particles from solution. In which case, you need a cover or dark background to stop light from entering through from the other side. As Philadelphus mentions, this layer also protects from oxidation of the metal.

      I figured this gave Shamus enough leeway in his description.

  18. Hey Shamus, regarding the “One second of footage is worth 60,000 words” challenge of making the article as useful as the video – hybrid articles/videos is something I’ve been experimenting with as well, and it’s worth noting that there’s a middle ground here through the use of embedded video. Thanks to modern video compression and the HTML5 video tag, you can embed short videos and have them act like animated GIFs, only with more colors and taking less bandwidth (commonly called “gifv” though this is a bit misleading as no GIFs are involved). If you’re gathering and cutting the relevant footage for the video anyway, it’s not much extra work to also cut versions for the article, and then you can have a couple of seconds of footage strategically placed to illustrate what you’re writing about.

    You can see an example in an article of mine which is about rhythm games and really needed to show things in motion, so it’s got a bunch of mute couple-second video loops instead of static images. I have them acting as GIF-like as possible – no sound, autoplay, autoloop – but you can configure all of this in the HTML. (I went back and forth on the autoplay – some readers find the motion distracting, but since the videos don’t have the stuff you normally see in a YouTube embed, if they don’t autoplay some readers won’t even realize they’re videos. It’s a trade-off and I polled several friends before reluctantly deciding to have them autoplay.)

    1. tmtvl says:

      That produces a lot of lag on my smartphone. You may wanna rethink it.

      1. Thanks for the data point – my audience is pretty small so I don’t get much of this.

        It’s also worth noting that my example article has fully fifteen embedded gifv videos. Presumably the impact is also lessened if there are fewer.

        Another solution might be to disable autoplay and basically put in YouTube-style title and play button overlays on the videos, so people know they are playable. (I haven’t looked into this in detail yet so while I’m fairly confident it’s possible I’m not yet sure how difficult it is.) Maybe with a clearly marked timeline showing the videos are short, and a mute icon if they have no sound (as in my example)? I’m not sure how many readers would be reluctant to play a video in the middle of an article if, for all they know, it might be several minutes long and require sound.

        1. Janus says:

          It works fine on my Pixel, although it takes a few seconds to load when I scroll.

  19. Agammamon says:

    This is why the lighting in that game was so harsh.

    Stuff like this is why artstyle is still important – in Doom 3 the harsh lighting works because it fits the way the environment is made. It’s an industrial type of area without much consideration for aesthetics or human comfort.

    In the end, I think this is one of the reasons why this version of Id’s engine lost out to Source – there were all sorts of neat tricks you could do with dynamic lighting in it but you were much more limited in the sorts of good-looking environments you could create with it compared to Source.

  20. kdansky says:

    “Everybody decided that they wanted 60 frames per second.”

    No. Seriously no. We didn’t suddenly want it. We wanted it back.

    Everybody *had* 60 Hz in the 80’s. Super Mario ran at 60 Hz! The SNES, Gamecube and N64 ran (nearly) all their games at 60 Hz (sometimes 50 Hz in PAL mode). It was basically unheard of for games to run at a choppy 30 Hz before the PS2, on which some games ran at 30 (Shadow of the Colossus) and some at 60 (Devil May Cry). Only in the next generation, the Xbox 360 and PS3, did we see all game developers (except Nintendo) throw reason out the window and produce these awful 30 Hz “cinematic experiences” for ten years, before coming to their senses again recently when PC gaming pushed for 120 and 144 Hz instead, and kept giving developers so much shit over bad ports that things are finally looking up again.

    Or rather, the issue was probably marketing. 30 Hz is enough for screenshots and advertisement (at the time), and so responsiveness and fluidity took the backseat to engines that looked good in adverts. That and gross incompetence (FromSoft and Capcom both produced games that ran at 30 Hz when the machines were capable of pushing 60 easily).

    Also on a different note: Those clips from Control look awful. Super blurry (motion blur + too much AA + low resolution it seems to me), super washed out (do the devs know that there are more colours than blue and red?), and the choppy 30 Hz of the recording doesn’t help either.

    1. Timothy Coish says:

      For the second time in as many days, I wish this website had an upvote feature. Yes, the idea that gamers ever stopped caring about 60 fps is just absurd. It’s actually even worse than that, because we used to get 60 Hz on CRTs, which are vastly better for any given framerate, because the low persistence of the image means that your brain doesn’t get confused and think something is stop-starting when you jerk the camera around, which causes a bad feeling even if you can’t put your finger on it. Modern LCDs with 120+ Hz and special low-persistence modes get some of this back, but they’re still not nearly as nice as the old CRTs, even at lower Hz.

      https://www.youtube.com/watch?v=V8BVTHxc4LM

      Frankly, I never thought that 60 FPS was good enough, and I would get eye strain really quickly from games, so I almost stopped playing video games entirely for almost a decade. It just physically feels bad. 30 FPS is just absolute garbage, and as far as I’m concerned, is a broken experience.

      But you’re absolutely right, the actual purpose was to sell games in screenshots. John Carmack once said something to the effect of “the priorities of these modern games are so unbelievably wrong, there’s some guy talking about the lighting being physically accurate while there’s a tear running down the screen and it’s running at 60 FPS.” Framerate and resolution are absolutely the most important graphical features of a game, but they’re also nearly impossible to sell on the back of the box. The same is true for visual tearing and aliasing.

      I knew a guy who works for a mobile game studio, who said they have a screenshot feature where you hit a button and the game saves a frame at 8K resolution. You can then do whatever you want with that image, downscaling it (even for 4K shots) to get a totally unrealistic impression of the game, with all aliasing and imperfections removed. But hey, if it sells. These are marketing-based decisions, totally divorced from user experience.

      1. Nimrandir says:

        Again, I find myself baffled by my own lack of standards. I spent years jumping from 60-fps fighters to 30-fps shooters without ever registering a distinction. I’m not even sure I could have told you the difference if people hadn’t brought up the frame rate discrepancy. If the frame rate suddenly drops, that I would notice, but internal consistency has always been sufficient for me.

        I guess that means you can blame me for all these pokey frame rates? It’s okay. I’m a math teacher; I’m used to it.

  21. Paul Spooner says:

    Also, shadows from nearby light sources shouldn’t have a hard edge like this. Nearby lights should create a penumbra, a soft edge where we transition from light to shadow.

    So you know how, when you know something about something, and then someone says something wrong about it because they can’t be bothered to become an expert in what you’re an expert in because it just doesn’t matter, and it bothers you because what they said was demonstrably wrong, and you feel kind of offended that they implicitly devalued your expertise by demonstrating that they think it doesn’t matter because they couldn’t be bothered to even think about what they just said?
    Do you know how that feels Shamus?

    Because I do. I know how that feels.

    1. Janus says:

      I can’t follow this joke…

      1. Michael says:

        A penumbra is created by a large, non-point light source. Conceptually, for each point within the area of the light source, that point causes every object to cast a faint but perfectly crisp shadow, and then, for every point in any part of any shadow, you add up the darkness of each shadow that falls on that point to get the local darkness. If no light from any point on the light source reaches a local point, that local point is perfectly black. If no point on the light source experiences an obstructed path to a particular local point, that local point is perfectly lit. If half the points on the light source are obstructed from a particular local point, that local point is half lit.

        In practice, what you do in physics class is take one point on either edge of the light source and use those two points to cast two shadows. You call the area where those two shadows overlap the “umbra” and you call the area where one shadow reaches but the other doesn’t the “penumbra”. You can see that the penumbra in this model is an area between two lines that intersect at the edge of the shadow-casting object, projected onto the surface on which the shadow falls. The following affect how much of a penumbra you see:

        – The physical size of the light matters a lot. A smaller light casts crisper shadows.

        – Distance to the light source lessens the penumbra by making the light source smaller, a better approximation to a point.

        – Distance to the shaded surface embiggens the penumbra, because two intersecting lines grow farther apart the farther you go from the intersection.

        – On the other hand, if the shaded surface is far away, nobody cares what kind of shadow you should be drawing on it, because they can’t see it anyway.

        It’s easy to make crisp shadows by playing with a nearby-but-tiny light source. And it’s easy to create shadows that look like they have sharp boundaries (though they don’t; no real shadow has a sharp boundary if you look finely enough) by similar means. But overall I don’t see the problem with Shamus’ description. Closer light sources really do create more penumbra than distant sources, and penumbras really are soft edges, and… that’s the entirety of the pulled quote.
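The point-sampling idea in the explanation above is essentially how ray tracers compute soft shadows in practice: sample the area light and average visibility. A minimal sketch, where the occluder test is a stand-in for real intersection code:

```python
# Soft-shadow sampling: the fraction of light-source sample points with
# an unobstructed path to the shaded point. 0 = umbra, 1 = fully lit,
# anything in between = penumbra.

def lit_fraction(point, light_samples, occluder):
    """`occluder(point, sample)` is any callable (hypothetical here)
    that says whether the path from point to sample is blocked."""
    visible = sum(0 if occluder(point, s) else 1 for s in light_samples)
    return visible / len(light_samples)

# Toy setup: 10 samples across an area light at height 10; an opaque
# wall blocks every sample on the negative-x side of the light.
samples = [(-1 + 2 * i / 9, 10.0) for i in range(10)]
blocked_by_wall = lambda p, s: s[0] < 0
print(lit_fraction((0.0, 0.0), samples, blocked_by_wall))  # 0.5
```

More samples per light means smoother penumbras and more rays, which is exactly the cost trade-off the rest of this thread is arguing about.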

        1. Paul Spooner says:

          The difference between a close and a distant source is not the sharpness of the shadow, but the amount of divergence in the light rays, and therefore the size of the shadow. For the sun, which is very far away, the light rays are practically parallel, so the shadow is the same size as the casting object. For close light sources, you need to make the shadow get bigger as it is cast further from the occluding object, which is more of a dilate operation than a blur. It can also overrun the edge of the shadow mask, which makes things more complicated, but it is distinct from penumbral blur.
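The size-versus-distance effect described here falls out of similar triangles: for a point light at distance d_light behind an occluder, with the surface d_surface further on, the shadow scales by (d_light + d_surface) / d_light. A quick sketch (all names are illustrative):

```python
# Point-light shadow magnification via similar triangles: the shadow of
# an occluder grows as the light moves closer or the surface moves
# further behind the occluder.

def shadow_size(occluder_size, d_light, d_surface):
    """Projected shadow size for a point light at distance d_light
    from the occluder, with the surface d_surface beyond it."""
    return occluder_size * (d_light + d_surface) / d_light

print(shadow_size(1.0, 2.0, 2.0))  # 2.0: nearby light, shadow doubles
print(shadow_size(1.0, 1e9, 2.0))  # ~1.0: sun-like distance, near-parallel rays
```

This is the dilate effect; the penumbra blur being debated in the sibling comments comes separately from the light having nonzero physical size.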

          1. Abnaxis says:

            The difference between a close and a distant source is not the sharpness of the shadow, but the amount of divergence in the light rays, and therefore the size of the shadow.

            Shouldn’t it be some of both when you’re talking about area light sources (which we should be talking about if we’re worrying about penumbras)?

          2. Michael says:

            The difference between a close and a distant source is not the sharpness of the shadow,

            …this is just wrong.

            but the amount of divergence in the light rays, and therefore the size of the shadow. For the sun, which is very far away, the light rays are practically parallel, so the shadow is the same size as the casting object.

            This depends heavily on the distance to the shaded surface and the angles between the light source, the shadow-casting object, and the shaded surface. For the common case where a person is standing upright on flat ground, lit by the sun or an overhead light, the shadow is vanishingly unlikely to be the same size as the person. You’d need the sun to be at an exact 45 degree angle.

    2. Duoae says:

      I think you have a point but I think it was a little harsh to say he “didn’t even think about what he just said”. It’s not a simple concept and a rather small mistake in conceptualisation*. Your comment actually sounds really angry so I hope it’s a joke that’s lost somewhat in the transition to text. As a scientist, I get frustrated with the majority of people all the time for their lack of understanding of various principles but I’d never say it was implicit that my knowledge was devalued through their misconception or lack of knowledge.

      In fact, in some instances, it’s an opportunity to correct or enlighten someone!

      *Because depending on how you visualise the problem/experimental setup, you’ll get a similar result.

      1. Duoae says:

        You know, on further reflection it’s more obvious it’s a joke to me. But MAN that is super dry, even by my standards!

  22. Borer says:

    As a person who is usually excited for new tech I just can’t seem to get excited for ray tracing. (And yes, I know that only the hardware is new and that ray tracing itself has been around for a long time – just not in real time.) Game devs have gotten so good at faking good lighting and good reflections that the gain from real time ray tracing seems so minimal to me; at least on the consumer side. The only reason those RTX on / RTX off comparison videos got made is because we’d not have seen any difference without the direct comparison. Like some people mentioned above: Were those scenes from Control ray traced? I just assumed so because that’s what the video is about. But honestly, I have no clue.

    Sure, ray tracing might make the developers’ lives so much easier in the long run but as a consumer I feel absolutely no need to replace my current graphics card with a new GPU that costs more than a good office PC. There just isn’t a single recent game where I’m sitting there thinking that the experience would be improved if I (and the game) had ray tracing available. And buying a $1200 GPU for Quake 2? I’m sure I could get that to run on a $40 Raspberry Pi.

    Or maybe I just haven’t seen the killer app yet? Yeah, I think I’m just going to wait to see what happens next generation.

  23. Narkis says:

    If I’m understanding the implications of the technology correctly, it will become MUCH easier for indie games to have AAA graphics once raytracing cards become widespread. If so, I’m very excited about the indies of 2025.

    1. Dude Guyman says:

      Remember that raytracing will only give indies access to better lighting, reflections, and other tricks like that. They still wouldn’t have the budget to produce 20,000 photorealistic background assets with 7 layer PBR textures and normal maps taken from 2 billion polygon sculpts and characters with hundreds of thousands of hours of animation work done to blend between their dozens of states of physically based inverse kinematic movement.

      So don’t expect miracles out of this, saying that indie games can have AAA graphics might be a bit of a stretch. But it certainly can’t hurt to have more tricks up their sleeve.

      1. Timothy Coish says:

        Yeah, also the shaders themselves. If you want character x to glow colour y according to some sine wave pattern over time, you’re gonna need to write that shader yourself. You can’t just have the rendering engine magically do that for you.

        Actually, I’m not sure this is even good for Indies. It’s good for the ones writing their own game engines, since the hacks used for Rasterization are complicated. However, those hacks are already done for you if you’re using a licensed game engine.
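The sine-wave glow mentioned above is a one-liner per frame; the point is that even under pure ray tracing you still author this kind of material logic yourself. A hypothetical sketch of the per-frame math (shown in Python rather than actual shader code):

```python
# Time-based glow pulse: the per-frame intensity a hypothetical shader
# would compute, oscillating between 0 and 1 over time.
import math

def glow_intensity(t, frequency=1.0, base=0.5, amplitude=0.5):
    """Glow at time t (seconds), pulsing `frequency` times per second."""
    return base + amplitude * math.sin(2 * math.pi * frequency * t)

print(glow_intensity(0.0))   # 0.5: mid-pulse at t=0
print(glow_intensity(0.25))  # ~1.0: peak a quarter-cycle in
```

Ray tracing replaces the lighting and visibility hacks; effects like this remain ordinary material code regardless of how the image is rendered.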

        1. Exasperation says:

          It’s worth noting that licensed game engines are already starting to move to ray tracing; I know that both Unreal Engine and Unity have started including ray tracing support in their engines.

  24. Higher_Peanut says:

    It feels weird for me to see raytracing in a programming context. I got too used to seeing it in physics and having to calculate stuff like lenses or thin layer diffraction.

    I think it will catch on faster than VR. VR still has to deal with having a terrible control scheme and making large numbers of people nauseous. Selling prettier looking screens worked before, I don’t see why it can’t again.

  25. D-Frame says:

    I usually don’t really care too much about graphics in videogames (in fact, I enjoy low-poly flat-shaded stuff), but the results of raytracing sure are fascinating.
    Also: Time to reinstall Deus Ex.

  26. Ayrshark says:

    Honestly, I’m not sure I care much about raytracing. I’d much rather have better directional audio and AI.

    1. RichardW says:

      Some of the most immersive sound design done in the last several years employed forms of raytracing; it’s becoming pretty standard for games to do collision checks against the environment to determine how things should sound. There are different degrees of this, with some approaches being automatic but less nuanced and others requiring really in-depth asset preparation.

      Used to be that everything was based on essentially trigger-volumes, drag a box around the room and set how much of the sound inside that box is audible outside and vice versa. It’s sort of shifted to a hybrid approach lately, with more being done realtime and a focus on being accurate. Things like Steam Audio now have developers assign surface properties to props and walls to determine the responses they have, physically simulating the behaviour of sounds depending on the size of the space, the thickness of a surface etc. That’s probably the more work-intensive solution since it actually requires you to “build audio” the way you would build lighting with radiosity, but realtime methods are in the works and function almost as well. VR was a big driver for that.

      Funnily enough Unreal Engine is resurrecting DSP effects with its latest updates, something that kinda fell out of fashion when sound cards went away. Occlusion is a big deal, and something they have a bunch of options for (even if it’s still a bit dodgy). The Last of Us used this a lot with its directional audio, dynamically muffling the sounds of people who are behind a wall in realtime, without the hard transitions you can hear when needing to rely on brushes placed in the level to modify what you’re hearing. So, there’s been plenty of innovation over the last generation with sound, it’s just time-consuming to do and using raytracing for things like occlusion only became practical once we’d all moved to multi-core machines at the tail end of the PS3’s life cycle. It’s a pity the CPUs in the PS4 / Xbone were so weak or we might’ve seen more of this stuff, they ended up offloading audio to the graphics chip on TLOU Remastered.
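The occlusion approach described above can be approximated by casting a handful of rays from the listener toward points around the source and averaging the results, so partial blockage gives a partial muffle instead of a hard transition. A toy sketch (the wall test stands in for real geometry queries; no real engine API is used):

```python
# Ray-sampled audio occlusion: average several listener-to-source rays;
# a fully blocked path returns muffled_gain, a fully clear one 1.0, and
# partial blockage lands smoothly in between.

def occlusion_gain(listener, source, is_blocked, offsets, muffled_gain=0.3):
    clear = sum(0 if is_blocked(listener, (source[0] + dx, source[1] + dy)) else 1
                for dx, dy in offsets)
    fraction_clear = clear / len(offsets)
    return muffled_gain + (1.0 - muffled_gain) * fraction_clear

# Toy wall at x = 0: a path is blocked when listener and sample point
# sit on opposite sides of it.
wall_blocks = lambda l, s: (l[0] < 0) != (s[0] < 0)
offsets = [(-0.5, 0), (0, 0), (0.5, 0)]
print(occlusion_gain((-1.0, 0.0), (1.0, 0.0), wall_blocks, offsets))   # 0.3
print(occlusion_gain((-1.0, 0.0), (-2.0, 0.0), wall_blocks, offsets))  # 1.0
```

Real middleware like Steam Audio does far more (frequency-dependent transmission, material properties), but this is the core trick that replaced the old trigger-volume method.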

  27. Sleeping Dragon says:

    Four days late to the party, but I’d like to thank you for the article. I’m usually less interested in your technical stuff than your game analysis or industry commentary, but this actually fairly succinctly (and I imagine with some simplification) explained to me the effects this could have on games, which is interesting.

    Generally, while I’m not immune to a pretty game, I claim to be a “graphics don’t matter” person, part of it being that I grew up with old games (when they were still new games), part of it probably being unable to afford frequent PC upgrades or consoles for most of my life, and part of it being into other aspects of games than just looks. Still, at the risk of being overly optimistic, I think raytracing as you explained it has the potential to affect at least some aspects of game development. Maybe some of the things that were dedicated to cutscenes or QTE sequences could be shifted into gameplay, since it’d be easier to still make them look cool without using pre-rendered shots. Maybe, like some people above claimed, some indies could do interesting things with it, especially since I’ll point out realistic lighting doesn’t necessarily mean realistic graphics, or at the very least it could make achieving certain effects easier for them, which is something I always consider a good thing.

    1. Nimrandir says:

      You may be right, but I thought the cutscene/QTE issue stemmed from either (1) the inability to animate the player character properly, or (2) the controls not allowing for the execution of fancy-pants moves. Would raytracing potentially free up resources for these? I don’t know, as my programming skills are nowhere near sufficient to judge.
