Ray Tracing Is Here. Apparently.

By Shamus Posted Sunday Nov 25, 2018

Filed under: Programming 64 comments

I just got done recording the Diecast. At one point Paul and I answered a question regarding where game engines might go next. On the show, I predicted that ray tracing was the next big thing, but we weren’t there yet.  I was thinking of a couple of videos from a few years ago like this one and this one. Those are from 2013 and 2014. It looked promising, but it was clear the technology wasn’t quite ready yet.

But then I discovered that I was completely wrong. Ray tracing isn’t a couple of years off. It’s here. It’s happening right now. Check out this video, which shows off some footage from Battlefield V and Metro: Exodus:


Link (YouTube)

To be fair, this isn’t full ray tracing. From what the devs have said, this is a sort of half-step between what we’ve been doing and full ray tracing.

Here’s how it’s worked in the past… 

Call me a graphics hipster, but Half-Life 2 still looks fantastic to me.

We used triangles. All the crap in the world is made out of triangles, and we draw them to the screen. It’s that simple. Well, not “simple”. It’s still a monumentally complex process that’s challenging even to people with genius-level IQs, so I guess calling it simple isn’t really fair. A lot of sorcery goes on between the point where the game loads all the polygons into a buffer and the moment where it shows up on your monitor. Still, it’s conceptually simple. You can think of those triangles like brush strokes on a canvas. Some of them are drawn over others, and the image is built up like this until it’s ready for your eyeballs.

For contrast, ray tracing[1] draws each pixel on the screen exactly once. For each pixel, it projects a ray outwards from the camera, seeing if that ray collides with any of the stuff in the scene. It bounces this ray from surface to surface, adding information to this particular pixel's lighting calculation.
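
To make that concrete, here is a minimal sketch of the per-pixel loop. All the names are made up for illustration and the actual scene query is stubbed out; the point is just that every pixel gets exactly one primary ray.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for the real scene query: does this ray hit anything, and if
// so, what color comes back along it (after however many bounces)?
// Hard-coded to "miss" here so the sketch compiles on its own.
static bool TraceRay(const Vec3& origin, const Vec3& dir, Vec3* color) {
    (void)origin; (void)dir; (void)color;
    return false;
}

// One ray per pixel, each pixel written exactly once.
void RenderFrame(int width, int height, std::vector<Vec3>& framebuffer) {
    const Vec3 camera = {0, 0, 0};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Aim a ray from the camera through the center of this pixel.
            // (A real camera model would account for FOV and orientation.)
            Vec3 dir = { (x + 0.5f) / width - 0.5f,
                         0.5f - (y + 0.5f) / height,
                         1.0f };
            float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
            dir = { dir.x / len, dir.y / len, dir.z / len };

            Vec3 color = {0, 0, 0};        // background color on a miss
            TraceRay(camera, dir, &color); // bounces happen inside here
            framebuffer[y * width + x] = color;
        }
    }
}
```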

Based on what the developers are saying in these presentations, it sounds like they’re combining these two methods. The lighting is ray traced, but the polygons… aren’t? I don’t know. I’m having trouble picturing how this is supposed to work because I’m so far out of my depth.

I will say that while Battlefield V and Metro: Exodus look impressive, they don’t have that same impact you get from fully ray traced scenes.  This stuff looks cutting edge, but you can tell it’s not a photograph. I see two big improvements with this new system:

  1. The reflections are as sharp as the rest of the scene and aren’t approximations. I guess this means games might start giving us mirrors again.
  2. As the Metro demo showed, dark corners won't be over-illuminated. The way we do things now, we have a system based on hard shadows. If you look under a bed, then the area hit by the room light will be bright and the area in shadow will be uniform. In the real world, areas directly under the bed should be very dark, while the area near the edge should be a little brighter due to secondary illumination. Light bounces off the walls and lights up those shadows, but the deeper you go the less of that secondary light will reach. This probably sounds like obsessing over a trivial detail, but this is one of those things that makes the lighting look a bit… odd. It also sucks the contrast out of the scene because you don't get any deep dark corners. The spot of floor at the back of the closet is getting just as much ambient light as the spot on the wall where your hand is casting a tiny shadow. Programmers have spent the last few years cooking up ways to hide this problem. If you've ever seen "ambient occlusion" in the graphics options, that's what this was about. Between that and making shadows blurry, they've managed to make a system that looks pretty good. (There's a small code sketch of the idea below the image.)

On the left is the simple brute-force lighting system like we've been using. (Except today programmers would fuzz the shadow edges a bit.) On the right is the more nuanced system that takes the environment into account.
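
If you're curious what the "ambient occlusion" trick boils down to, here's a rough sketch of the idea. This is my own illustration, not how any particular engine does it; real SSAO approximates the same thing in screen space using the depth buffer instead of casting actual rays.

```cpp
#include <cstdlib>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Stand-in for the real visibility test: does a short ray from this point
// in this direction hit nearby geometry? Always "no" here, just so the
// sketch is self-contained.
static bool Occluded(const Vec3& point, const Vec3& dir, float maxDist) {
    (void)point; (void)dir; (void)maxDist;
    return false;
}

// Crude ambient term for one surface point: fire a handful of random rays
// into the hemisphere above the surface and count how many escape. A point
// deep under the bed blocks most of them, so it stays dark; a point near
// the edge lets most of them escape, so it picks up more ambient light.
float AmbientTerm(const Vec3& point, const Vec3& normal, int samples) {
    int unblocked = 0;
    for (int i = 0; i < samples; ++i) {
        // Random direction, flipped into the hemisphere around the normal.
        // (Not uniformly distributed or normalized; fine for a sketch.)
        Vec3 d = { rand() / (float)RAND_MAX * 2 - 1,
                   rand() / (float)RAND_MAX * 2 - 1,
                   rand() / (float)RAND_MAX * 2 - 1 };
        if (Dot(d, normal) < 0) { d.x = -d.x; d.y = -d.y; d.z = -d.z; }
        if (!Occluded(point, d, 1.0f)) unblocked++;
    }
    return (float)unblocked / samples; // 1.0 = wide open, 0.0 = boxed in
}
```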

Anyway, just a heads up that my talk during tomorrow’s podcast is going to be drawing from information that’s about 4 years out of date.

Dear games industry: Good job. But don't make me upgrade my graphics card for this. It's nice and all, but it's not "five hundred and ninety-nine U.S. dollars" nice.

Footnotes:

[1] There’s ray tracing and path tracing, but the differences between the two are too subtle for this discussion. (Also, I’m too lazy to research those differences right now.)



64 thoughts on “Ray Tracing Is Here. Apparently.”

  1. Weirdly, I kind of reached my threshold for caring about better graphics some years ago. Don’t get me wrong, it’s cool, but I find the aesthetic style of the game to be much more important than raw graphics.

    Now, on the other hand, if they can animate people and things in a way that they aren’t stiff and janky and full of clipping issues . . .

    1. Echo Tango says:

      I’d much rather have a game that looks like WindWaker or Okami, than a fully bling-rendered game like shown in the video Shamus linked. Sure, it looks cool, but the game has no character – it just looks like I’m walking through an old museum of army artifacts.

    2. Jabberwok says:

      We’re getting to the point now where I can’t even tell the difference. I could tell something changed when he turned ray tracing off in the video, but I would not have even noticed the change in game if I wasn’t looking. With a lot of recent games, I can turn settings halfway down for a significant performance boost and barely notice that anything is missing.

      1. Echo Tango says:

        I had to squint pretty hard, and listen to the commentator point out the specific artifacts, before I noticed. It’s like the transitions from VHS -> DVD -> Bluray -> 4k.

        Even the DVD -> Bluray transition was getting to be less noticeable, and I literally cannot tell the difference between 1080p and 4k at normal viewing distances. I was in a TV shop a few years ago, helping my mother pick out a new TV for her house, looking at two 5-foot screens (not huge, but big enough to show small details). I couldn't see the difference between the two screens / versions of the same nature documentary without getting my eyeballs within 6 inches of the screen. I was about 30 years old at the time, with laser-corrected vision. Me, I just use a "cheap" 1080p board-room / business projector as a TV, and it looks fine. :P

        1. Jabberwok says:

          The only thing I notice about modern HD TV is that some combination of the frame rate and resolution makes every show look like a 90s soap opera to me.

          1. The interpolated frames thing?
            Or are you talking about the glowing blurry filter effect they used?

            What you are talking about is the "video effect"; people complained about The Hobbit at 48 FPS too. We've gotten too used to 24 FPS, judder during panning in movies, and high amounts of motion blur. We're trained from an early age to think it's cinematic.

            If 24 FPS were so much better, we'd be seeing games running at 144Hz (for low input latency) but rendering at 24 FPS, with "AI" motion blur applied to simulate low camera shutter speeds and panning judder.

            Personally Id’ rather not see that. I’m the kind that turn off motion blur in games. My eyes un-focus and “blur” stuff in my periphery naturally. I also turn off Depth Of Field as no game knows exactly what I’m looking at (my eye’s aren’t glued to the mouse pointer/screen center point). I guess displays would eye tracking could improve this but they are expensive and rare.

            1. Jabberwok says:

              I always turn off motion blur in games. But framerate is really only an issue in games for me, not movies or TV.

            2. Mephane says:

              Same here. I always disable motion blur and depth of field. I don’t even care whether they are used for artistic effect, because I think that only works really well for photography (and thus in games which have it: photo mode). For movies and video games, it’s more like a vestige from times when it was the result of the limits of the technology.

              1. Paul Spooner says:

                DoF can help communicate where the game expects you to be concentrating. Motion blur can help to more quickly assess velocities. But they only work if they are informed, instead of just throwing the filter over the whole scene. Full-screen motion blur is terrible IMO, because it only tells you how fast you’re turning, which isn’t new information.

                1. Zak McKracken says:

                  I’d say that both of them have their place in cutscenes (as they do in movies). In gameplay, the only justified use of limited depth of field I could imagine is if there’s a cutscene-like moment where the game designer needs to get the player to focus on something particular, and at the same time knows that nothing of importance is hidden by the blur. Similar, for motion blur, the only justified use is maybe to “simulate” being drunk or in some similar state, or possibly for very fast-moving small objects, to make their trajectory visible if they’d otherwise just spend 3 frames or so on-screen. Otherwise it’s always better not to motion-blur and increase framerate, then let the players’ eyes do the blurring.

                  Actually, maybe depth of field could be used as a limiter on drawing distance? Replace the fog with limited depth of field, so you can blur the background and render it with less detail? Might work, but only if all the relevant action is still in focus.

      2. Agammamon says:

        Given the performance hit BFV is taking with RTX on (up to 50% FPS loss) – it's funny that it has been at the forefront of this. You want to play MP? You're turning it off. And nobody plays the SP campaign. Metro and Mechwarrior are better candidates – both games benefit from high FPS, but you're not being hampered if you're 'only' playing at 60.

        But the real advantage here is not to us, the players. As you guys point out – you’re seeing improvements in marginal areas and often areas where you’re distracted by other things anyway. Who’s looking at the fire’s reflection in the side of the car? You’re being shot at, shooting, taking cover, etc. Ain’t nobody got time for that.

        It does help developers immensely. It's easier to do the lighting for a place when you just need to place point emitters and don't have to worry about shadows and 'fake lights' (ambient lights with no visible source). Just build, put your lights in, and go.

          1. "Just build, put your lights in, and go" except it's not, though: you have to have a non-RTX mode in the games, as RTX cards are only a tiny percentage of the hardware out there. You'll need renderer paths for DX11 and DX12 and DX12 w/DXR (raytracing). That's an increase of 33% in development work.

          You also can’t skip doing stuff in non-RTX mode either as people will call you out if a “RTX” game they buy looks worse than non-RTX games does today. So they can’t just plug and play.

          You can see this in one of the Nvidia presentations: they toggle RTX on and off, and with RTX off you can clearly see a bunch of shaders and other effects missing from the RTX-off picture, missing several of the tricks devs do to simulate various things.

          Unless Nvidia's RTX SDK comes with a "plug'n'play" fallback path (which I haven't seen anything about yet), devs are still gonna have to maintain a non-RTX renderer path.

          1. Agammamon says:

            You have to do that *now* – because, as you say, RTX capability has no penetration yet.

            Yet.

            As it becomes more ubiquitous then you’ll see the non-RTX art pipelines dry up.

            1. Mephane says:

              Yeah, I think the point of RTX capabilities in the 2070/2080 and various new and upcoming games is not so much about improving their image quality, because the difference isn't that big with this hybrid approach; it's to build momentum for the technology. NVidia are fully aware that raytracing will eventually replace rasterization (see my comment below regarding performance considerations for the future), and when that time comes, they want to be the market leader in that tech. It's some smart long-term planning, really.

        2. Nick Powell says:

          I think the thing people are forgetting about BFV’s performance is that the RTX stuff was tacked onto a game that was already pushing the limits of what modern hardware can do. So you’re running one of the most hardware-intensive games ever made, and also trying to run raytracing on top of that (since it doesn’t really replace anything besides SSR in their implementation).

          Basically what I’m trying to say is that if the game had been designed from the ground up to use raytracing, it would probably be designed to make better use of it and run better as a result

    3. TouToTheHouYo says:

      +1 to this. I've always personally preferred aesthetics myself as well. Raw graphical prowess is cool for the initial "wow" factor, but then videogame jank inevitably sets in and breaks the immersion. The more realistic the game looks, the more jarring and easily noticeable the jank becomes. It also ages far faster and worse than games that rely primarily on abstract aesthetics.

    4. SKD says:

      I’m of the same opinion regarding the graphics debate. A coherent art style is more important than fancy tricks and gimmicks.

      1. beleester says:

        This is always true, but some aesthetics require a certain amount of graphics power to make them work. If you want a game like Mirror’s Edge, set in a super-modern city of glass and steel, then you’d better be able to render some shiny glass and gleaming steel. Those brightly-painted colors wouldn’t work as an aesthetic without the lighting engine that makes them convincingly warm.

        Similarly, if you want a grim and gritty war game, it helps if you can convincingly render some grit and grime. Abstract aesthetics may age less quickly or require less power, but that doesn’t make good, realistic graphics a “gimmick.”

    5. Wouter says:

      True, but mirrors would be awesome. I recall having so much fun with that in Duke Nukem 3D.

  2. Nick Powell says:

    Based on what the developers are saying in these presentations, it sounds like they’re combining these two methods. The lighting is ray traced, but the polygons… aren’t? I don’t know. I’m having trouble picturing how this is supposed to work because I’m so far out of my depth.

    I think an important thing to point out is that rasterisation already is raytracing, just in a very constrained form. Since you know that every ray cast out from the screen will be parallel and arranged neatly in a grid (aligned to the grid of pixels on the screen), you can cache the result of all of those raycasts into a depth/image buffer, and you don't have to test every single ray individually against every single triangle in the scene. However, this doesn't work as soon as the rays diverge from the neat grid you originally cast them in, which is why it's so hard to simulate what the rays are doing after their first bounce.

    What this new hardware offers is a much faster way to do those second/third/fourth/etc. bounces where the rays are all shooting off in random directions, but rasterisation is still by far the fastest way to do the initial set of pixel-aligned raycasts. So that’s why games still rasterise the whole scene in a first pass, then add further reflections with the new hardware.

    This doesn’t exactly apply to everything you can do with the new hardware but it hopefully explains part of it

  3. Awetugiw says:

    I must admit that I don't quite get the contrast you seem to set up between "ray tracing" on the one hand and "triangles" on the other. When drawing triangles on the screen, you still need to determine, for every pixel, which triangle is the closest triangle in that direction. In other words, you have to trace "rays" to or from the camera perspective.

    Certainly, using triangles makes this computation easier; as long as everything is a triangle (or, more generally, a polygon), determining the first object in a direction can be done by solving a set of linear equations, which is a lot easier than the non-linear equations you would get with other shapes. But I don’t see how triangles would be in contrast to ray tracing.

    1. Albert Hexer says:

      That’s because Shamus botches the explanation quite a bit :/

      Also, there is not one raytracing scheme, but several (heck, just look at the POV-Ray manual), and what the current graphics cards do is not really fully what traditional raytracers do. Full traditional raytracing also makes no sense for games, for various reasons.

      How lights are traced is a second, different topic (the old render pipeline did it differently than current shaders, and raytracers have several different ways as well), how shadows are generated a third (similar to light, different models exist), soft/hard shadows a fourth (you can simulate them by sending multiple rays per light, or with a different lighting model altogether), radiosity a fifth and ambient occlusion a sixth (though radiosity and ambient occlusion try to solve similar issues).

      Raytracing is also not the magic switch to make everything suddenly photorealistic, as it’s often portrayed. It too relies on tricks to get around gaps in its models, just different than the ones we’ve used in real-time 3D. For example, proper caustics need special handling; that’s what POV-Ray uses “photon mapping” for.

      1. “Raytracing is also not the magic switch to make everything suddenly photorealistic”

        True. Take Battlefield: the reflections there are way too clear/crisp. It seems the windows/glass and puddles lack a material for the reflection.

        Ironically what we see in the Battlefield tests now seems more suited for mirrors, yet Hitman 2 (2018) does mirrors really well by more or less mirroring the gameworld (possibly using a second camera).

        This does mean Hitman renders things twice so it uses twice as much GPU power. Then again, turning RTX on in Battlefield halves the framerate. So each method seems equally GPU intensive. Only that the way Hitman 2 does it works on all current DX12 and DX11 hardware, while RTX is only on the top three cards from Nvidia.

        So if you can fake it but have it work on all hardware then that makes much more sense and it’s only one renderer path.

        Nvidia is fighting an uphill battle. Ideally Nvidia should have coordinated with AMD and Intel and Sony and Microsoft and made sure all (soon three) GPU makers launch with raytracing together, that the upcoming consoles also have it, and that the entire card range supports it from the high end to the low end. The entire market needs to move together for stuff like this.

        Vulkan now has support for Nvidia RTX via an extension https://www.phoronix.com/scan.php?page=news_item&px=Vulkan-1.1.85-Released
        This is still proprietary. I guess once AMD (and Intel) catch up, Vulkan will get an official raytracing API. When that happens is when the industry in general will start picking up the pace.

        I mean, not everybody who has Windows 10 has bothered to update to the fall 2018 version, which means they can't use RTX even if they have an RTX card. This product launch has been just really weird IMO.

  4. (I hope others will chime in if I'm wrong about stuff here.)

    The Geforce RTX 2080ti, RTX 2080, and RTX 2070 have "raytracing"; it is unknown if the (RTX/GTX) 2060 will have this.
    This makes raytracing a premium feature, and only one game supports it (Battlefield; Metro should have it but it's not done yet, and who knows when Tomb Raider gets its RTX patch out).

    Somebody asked one of the heads at AMD about DXR (DirectX Raytracing) support, and he said that yeah, he'd like to see AMD answer, but it would only make sense if the entire line from the top to the bottom supported it. So "Raytracing" won't be affordable for the masses until AMD answers with a new line that supports DXR (I'm guessing Vulkan will get raytracing support too); this will put pressure on Nvidia to release midrange/lowend cards with RTX. The number of people that have RTX cards is smaller than a mousefart in the market, since they are high end enthusiast cards only, so unless Nvidia throws a bag of money around, few game developers will add support for it (it's not Plug'n'Play, it doesn't "Just Work").

    Another thing to note is that the RTX cards aren't doing real raytracing. Path tracing was mentioned, and I seem to recall this is correct: the RTX cards use a hybrid raster + path tracing method. Real ray tracing traces from each light source.
    Nvidia's method starts at the camera and traces from there out to all surfaces; they use one or two "rays" until they hit a polygon (rather than a pixel, as raytracing would). The rest of the details I'm unsure about. They do at least one more bounce from that polygon to the next, and they might do a third color/light bounce (but not reflection), so I don't think you can place two mirrors in a game and get infinite reflections; I'm guessing you'd get one bounce of the reflection but maybe one more on lighting/color/tint.

    One issue with the RTX cards and Battlefield (the test we've seen so far) is that the framerate tanks; you lose half your FPS with raytracing enabled. Ouch. Which means that the RTX 2080ti now becomes a GTX 1080 with slightly "better" reflections.

    I use "better" in quotes as only one or two rays are used, and there are grain/dust or stripes and other weird artifacts. I saw one video where the raytraced flame reflection on a gun was floating in midair (unless Battlefield is pretending the camera/player view is through a film camera and that was supposed to be a glass reflection). Considering lens flare is used (which never happens with your eyeballs), this could be the case.

    Battlefield had a bug (has that been fixed?) where the raytracing settings of medium and high were the same (there are low, med, high, and ultra, I think?).

    Another drawback is that if you move you see this grainy effect (especially in water). I call these "raytracing dots"; you can see them in real raytracing software, where the dots vanish as the passes are completed.
    I forget if it was Digital Foundry or somebody else that said that two passes are just too few for it to look as good as it could.
    If you stand still Battlefield looks great, but do you ever stand still in a game like Battlefield? In competitive play you'd turn it off, and maybe use RTX on Low in single player.

    One can speculate that Nvidia planned a full RTX lineup, but with overstock from overproduction during the mining craze they had to get rid of old stock, so they delayed or dropped the mid to low end (and the mid range is the mass market, and where AMD is competitive), which is unfortunate and a mistake IMO. They should have delayed the RTX launch; that way game devs would have been better prepared and the drivers better (I doubt more optimized drivers or a better optimized Battlefield can regain that 50% FPS loss though). Battlefield also seems unable to do RTX on particle effects (maybe this will get fixed, but that might drop the FPS even more).

    Modern games are so good at faking it that one often does not reflect (pun intended) on the reflections; most games use screenspace reflections. Battlefield does this when RTX is off, and if your gun blocks a part of a truck then that part of the truck vanishes from the reflection in the puddle. GTA V's are a tad odd if you get too close to a window (it looks like a skybox/worldbox that is zoomed a tad wrong).

    Also note that Hitman 2 (2018) has reflective mirrors that reflect the part of the world you don't see (i.e. it's not screenspace reflection), but they are not using raytracing.

    AMD will answer this call to the tech arms race, and Intel will (if they are smart) launch their first cards with support for this stuff (it would be silly to spend millions if not billions on making GPUs and not do that). Intel probably won't have anything until like 2020. AMD will hopefully have something new next year (2019), but it could slide to 2020 as well.
    2020 will be a weird year. There might be new consoles around the corner (Xbox and Playstation, with "Navi" and Ryzen based APUs from AMD), but they probably won't have raytracing support. We might have Cyberpunk 2077 (it probably won't have RTX support though).
    Next-next gen will be 3-5 years after that, and I'm gonna guess they will have DXR/Vulkan RT (VRT?) support; at that point there should be full GPU lineups from AMD and Nvidia and Intel supporting it, and more than a dozen AAA games using it.

    One interesting thing that many gloss over, though, is the RT cores in the RTX cards. Sure, they can be used for real time raytracing in games, but rendering software can utilize them (if it supports them) to make lightning fast raytracing previews, or to speed up the full scene raytracing itself. I think (or speculate) that the RTX consumer cards (and the RTX Quadro equivalent) will be more popular among professionals.

    Now don’t get me wrong, the RTX cards are good cards (on paper), a RTX 2080 is the same as a GTX 1080ti I think, and a RTX 2070 the same as a GTX 1080. So you do get a performance upgrade for non-RTX gaming. But you also pay the price, I don’t see Nvidia dropping the price of both lines. The RTX cards just seemingly extended the roof of the HEDT gaming market instead. If anything the “used 1080ti” became the most lucrative card in the market as enthusiasts (try to) get the 2080 or 2080ti and sell their old 1080ti,

    Oh, and AMD will come with a response to the 1080 and 1080ti in 2019 (possibly early 2019, which might explain why Nvidia jumped the gun on the RTX launch, in my opinion).
    So if your wallet is itching, maybe wait and let it itch until late winter/early spring and see what AMD has, or if the Nvidia 1000 or 2000 series drop in price (which they should).

    More Freesync 2 displays are appearing (Freesync 2 has stricter spec requirements than original Freesync). Freesync 2 also guarantees some form of HDR, and more importantly requires LFC (low frame rate compensation, which means the maximum Hz must be at least 2x the minimum Hz, allowing frame doubling/repeating when your FPS drops below the minimum Hz your monitor can do).
    Current consoles (like the Xbox One and Playstation 4) support Freesync; the Xbox One X (/S?) supports this over HDMI, and some games support it now. Since these consoles are both AMD powered, the PS4 should get this soon too I hope, and the next gen consoles from both brands will have Freesync or HDMI VRR "out of the box".
    Intel’s upcoming GPUs will support Freesync (or their own branded variant), I highly doubt Intel want’s to license G-sync.

    In Norway right now there are about 339 “Freesync” displays, of these 6 are “Freesync 2”, while there are only 51 G-sync displays. Next spring/summer the Freesync 2 numbers might match the G-sync number.
    Oh, and Nvidia could technically do Freesync: in the DisplayPort standard it's called Adaptive Sync, and in the HDMI 2.1 standard it's VRR (Variable Refresh Rate). On laptops with G-sync there is no G-sync module; it's all done using eDP (the laptop variant of DisplayPort, which Freesync originates from).

    Speaking of G-sync modules, Nvidia does have G-sync 2, but I've yet to see any displays in Norway with it. Maybe because the G-sync 2 module is now a small GFX card with a fan and 3GB of RAM. That's insane.

    One interesting thing that is better than RTX, in my opinion, is DLSS, the AI based anti-aliasing, which should be able to provide better antialiasing at the same FPS cost, or equal antialiasing at less FPS cost, compared to supersampling or downsampling. This is also something I'm guessing AMD will answer in their new cards, and it might have more impact on gaming than RTX.

  5. Addie says:

    The other thing to notice about the radiosity picture is that the light being cast on the walls is slightly pink, because it has reflected off of the red floor – this makes the whole shadowing and illumination much more convincing in the scene. It's generally good enough to determine radiosity by the "Monte Carlo" method – do as many rays as you have time for, at random, and the result should look "good enough".

    From what I’ve read of these cards, they’re nothing like fast enough to raytrace an entire scene, at a resolution and framerate that would be acceptable to someone who’s spent £££ on a graphics card. So we shouldn’t be expecting full-screen reflections off of curved surfaces, refraction, ‘caustics’ (eg. the focussing of light through a glass of water) or anything mind-blowing. Cheap radiosity effects, and maybe ray-tracing on limited areas – on small puddles, which can change to screen-space reflection when you get closer – or on better looking eyeballs, which take up a small part of the frame, but which make a character model look much more alive and convincing when they’re done well.

    On the other hand, if AMD respond with graphics cards which just contain more and faster compute units, which benefit all kinds of rendering without having to faff around with special lines of code, well, I think that would be a lot more interesting to most people. Lose the ridiculous price of the new 20XX series, too.

    1. Agammamon says:

      It would be interesting to see if they could break this off, similar to how, if you have two GPUs, you can dedicate one to PhysX. Not that it makes a noticeable difference in framerates to do that today.

      One GPU for rendering, one for raytracing.

        1. Who the hell can afford two 2080tis?
          Certainly not the average gamer. The average gamer (aka mainstream) has a 1080p monitor (65%+ according to the Steam survey, IIRC), and I'm gonna guess one of the most popular cards is the GTX 1060.

          The upcoming RTX 2060 (which may or may not be a GTX 2060 if they disable the RTX cores) won't replace the 1060; instead it'll be in between the 1070 and the 1080, sorta like a 1070ti. That will give Vega 56 a good fight, I think.

        Nvidia seems in no hurry to end the 1000 line.

        1. Agammamon says:

          Not *now*. But a few generations down the line a lot of people will be upgrading while sitting on an older generation card with this capability. It'd be nice to be able to use it, in addition to trying to claw back as much as you can in the second hand market.

          It's what I did with an older card when I upgraded my rig – 1080ti as the primary GPU, 970 running PhysX. At least for a while – it was only a handful of FPS increase, so I eventually passed it on to a nephew.

          And I don’t think the 2060 will ship without RTX. The 2050 might – every generation’s *50 card is effectively a rip-off and a scam where, for the money, you’re better off buying an *70 of a generation or two earlier.

  6. C__ says:

    I wonder how much better graphics still matter from a sales perspective. Do new consumers care about this, or is the game industry preaching to the choir?

    1. Cubic says:

      It will matter when it makes existing games look old and busted.

    2. Echo Tango says:

      I’ve definitely got some co-workers, who value graphics at least as much as gameplay, if not more. I think for the target audience of these games, it matters a lot.

    3. Agammamon says:

      The sort of gamers these games market to really do care about this stuff. They won’t *notice* it in-game – indeed, with that framerate hit they’ll be leaving it turned off – but they can brag about how much better their preferred game looks compared to some other one.

    4. shoeboxjeddy says:

      Hitting benchmarks matters for marketing. Saying your console game runs at 60 FPS and 4k is a marketing boon even if the game is otherwise not very special. Very specific graphical improvements (like the hair tech Tomb Raider bragged about a while back) are mostly interesting to high spec hobbyists.

  7. Geebs says:

    I think the RTX is doing a single bounce as a post-processing effect; basically like screen space reflections, but instead of getting the colour information from the main buffer (with fallback to a cube map), it’s actually drawing and lighting the scene from the perspective of the reflected ray for each reflective pixel.

    Which is pretty impressive in itself, but also explains why it’s imprecise, noisy, limited to surfaces with a particular specularity, fill-rate constrained, and kills the framerate.

    1. I think Digital Foundry mentioned two rays.

      Not sure if they meant bounces or pixel subsampling; I'm guessing subsampling, as it was in relation to the black dots seen on reflective water if you moved the camera a lot while looking at it.

      I “think” they use 2-3 bounces. The first one is the ray from the camera to the first polygon (or pixel?), then a ray to get the reflection, then a ray from the reflection to lighting.
      In other words, 1 (“mirror”) reflection, and two light bounces. I can’t recall seeing more than a single reflection in the Battlefield stuff.

      But the Metro stuff seemed to indicate at least 2 light bounces in dark corners (so light can go around a corner, bounce off something then cast that color on the wall that is in the shadow of the light source).

      I wonder what a secondary reflection would look like with RTX as seen in Battlefield? Just a light "blob", or nothing at all!?

      I just find it amusing that Battlefield loses around 50% FPS with RTX on, while Hitman 2 uses a secondary camera to render the world twice to show in mirrors that you look at. I wonder if that scales the same? I.e. turn off the mirror effect in Hitman 2 and you'll double your FPS?

      Sure, Hitman 2 is a very tailored experience. But the single player in Battlefield could also be very tailored.

      1. Nick Powell says:

        I wonder what a secondary reflection would look like with RTX as seen in Battlefield?

        Like this: https://www.youtube.com/watch?v=1IIiQZw_p_E&t=135

  8. Agammamon says:

    Dear games industry: Good job. That’s nice, but don’t make me upgrade my graphics card for this. It’s nice and all, but it’s not “five hundred and ninety-nine U.S. dollars” nice.

    We’ve gotten lucky Shamus – this tech won’t be *able* to become mandatory for another half decade. Once the next console iteration hits and maybe that integrate it. Until then developers will still have to design games around the non-ray-tracing paradigm and we’ll see it as an option in the PC versions.

    1. “to become mandatory for another half decade. Once the next console iteration hits”

      Sensible guesstimate. By then the tech will have matured, and Vulkan will also have an official API for raytracing. AMD will have their thing too, and Intel will actually have GPUs for sale (how weird will that be?). And prices for a raytrace capable card will be suitable for mainstream wallets (I'm on the poor side of mainstream, even).

  9. Dreadjaws says:

    This kind of thing drives me crazy; it’s self-defeating. If you need to show me a comparison video for me to be able to tell the difference between your “impressive” new tech and the lack of it then it’s certainly not impressive enough to be worth my time and money. Hell, most of the time I can’t even tell the difference anyway.

    Like, show me a game running in 60 fps when I’m used to 30 and I can easily tell it looks different, even if you don’t show them side-by-side. Hitman Absolution’s graphics impressed after the previous games (or even other games outside the franchise) because of its ability to show a huge, realistic crowd of people with no noticeable slowdown or any other detriment. Those kinds of things are palpable; they can be instantly noticed. Those are changes worthy of an upgrade (assuming you care enough about graphics), but this? I can live without it. Hell, I’ll be hard pressed to even remember it exists a week from now.

    1. Viktor says:

      Yeah, a mere graphical upgrade doesn't do it for me at this point. Once you hit the 360/PS3 era, that was about when photorealism got good enough that art style outweighs processing power. Now, there's definitely some things that matter here. If a game can make mirrors a real thing again, sure, that will be a big advantage once it becomes a part of the average creator's toolkit. And if this makes lighting a level easier on the devs, then I support it, since anything to reduce dev time is a good thing. But for the average user? Back when Skyrim was released there was a gif floating around of one of the more picturesque waterfalls. Someone captioned it with a standard "Look at the beauty of what God has created and the love that went into it" line. Skyrim 2011 was realistic enough to fool a casual viewer into thinking it's a real image. There's a limit to how much "better graphics" actually matter.

      1. Dreadjaws says:

        Indeed. Art style is where it’s at. Though it’d be nice to have mirrors again here and there, they’re not something that worries me.

  10. Kdansky says:

    It’s not there yet. If you buy the top end $1500 GPU by nVidia, the 2080 Ti, then you get barely playable frames per second, just about scratching 60 at 1080p – The same hardware, or the last-gen hardware which is essentially the same, manage an easy 60 at 4K, or 150+ at 1080p, or 100+ at 1440p, all much nicer variants. The last gen cards are also $500 to $1000 cheaper.

    They also sell two “cheaper” models (the 2080 and the 2070), which hardly deserve the moniker of “raytracing capable”, as they produce an unplayably laggy mess, even at 1080p. Especially on the 2070 you have framerates in the stuttery 20s if you turn it on.

    So no, ray tracing is not there yet. It’s barely playable, there are a total of five (?) titles that support it, and it requires a GPU so expensive you could build a full gaming PC for the same price instead. Bonus: It’s hard to spot the difference because the effects are quite subtle: Some surfaces now reflect.

    1. The funny thing is that Nvidia could have waited on the RTX stuff and just released the 2000 series to directly replace the 1000 series at roughly the same price, giving people a 25% performance increase. Right now the RTX cards are priced way too high. And not only is AMD still competing with Nvidia in the mid/low end, but Nvidia is kinda competing with themselves in the high end with the 1080ti, which still kicks ass, as AMD has no answer for it until next year.

      It feels like Nvidia launched RTX one year too early.

    2. Duoae says:

      Yeah, I came to the comments to make the same point about the price. $599 gets you almost nothing with respect to ray tracing.

      Given the price increases for “equivalent level” hardware over the last few years, it looks like it’s going to be a long time before the mass market will be gaming at 1440p or higher.

  11. Retsam says:

    IIRC, the difference between path tracing and ray tracing is that ray tracing essentially starts at the viewer's "eye" and works backwards, while path tracing starts at the light source and works forwards.

    Ray tracing is cheaper – you only cast one ray per pixel in the image – but less accurate, as that's not how light actually works (eyes collect light that bounces into them; they don't shoot out vision beams).

    Path tracing is far more accurate, but more expensive: just a simple lightbulb shoots an immeasurable number of light rays in all directions, obviously you can’t trace all of them, so in practice a random sampling is used. But it still needs to be a pretty large number: for even a single light source you’re probably going to need to trace more rays than the one-per-pixel necessary for ray tracing, and I’m pretty sure it just gets more expensive as you add more lights to the scene.

    1. I thought it was the other way around?! With ray tracing you trace the ray of light from the source, while path tracing reverses the path it took (just rendering what the camera saw).

      These things really should have different names, this gets confusing quickly.

  12. What is a huge shame about "RTX" is that it can't be used to improve old games.
    Just look at this guy doing a retrospective, evaluating Oblivion now so many years later: https://www.youtube.com/watch?v=eapSJjo-XmQ

    I could only think that if RTX could have been applied to enhance old titles with no code change to the old games, people would have gone nuts. Even Nvidia's DLSS, which is really promising as an anti-aliasing method, suffers from the need for a developer to submit game footage/data to Nvidia, then wait for a driver update to roll out with the game profile, and only then can you turn on DLSS in a game that supports it.

    There is an old saying, "Build it and they will come"; the only issue here is developers kinda need to build the roads to "get there".

    One interesting thing, though, is that some devs may take this as a challenge and try to make current tech games emulate the RTX stuff (Hitman 2's fully functional mirrors, for example: despite a few oddities, like the reflection of Hitman's glasses missing a shader effect, they are still realtime reflections of the rest of the game world).

    I’m sure there will be ways to cheat away some of the flaws of screen space reflections or other ways to do that (like in Hitman 2), and ways to make ambient occlusions (without halos around player characters) and shadows better etc.

    Kingdom Come Deliverance’s landscape looks gorgeous in the videos I’ve seen, and I can’t say that ray tracing would make it look better. The lighting in that game is really well done.

    The one tech I wish Nvidia had jumped on instead is Freesync 2. If NVidia had supported Freesync I’d have gotten a Nvidia card instead of AMD. But I can’t afford a new monitor yet, and when I do I can’t afford a G-sync one, so Freesync it is.

  13. (my last comment on this stuff so I don’t flood this page)

    I found this thread (on RTX) rather interesting on the Unreal engine forums https://forums.unrealengine.com/development-discussion/rendering/1517922-the-rtx-elephant-in-the-room?p=1518000#post1518000

    Post #5 by Frenetic Pony in particular.

  14. default_ex says:

    My problem with this whole ray tracing nonsense is that Nvidia and the enthusiasts are treating it like tracing out light rays magically makes the scene hyper realistic when, seeing the comparisons, it clearly doesn't. It's really not hard to figure out why it doesn't, either. Light doesn't work like that.

    There are many processes that govern how light works. The absorption and emission of photons by atoms. The wave-like characteristics of electromagnetic radiation. Properties of quantum mechanics. We can simulate all of those and produce incredibly life-like imagery but computational power to do so for more than incredibly simple scenes just isn’t there yet. In real time programs, we are using nothing more than naive approximations equivalent to simulating a black hole with a sheet of rubber.

    My point being, it’s going to take a hell of a lot more than simply tracing the rays to get that jump in realism everyone dreamed up when hearing the words “ray tracing”.

    1. Tom says:

      I live in the hope that it eventually becomes so cost prohibitive to do this that they’ll resort to cheaper methods of increasing realism. Like writing more empathetic, nuanced characters and more plausible dialogue. Maybe we can even just distract people from our embarrassing lack of perfect realism by putting really engaging new concepts into our plotlines…. I know, I know, it’s a cheap trick, but it might work, if our audience can ever forgive us for forcing them to, you know, sort of “suspend their disbelief.”

  15. Mephane says:

    There’s ray tracing and path tracing, but the differences between the two are too subtle for this discussion.

    I think the difference is actually quite important in the context of gaming. The way path tracing works is that the algorithm is given a limited amount of time to calculate as many rays as possible. When the time is up, whatever is done is done, and shown on the screen. This way, the frame rate can be kept stable, at the cost of the image becoming more grainy when the scene becomes too busy. The first video you linked showed exactly that.

    The cool thing about this is that, while not as pretty, we can work with fairly grainy pictures unless we have to read small text, while FPS drops (especially in VR) are much more jarring. It's a bit like the game dynamically adjusting the image quality to maintain a stable framerate.
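
    To illustrate the time-budget idea, here's a minimal sketch; the 16ms figure, the names, and the stubbed-out path sampler are all my own assumptions, not from any shipping renderer:

    ```cpp
    #include <chrono>
    #include <vector>

    struct Color { float r, g, b; };

    // Stand-in for tracing one random light path through a pixel.
    static Color SampleOnePath(int pixelIndex) {
        (void)pixelIndex;
        return {0, 0, 0};
    }

    // Keep adding samples until the frame budget runs out, then display the
    // running average (accum / count per pixel). A busy scene gets fewer
    // samples per pixel, so it shows up grainier, but the framerate holds.
    void RenderWithBudget(std::vector<Color>& accum, std::vector<int>& count) {
        using Clock = std::chrono::steady_clock;
        const auto deadline = Clock::now() + std::chrono::milliseconds(16);
        int i = 0;
        while (Clock::now() < deadline) {
            Color c = SampleOnePath(i);
            accum[i].r += c.r; accum[i].g += c.g; accum[i].b += c.b;
            count[i] += 1;
            i = (i + 1) % (int)accum.size();
        }
    }
    ```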

    Also, some games might employ the effect for artistic reasons, e.g. in horror games.

    ——————————–

    That said, I think what is huge about raytracing/path tracing is that it scales better than rasterization. Basically, as the scene becomes more and more complex (more polygons and light sources), there is a break-even point where raytracing is more efficient than rasterization. We are now approaching this point; however, I am not going to try and estimate when that will actually happen. But it will happen.

    Beyond that point, raytracing can be employed as a means to actually improve performance, and that is when I think the tech will become mainstream and eventually replace rasterization almost entirely. (I say almost because I am sure Bethesda will still stick to their Gamebryo/Creation/Whatever engine way past the point when it really ought to be replaced with something new.)

    Also, while 3D rendering generally lends itself to parallelization, as far as I know raytracing does so even better, to the point where hypothetically you could have one rendering core per pixel for maximum performance.

    1. Mephane says:

      P.S.: I think where the current RTX shines is flashy stuff like that plane crashing and exploding, and seeing it reflected in all the windows, puddles etc. It looks so realistic that I get a kind of uncanny valley effect there, because it is missing one crucial detail that people who haven't witnessed an explosion* from such a distance tend to forget: heat. An explosion like that sends out a lot of heat by radiation; in reality you can feel that, and how it suddenly comes and goes.

      That makes me think, maybe there would be a market for a peripheral which is just a fast-reacting IR heater attached to the screen, aimed at the viewer, that could simulate this. I guess the biggest hurdle is that game makers would have to integrate support for such a device into their games for it to work.

      *In a theme park at a stunt show.

  16. Tom says:

    Oh, great, now we can make our game characters’ implausible dialogue, lack of characterisation, paper-thin plot arcs, clunky, robotic movements and barely-existent AI (not to mention the outright death of emergent gameplay. I really miss the days when devs got excited by things like emergent gameplay…) stand out in even sharper contrast to their photorealistic chin stubble… unless we go for MOCAP, of course, in which case we might as well just drop the pretence entirely and admit that every triple-A game developer these days really wants to be a film director, at which point we’ll just resurrect the FMV genre but in 4K, everyone will be just fine with built-in motherboard graphics, and GPUs will be sold exclusively to bitcoin miners…. /s (can you tell what era I grew up gaming in? :-P Your article on the Golden Age is probably my favourite…)

    Seriously, though, the way you phrased this post, to me and my semi-lay understanding (that is, as an engineer but in a different field), raises a fascinating question. I always had the vague impression that a per-polygon rendering algorithm was preferred over a per-pixel one for real-time graphics because it greatly optimised calculation speed, presumably because the whole area of a triangular face (covering many screen pixels) can be "mapped" straight through a calculation pipeline as a single matrix, or some such similar mathemagical trickery. Since the trend in graphics is towards ever more complex scenery with high poly counts and just lots and lots of very small things (henceforth "VSTs") cluttering up the screen (not to mention ever higher draw distances, which will reduce the average number of pixels any random polygon in a scene can cover at once), at what level of VST density does the number of complex things in-scene begin to approach (or maybe even exceed??) the number of screen pixels (the point at which raytracing automatically becomes a viable option, maybe even the more efficient one)? Could one be a classy software engineer and define a dimensionless number to express this? :-)

    1. PPX14 says:

      Agh save us from dimensionless numbers!

    2. Droid says:

      Whatever it is, it's definitely an O(1).

  17. Anorak says:

    I like raytracing. The mathematics behind it is a lot simpler than the transformations that go into a typical 3D image in a game.

    Raytracing: shoot a ray out of a pixel into the scene, see if it hits anything.
    If it hits something:
    Check if it’s hit a reflective surface
    If it did, bounce it off that object using basic geometry maths. Check if it intersects something, and repeat as needed. The number of bounces determines how many reflections you get between reflective surfaces.

    If it doesn’t hit a reflective surface, you can pick a shading algorithm to colour the pixel. If you did basic flat shading, then shapes are all a uniform colour and have basically no detail.
    Otherwise you can do something to calculate the angle between the point you’ve hit and the light source, multiply that by the colour to get the shading.
    Shadows are easy, and basically free, because if the point you’ve hit can’t see a light source, then it’s shaded by whatever is between it and the light.
    You end up with very hard shadows like this.

    There are a lot of tricks to get things like distributed shadows, by doing the shadow calculation a bunch of times but with some randomization applied to the vector, but that’s computationally expensive.
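
    For what it's worth, the recursive core just described fits in a screenful of C++. Here's a sketch with the scene and light queries stubbed out (all names are mine, for illustration only):

    ```cpp
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Hit  { Vec3 point, normal, color; bool reflective; };

    // Stand-ins for the real scene queries (always miss / always lit here,
    // just so the sketch is self-contained).
    static bool Intersect(const Vec3& o, const Vec3& d, Hit* h) {
        (void)o; (void)d; (void)h; return false;
    }
    static bool CanSeeLight(const Vec3& p) { (void)p; return true; }
    static Vec3 LightDirFrom(const Vec3& p) { (void)p; return {0, 1, 0}; }

    static float Dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    static Vec3 Reflect(const Vec3& d, const Vec3& n) {
        float k = 2.0f * Dot(d, n);
        return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
    }

    // Shoot a ray; mirrors recurse (depth caps mirror-in-mirror bounces),
    // everything else gets angle-to-light shading plus the free shadow test.
    Vec3 Trace(const Vec3& origin, const Vec3& dir, int depth) {
        Hit hit;
        if (!Intersect(origin, dir, &hit)) return {0, 0, 0};   // background

        if (hit.reflective && depth > 0)
            return Trace(hit.point, Reflect(dir, hit.normal), depth - 1);

        if (!CanSeeLight(hit.point)) return {0, 0, 0};         // hard shadow
        float lambert = std::max(0.0f, Dot(hit.normal, LightDirFrom(hit.point)));
        return { hit.color.x * lambert, hit.color.y * lambert,
                 hit.color.z * lambert };
    }
    ```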

    Path tracing is different.
    With path tracing, instead of shooting a ray and then calculating what it hit, you shoot the ray, and when it hits something you bounce it at random (for a matte surface), and then you keep going, bouncing it around the scene, until it hits a light source. In an enclosed scene this will happen eventually.
    Every time it hits something, you “pick up” a little bit of the properties of the thing you’ve hit, until you finally get to the light source. This allows for colour bleed to happen.
    This is insanely expensive. For a simple scene like this one:
    https://blog.daft-ideas.co.uk/wp-content/uploads/2012/05/Output_120512174024_1431.png
    It took about 10 hours of bouncing rays around.

    You can do it with fewer samples, but then the results look like this:
    https://blog.daft-ideas.co.uk/wp-content/uploads/2012/05/Output_120512171309_12.png
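
    And a matching sketch of the path tracing loop, with the same caveats (hypothetical names, a stubbed scene query, and a placeholder for the random bounce):

    ```cpp
    struct Vec3 { float x, y, z; };
    struct Hit  { Vec3 point, normal, color, emission; bool isLight; };

    // Stand-ins: the real versions query the scene and pick a random
    // direction on the hemisphere above the surface.
    static bool Intersect(const Vec3& o, const Vec3& d, Hit* h) {
        (void)o; (void)d; (void)h;
        return false; // always "miss", so the sketch is self-contained
    }
    static Vec3 RandomHemisphereDir(const Vec3& normal) { return normal; }

    static Vec3 Mul(const Vec3& a, const Vec3& b) {
        return { a.x * b.x, a.y * b.y, a.z * b.z };
    }

    // One random walk: bounce around the scene, picking up a bit of each
    // surface's colour, until the path reaches a light. Average thousands
    // of these per pixel and the grain in that second image fades away.
    Vec3 TracePath(Vec3 origin, Vec3 dir, int maxBounces) {
        Vec3 throughput = {1, 1, 1}; // what the path has "picked up" so far
        for (int i = 0; i < maxBounces; ++i) {
            Hit hit;
            if (!Intersect(origin, dir, &hit))
                return {0, 0, 0};                     // escaped the scene
            if (hit.isLight)
                return Mul(throughput, hit.emission); // colour bleed lands here
            throughput = Mul(throughput, hit.color);  // matte surface: tint...
            origin = hit.point;
            dir = RandomHemisphereDir(hit.normal);    // ...and bounce at random
        }
        return {0, 0, 0}; // gave up; real tracers use Russian roulette instead
    }
    ```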

    For raytracing, I stopped my project after writing a raytracer that could handle OBJ files, and implementing various methods for quickly partitioning the space so that I don't have to check every single triangle every time.
    Because if you don't, you have to multiply the number of triangles in the model by the number of lights in the scene by the number of pixels you want to draw. IT'S EXPENSIVE.
    https://blog.daft-ideas.co.uk/wp-content/uploads/2012/05/7124760719_6990f6589e_b.jpg

  18. Simplex says:

    “It’s nice and all, but it’s not “five hundred and ninety-nine U.S. dollars” nice.”

    Oh, you sweet summer child ;)
    https://www.nvidia.com/en-us/shop/geforce/?page=1&limit=9&locale=en-us&gpu=RTX%202080%20Ti

    1. Tim Keating says:

      Hehe my thoughts exactly. “Umm, that card is ON SALE right now for $1400…”

      1. Simplex says:

        Yeah, the prices of GPUs today are just insane. And of Intel CPUs. And of RAM.

        Well, at least SSDs got cheaper.

        1. Tom says:

          Thanks, Bitcoin.

    2. Droid says:

      I think he was going for the difference in price between RTX-enabled cards and non-RTX-enabled cards of roughly the same power.

  19. GTB says:

    I’m really looking forward to this becoming more important because I can’t tell the difference in graphics and I’d like the 10-series cards to go on sale.

  20. Roxor128 says:

    Real-time raytracing is not a new thing. The Demoscene has been doing it since the mid-1990s. Granted, they were using very simple scenes with just a few hundred rays for the entire frame and the resolution was extremely low, but they still pulled it off. See “Transgression 2” by MFX for one of the earliest examples.

    Fast forward to the turn of the century and they’d come up with tricks to get things sharper by being clever about where to actually trace rays and interpolating the rest of the pixels. “Heaven 7” by Exceed uses such an approach.

    Both these examples have recordings available on YouTube, and the demos themselves are archived and available for download, though Transgression 2 is a DOS program (works fine under DOSBox) and Heaven 7 needs either DOS with MMX or Windows 98 to run (also works fine on 64-bit Windows 7).

    These were all done on the CPU alone. Imagine what you could do if you adapted these techniques for the GPU.

    Problem is, the adaptive sampling technique used by demos like Heaven 7 doesn’t map well onto the GPU’s “each pixel does the same thing with slightly different inputs” model. There might be ways to do it, and I do have a few ideas, but I don’t have enough experience with GPU programming to try them out just yet.

  21. SG says:

    These cards are so expensive, it will take a few years before everyone can afford them.
