Experienced Points: Has Rendering Technology Stagnated?

By Shamus Posted Wednesday Dec 5, 2018

Filed under: Column 61 comments

My column this week has a title that, on further reflection, could have been better chosen. The title asks “Has Rendering Technology Stagnated?”, but really that question was answered last week. This week the question I’m actually answering is, “Does it Matter?”

I mean, I guess the title kinda works if you think of stagnation in terms of “Not producing significantly better images”. Even so, “Has Visual Fidelity Stagnated?” would have been less ambiguous.

Such are the perils of working with a deadline. I’ve been dividing my time between publishing my book and other important work[1] and I didn’t get to work on last week’s column when I should have. When I finally started writing I realized I was juggling two different ideas:

  1. Gaming hardware hasn’t really advanced very much in the last decade. However…
  2. …it doesn’t matter, because it’s hard to make real gains in visual fidelity and gameplay is far more important at this point.

So I thought I could split the one column into two. That worked out in the sense that it gave me more breathing room this week, but I really should have finished writing both of them before publishing the first.

If this was just a blog post then it wouldn’t be a problem. I could push the articles off for a week and put up some goofy filler post in the interim. That’s fine when you’re publishing stuff to your own site, but it’s not really acceptable when you’re being paid to do a job. The folks at the Escapist are lovely people and very easy to work with, but this isn’t their hobby. They’re trying to run a business, and the last thing anyone needs is an unreliable contributor.

To be clear, nobody really complained about the column. This isn’t a situation where I turned in work that disappointed someone. Rather this is a case where I realize I could have made a much better pair of articles if I’d had another week.

The solution here is for me to knuckle down and build up some lead time. This is the same advice I keep giving to publishers: Don’t try to finish everything in crunch mode. Take your time. Put in half the hours over twice as many days and you’ll get a better product for the same work.


Now that I just spent 300 words telling you it’s not very good, please do be sure to read the column.



[1] Assuming that playing Prey: Mooncrash counts as work.


61 thoughts on “Experienced Points: Has Rendering Technology Stagnated?”

  1. Redrock says:

    I wonder if we’ll see a resurgence in stylized graphics. Breath of the Wild makes a solid case for it. And overall the existence of the Switch is a good incentive to go for style instead of raw horsepower. The Switch has a decent install base that only keeps growing and will likely keep growing, and it seems that anyone for whom it’s remotely possible wants to have their games on it. But there’s only so much Panic Button can do.

    1. Echo Tango says:

      Did something change in the last five-ish years in what is considered “stylized” graphics? We’ve had photorealistic (within the limits of hardware) graphics for about a decade; there’s not much room for changing that half of the comparison, since it’s already basically maxed out. On the other end, though, the bar for what counts as stylized seems to have been lowered, such that anything vaguely non-photorealistic is held up as an example to follow. Breath Of The Wild is stylized, but only very slightly in my opinion. My friends who first showed me the game considered it stylized too, however. I was a bit confused, since the same series has other titles like Wind Waker and Spirit Tracks, which are much more stylized. Even Mass Effect Andromeda seems to be considered stylized, when it looks to me like a photorealistic game. Am I going crazy? There are many other games I could list with more stylized graphics, without even going into indie games. What’s the dillio?

      1. Chris says:

        Seriously? I haven’t played BotW, but every shot I’ve seen is extremely stylized. Cel-shading style shadows, exaggerated proportions, brush strokes in the textures, etc. It’s far closer to impressionism than realism. I don’t understand how someone could not consider it stylized. It isn’t the Disney animation style of Wind Waker, but it’s clearly very stylized.

        I can understand your confusion over Andromeda. If it fell through a time hole and came out 15 years ago, it almost certainly would have been considered photorealistic, due both to fidelity and what other stuff looked like at the time. But today, when photorealism is stuff like Title of the Tomb Raider and Call of Battlefield and The Witcher 3 and even the earlier Mass Effects, it appears much more stylized. Ignore the aliens and look at the humans: they all have highly exaggerated, unrealistic facial features and odd proportions. There’s a thickness and gumminess to the skin that you don’t see in its more photorealistic peers. The clothing doesn’t look like real fabric. The characters all look rather doll-like, in my opinion. Whether or not that can be called stylized comes down to whether you think it was a deliberate creative decision or if you think the artists were shooting for photorealism but failed. Again, I haven’t played it, but it seems to be fairly consistent in all the shots I’ve seen, so I would lean toward it being a stylistic choice.

        1. Echo Tango says:

          I have to dispute Andromeda entirely here. You claim the human faces are all exaggerated, but I just spent 10 minutes looking at screenshots and cutscene videos, and they look normal to me. I work down the hall from some people who look very similar to the men and women I saw in that game, both in faces and in body proportions. They don’t look like supermodels or anything, but they look like normal humans to me.

          As for Breath Of The Wild, I have to make a correction. I originally thought it had only a light or medium amount of stylization, from the limited time my friends showed me the game (over the shoulder camera angle, no cutscenes), and from the screenshots I saw. Most screenshots show off the scenery, and only have your character as a small figure in the middle of the screen, which makes it look very uniform in style. Given that, the game world looks like it’s got a little bit of pastel-shading, and that’s it – very mild as far as visual styles go. Perfectly acceptable for a game, although very confusing when I heard people call it “highly” stylized.

          However, having examined some better screenshots, I now have a larger complaint with the game – it’s very inconsistent in its visual style, both in quality and in actual aesthetic. Let’s use these screenshots from IGN, since they cover a good cross-section of the game’s environments, characters, and other assets. Picture 4/37 (sorry, I can’t link directly) is basically what I originally saw – mild pastel coloring, and not a lot of visual style being shown. Nothing to complain about, but nothing that made me especially want to play the game for myself. Let’s move on to the good. Pictures 5 and 26 are fantastic – they look like stills from an anime film! The princess and that woman both look great.

          However, the rest of the game has some problems. First is the greatly varying quality of assets on display. The first picture shows a common problem that can be solved easily if your budget is in the hundreds of millions[1][2]. You can see, to Link’s right, that the rock texture is stretched out badly. In a low-fidelity game it wouldn’t stand out much, but so much of the game is high-resolution or high-polygon that it sticks out like a sore thumb. You can see the same problem in picture 10, with the rock Link’s standing on.

          Another problem is inconsistency in detail; some characters or objects are extremely high-fidelity, and some very low. In and of itself, that’s not necessarily a problem, but high detail draws the attention of the viewer, so important objects, locations, characters, etc. should be high-detail, and background things should be low-detail. However, there doesn’t seem to be any rhyme or reason to it in this game. Picture 3 shows Link in a medium-detail set of clothing, which matches the detail level of the princess in picture 5. However, picture 34 (and some others) shows Link in extremely fine detail; it looks like about double or triple the detail level, and looks out of place by comparison.

          Picture 25 shows a fairly bad example of a location, since it’s got high fidelity in half of the location, and very low fidelity two feet away, on the same building/structure. Most of the structure seems to be aiming for a high level of detail, exemplified by where Link is standing – details in the floor’s mural, colors, and the rest of the texture. The edge of the platform, however, looks like something from the N64 – low polygons, low texture detail, and they didn’t even smooth out the surface normals[3], on what’s clearly supposed to be a rounded structure! I’ll admit, the game is stylized, but the execution is very poor, given that this is a Zelda game with decades of experience and a huge budget behind it – very disappointing!

          [1] Apparently, that’s the ballpark estimate.

          [2] Use a texture generator/blender that ensures landscape geometry has roughly uniform spacing between vertices and matching texture coordinates, so that textures don’t get stretched. Generating textures on the fly is a programming exercise for undergraduate comp-sci majors, so Nintendo and their huge budget have no excuse for textures that look this bad.
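          A rough way to automate the check this footnote describes (purely illustrative, not any real toolchain's API) is to measure texel density along each mesh edge; edges whose density differs wildly from their neighbours' are the ones that will look stretched or squashed:

```python
# Hypothetical stretch check: texels per world unit along one mesh
# edge.  The function name and numbers are made up for illustration.

def texel_density(p0, p1, uv0, uv1, texture_size=1024):
    world = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5  # edge length in world units
    uv = sum((a - b) ** 2 for a, b in zip(uv0, uv1)) ** 0.5   # edge length in UV space
    return uv * texture_size / world

# An edge 2 units long spanning half of a 1024px texture:
# 0.5 * 1024 / 2 = 256 texels per world unit.
density = texel_density((0, 0, 0), (2, 0, 0), (0.0, 0.0), (0.5, 0.0))
```

          A landscape tool would run this over every edge and either re-space the vertices or re-generate the UVs wherever the density falls outside some tolerance band.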

          [3] This, again, is a simple exercise for undergrads. If you don’t have the polygon budget for a very round surface, you can fake a lot of it by making your texture look decent and smoothing out the surface normals to hide the polygons. Instead of looking like a heavily angled set of triangles that each reflect light at completely different angles with a hard transition, it looks like one continuous surface. The polygonal silhouette of the object against its background will still stand out, but it’s usually not as jarring, since the lighting difference between an object and its background usually isn’t as stark as the difference between two highly reflective flat surfaces at different angles (one very bright, the other very dark, with a hard line in between).
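          For the curious, that smoothing really does fit in a few lines. A minimal sketch in plain Python (no engine API; the mesh layout here is illustrative): average the normal of every face touching a vertex, then renormalize.

```python
# Smooth vertex normals: each vertex normal is the normalized sum of
# the normals of all faces that share that vertex.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def smooth_vertex_normals(vertices, faces):
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in faces:
        # Face normal from the cross product of two edge vectors.
        n = cross(sub(vertices[i1], vertices[i0]),
                  sub(vertices[i2], vertices[i0]))
        for i in (i0, i1, i2):        # accumulate onto each corner
            for k in range(3):
                acc[i][k] += n[k]
    out = []
    for v in acc:
        length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5 or 1.0
        out.append(tuple(c / length for c in v))
    return out

# A flat quad made of two triangles: every vertex normal comes out
# pointing straight up, so the two triangles shade as one plane.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
normals = smooth_vertex_normals(quad, [(0, 1, 2), (0, 2, 3)])
```

          On a curved surface the averaged normals interpolate across each triangle during lighting, which is what hides the hard facet-to-facet transitions the footnote complains about.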

          1. Geebs says:

            they look normal to me

            *Camera slowly pans out to reveal that Echo Tango lives and works in Fortnite*

            I actually think that everybody is simultaneously correct here. I still think that Andromeda is going for a very slightly cartoony art style, just a couple of degrees shy of, say, Uncharted or the new Spider-Man. The devs of each of these games are all trying to avoid the Uncanny Valley by making characters who are just far away enough from 1:1 depictions of actual human beings to not appear creepy. Given that the cues that make something not-quite-human so unnerving are extremely subtle, it’s hardly surprising that we all view them differently.

            It’s also possible that we’re all looking at different things. For my part I played Andromeda on a PS4 Pro in HDR mode, using a fairly cheap TV with really-not-that-great HDR capabilities. It’s quite possible that there are differences in the skin shaders between the PS4 and the PC versions of the game*. In particular, my-face-is-tired lady looks almost cel-shaded on my screen and the standard Bioware human NPC paper-cut-out hairstyles look disconcertingly two-dimensional in the version I played.

            * I was actually considering signing up for the Origin subscription thing in order to take a look at Andromeda on the PC, but then the stuff about loot boxes actually causing an increase in childhood gambling hit the news and I’m really not in a mood to give any money to EA right now.

            1. Chris says:

              Yeah, the line is really fuzzy and subjective, and Andromeda is practically sitting right in the thickest part of the overlap. I think it’s entirely reasonable to make the argument that it’s badly executed photorealism. I don’t happen to agree, but I can totally understand that viewpoint. I don’t think Echo’s wrong for disagreeing that it’s stylized, but I can’t look at Shamus’s screenshots of Addison without thinking that they’re deliberately trying to evoke the old 80’s Star Wars action figures.

              I guess my succinct response to Echo’s original question about what is considered stylized is that yeah, it’s anything that is not attempting photorealism (adjusted for the tech level at the time). It’s not any one specific look.

    2. Christopher says:

      Probably just gonna be Nintendo and Blizzard who keep doing that while everyone else keeps chasing photorealism. Sony’s big first party output is all cinematic stuff at this point, and Activision, EA and Ubisoft have been chasing that for all of last gen too.

    3. slug camargo says:

      I dream of such a day. And I’m talking about the entire gaming world, not just Nintendo stuff. I’m convinced that Dark Souls would work wonderfully with a visual style similar to BotW (slightly darker, probably). A horror game taking inspiration from Sin City’s visual style? That would be the shit.

      Unfortunately, Nvidia is selling their upcoming 1000+ US dollar video card on the promise of how realistically it will render the reflection of an explosion in an NPC’s eye. So I don’t see that happening anywhere in the near future.

      1. Echo Tango says:

        Dark Souls with a visual style like Darkest Dungeon is a natural fit, in my opinion. :)

      2. Fizban says:

        I’m actually kinda the opposite. Eventually I played Wind Waker and sure, it was a good game, but now we’ve had Skyward Sword and Breath of the Wild both in the cel-shaded style. Where’s the next Twilight Princess? But apparently Nintendo has decided that all Zelda games are cel-shaded now.

        Dark Souls isn’t what I’d call fully photorealistic, but whatever it is, you get the idea. And in addition to the gameplay, that was one of the things I really liked about it: not being even more cel-shaded stylized stuff. One of the things that annoyed me about DS3 so much (and what I saw of Bloodborne before it) were the oversized spindly enemies that are supposedly human-ish hollows despite clearly not being the same thing as you, and the zomg stylized gothic visualizations.

  2. Dreadjaws says:

    “…if you look at the indie scene you’ll see people are inventing new ideas faster than anyone can play them.”
    “…to the playable black hole of Donut County…”

    This isn’t a new idea, though. It’s just another take on an old gameplay mechanic, seen in games like Katamari Damacy or Tasty Planet. Hell, even Spore had something like it at the beginning. I feel that the Superhot example is more appropriate, since while games using “bullet time” or time stopping exist (whether as a game mechanic, like in Max Payne or the V.A.T.S. in Fallout, or as a turn-based thing, like in X-COM), they just don’t play the same way.

    Now you’ve got me thinking. Where exactly does “a refinement” or “a different take” end and “a new idea” begin? Maybe this is just subjective.

    1. Chris says:

      My take is that everything is a refinement. Truly original, bolt from the blue ideas don’t exist because our brains don’t work that way. We refine and remix and tweak and adjust. If something seems truly new, it’s only because we haven’t seen the intermediate steps.

      1. Echo Tango says:

        Can you tell who’s the person in the board meeting, voting to change the brand colors from light blue to slightly lighter blue? :D

    2. Echo Tango says:

      It’s all a matter of degree; some games will have radically different gameplay than what’s currently on the market, and others will have smaller changes to existing formulae. It takes work (i.e. software programming) to make new mechanics, so most games will be very similar, because that’s what’s most readily creatable with the tools of the day.

  3. John says:

    Speaking as the owner of a budget gaming PC, I sure hope graphics have stagnated. It’s not the end of the world if they don’t, I suppose. From roughly 2010 to 2016, I did all my gaming on an antiquated Celeron machine with integrated graphics and clearly I survived. There will always be indie games to play and older games that I’ve never played before but always meant or wanted to. But for the first time since the mid 90s I’ve got a machine that has a plausible chance of playing anything new that comes along and strikes my fancy. Call me selfish, but I’d like the moment to last as long as possible.

    1. Echo Tango says:

      I have a similar opinion, but from a slightly different angle. For me, it’s because I want people to start making games truly cross-platform. Imagine, for example, a new 3D platformer game like Mario or Zelda in outer space, but which was available for purchase on Switch, Playstation, XBox, Raspberry Pi, Ouya, Windows, Apple, Linux, …, all on day one, because it had simple, cartoony graphics, something like Mario Galaxy or Wind Waker. Now imagine the majority of Nintendo-esque, all-audiences games are available cross-platform like that. People wouldn’t be locked to one specific vendor of plastic!

  4. Geebs says:

    I think it’s a bit like saying that nobody needs a 4K monitor (apparently nobody works with text these days); it’s not obvious how much things have improved until you go back and see how awful the old rendering methods look, even if they’re being used in new games.

    Case in point: compare The Order 1886’s graphics with DontNod’s new game, Vampyr. The latter looks great – for an Xbox 360 game. In modern terms, though, the modelling and animation are bad enough that they nearly manage to sabotage some otherwise pretty decent voice acting.

    1. John says:

      What do you mean by “bad” modelling and animation? From the sound of it, you could easily be talking about poorly executed character models and animation sequences rather than dated rendering technology. Years and years ago, I had an animation program for the Apple II. The quality of the animation I produced depended on my skill and the amount of care and effort I put into it. The graphics were always primitive. It couldn’t be helped, not with an Apple II. But as I learned to use the program and got more serious about doing so, my stick-figure ninja fights went from terrifying, janky messes to something resembling smooth.

      1. Geebs says:

        I mean, relatively bad, i.e. not up to the standard of work in the same field. Certainly a lot of it is talent and money, but there are also rendering techniques that DontNod could have used to make their models look better at relatively little cost; skin rendering is a fairly well documented area and makes a big difference.

        The main thrust of my point is, though, that the sort of superficially incremental differences in rendering/general art pipeline quality that have happened between, say, the start of this console generation and the current day*, are actually much bigger than you might think at first glance, and the difference really becomes apparent when you look at even slightly outdated tech.

        This genuinely makes a difference in the experience with and engagement with the game; the latest Tomb Raider, for example, actually looks great but they’re still using what looks decidedly like the animation tech from the 2013 game. For anybody who has played e.g. the recent God of War or Uncharted games, this actually-pretty-decent character animation tech from a big studio looks like total garbage, which I think is part of why people reacted so badly to it (for my money it’s a lot more interesting to play and the plot is a whole bunch less muddled than the previous one, although very obviously indebted to Naughty Dog).

        (* the differences between this generation and the last, which are often downplayed as being unimportant, are actually closer to an order of magnitude in terms of e.g. the amount of geometry in a given scene)

        1. John says:

          Okay. When Shamus said “rendering technology”, I took him to mean hardware. It sounds like you’re including software as well. I think I get what you mean now.

          To be honest, while I have played neither the recent Tomb Raider nor the recent God of War, I’m not sure I could have spotted the qualitative differences in animation if I had. I’d probably have been too distracted by trying not to die. And even if I did it’s possible I wouldn’t consider them important, especially if the Tomb Raider animations are, as you say, actually pretty decent.

          1. Echo Tango says:

            Shamus’ article is talking about the type of software that can run on a given set of hardware, in any given year/generation.

        2. Bloodsquirrel says:

          (* the differences between this generation and the last, which are often downplayed as being unimportant, are actually closer to an order of magnitude in terms of e.g. the amount of geometry in a given scene)

          The point, though, is that it’s becoming really hard for people to notice the difference. A “circle” with 16 sides is obviously better than a “circle” with 8 sides. But it’s a lot harder to tell the difference between one with 100 sides and one with 200 sides. Hell, it’s harder to tell the difference between one with 100 sides and one with 1000 sides.
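          To put rough numbers on the circle analogy (a back-of-envelope sketch, not anything from the column): the worst-case gap between a circle and an inscribed regular n-gon is r * (1 - cos(pi/n)), so each doubling of the side count buys less and less visible improvement.

```python
import math

# Largest distance between a circle of radius r and an inscribed
# regular n-gon, measured at the midpoint of an edge.
def polygon_gap(n, radius=1.0):
    return radius * (1 - math.cos(math.pi / n))

# 8 -> 16 sides shrinks the gap about 4x (7.6% of the radius down to
# 1.9%), while 100 -> 200 only moves it from ~0.05% to ~0.012%.
gaps = {n: polygon_gap(n) for n in (8, 16, 100, 200, 1000)}
```

          By 1000 sides the gap is a few millionths of the radius, far below anything a screen can show, which is exactly the diminishing-returns point being made above.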

          I differ slightly from Shamus in that I would name Crysis as the point where games stopped wowing us graphically. Crysis was a big deal when it was first released, even if it’s been largely forgotten now, and it stayed the benchmark for graphics until people stopped expecting there to be a clear-cut benchmark at all. Some of the scenes the engine was able to render were genuinely photorealistic, as in you could place them side-by-side with photographs and people couldn’t pick out which was which (granted, these were still images of foliage, which is easier to fool people with than moving images of people).

          1. Geebs says:

            Honestly, I disagree that the difference in geometry doesn’t matter. Take, for example, the difference in levels between Invisible War, Human Revolution and Mankind Divided; there are definite and tangible benefits for gameplay in terms of both scale and detail in each game.

            I concede that, as a huge graphics nerd, I’m probably looking at the details of the rendering more than is healthy and so modern incremental advances in bling-mapping are still just as impressive to me as eg the Pathways Into Darkness -> Marathon transition.

        3. Gethsemani says:

          I definitely think you are right. I returned to Dragon Age: Inquisition the other week, to finally finish off all the DLC, after taking a 4 year hiatus from it. I remember the animations in it being pretty good, especially during the talking cutscenes. But after having just played Red Dead Redemption 2 and enjoying the insanity that is its fully mo-capped side conversations in open world gameplay, oh boy was I spoiled with great animations. DA:I suddenly looked decidedly second rate, because RDR2 just pushes the envelope when it comes to high quality animation. It is the kind of thing that you don’t really notice until you look at older examples of great animations and realize just how much better the new game does it.

    2. Hyperbole says:

      Buuut- you totally don’t. You got by before, even with text, I guess you only started working with it recently?

      4K is nice if you’ve got the content for it; gaming is one of the few spaces that’s actually suited to it. But really, what these giant resolutions are for is making big monitors, TVs, and corporate tech applications look nice. Your giant projector screen needs an 8K output so it still looks crisp. The 27 inch monitor that’s a foot from your eyes is fine.

      1. Echo Tango says:

        A fifteen foot screen at the end of a board room has about the same apparent pixel size as a two foot screen on your desk. The extra pixels are very difficult to see at that distance. If you’re talking about large screens up close, then the extra resolution would be noticeable, but then you’ve got the problem that a person up close to that screen wouldn’t be able to take in the whole thing comfortably anymore.
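        That “same pixel size at a distance” claim is just similar triangles; here’s a quick sketch (the screen sizes and viewing distances are made-up but representative numbers):

```python
import math

# Angular width of one pixel in arcminutes for a viewer at the given
# distance; roughly 1 arcminute is the limit of normal human vision.
def pixel_arcminutes(screen_width_ft, horizontal_pixels, distance_ft):
    pixel_width = screen_width_ft / horizontal_pixels
    return math.degrees(math.atan2(pixel_width, distance_ft)) * 60

# A 15 ft 1080p board-room screen seen from 15 ft away subtends the
# same angle per pixel as a 2 ft 1080p monitor seen from 2 ft away.
board_room = pixel_arcminutes(15, 1920, 15)
desk = pixel_arcminutes(2, 1920, 2)
```

        Both come out to just under 2 arcminutes per pixel, so extra resolution pays off only when the screen-size-to-distance ratio actually changes.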

      2. Geebs says:

        I read a lot of badly-digitised academic papers for work. Compared to my 4K 27” monitor, the text rendering on my equally sized 1440p monitor is much harder on the eyes.

        4K monitors also work really nicely for 1080p gaming, so I save money on GPU upgrades as well!

      3. INH5 says:

        A lot of big-budget Hollywood movies are mastered in 2K, which is just ~8% larger than 1920x1080p, then projected onto theater screens at 2K without anyone noticing or complaining. When these same films are released on “Ultra-HD Blu-Ray,” the footage is just upscaled to 4K, because uncomfortably close viewing distances are required to actually tell the difference between HD and UHD.

        But all of this seems kind of academic when it is 12 years after Blu-Ray came onto the scene, basic BD players cost less than $100, and “movie impulse buy racks” near the aisles of various stores still almost entirely stock DVDs and Blu-Ray+DVD combo packs, with smaller stores like the drug store nearest to my house only stocking DVDs in those racks. A lot of general consumers simply do not care about video quality.

      4. Mephane says:

        But really, what these giant resolutions are for is making big monitors, TV, and corporate tech applications look nice.

        Yes and no. It’s true that they make a big difference on huge screens, projectors etc. However, these high resolutions also shine (maybe even outshine) at normal screen sizes, e.g. 24”. The endgame here is when the human eye cannot distinguish individual pixels any more, and techniques like antialiasing become obsolete. 4K is a huge step towards that, albeit probably not the final one.

  5. Chris says:

    I’m no coder/engineer, but I think a core issue is that the teams have become bigger. For Wolfenstein 3D, Doom, and Quake it was all just Carmack. I don’t think a modern engine could be put together by one person. If you have a larger team and a bigger code base, I can imagine you’re spending more time figuring out how everything interacts than hammering away at new features. I’ve read Masters of Doom and there you could already see the problem emerging. For Doom, Carmack hammered out an engine in a few months and they could spend a couple of months making maps and tools for the engine. For Quake, however, Carmack spent close to a year just getting 3D to work, so by the time they were going to make a game they just wanted to get something out.

    Also in the article there’s a typo “there are lots of ideas out there Dthat are too unconventional to build a AAA game around.” and Dthat should be that.

    1. CloverMan-88 says:

      There is a programming saying: “If you have a task that takes a single programmer a month, it will take two programmers two months”

  6. MelfinatheBlue says:

    If you write about Prey, playing Mooncrash is totally work! I’d actually love to see an article or even a filler post about it (I can’t remember if you’ve talked about it or someone on YouTube did, but it sounds fascinating).

    Given that I do all my gaming on a Lenovo Y510P that’s what, 4? now, I sincerely hope it’s stalled a bit. At this point I’m upgrading only when I HAVE to (the last machine was replaced because it started overheating any time I played anything (wow and lotro, baldur’s gate, anything really)), and I really give no fs about seeing an explosion reflection in a NPC’s eye. I’m sure it looks amazing on a big screen, but laptop! ESO’s stunning to me, and all I really want is a longer draw distance with less pop-in (sounds like Fallout 76 screwed the pooch on that one).

    It might be that the newest game I’ve played is ESO, and I’m playing on old hardware (both things will change soon as I got a PS4 with Spider-Man due to knowing this computer will bite it and I had enough Amazon points to make the PS4 like 60 bucks…), oh, and all the new games I see via YouTube so I’m guessing I’m not getting the full “awesome” effect, but yeah, I mostly see a difference between Assassin’s Creed 2 and Odyssey in that Odyssey is prettier (better color usage) and looks more like the real world in terms of vegetation and color and people. To use a metaphor, the difference is like going between my 2002 fully-loaded CR-V and a 2005 fully-loaded Subaru Forester. The Forester’s nicer but they’re both good. Going to a new machine used to be like going between the 2002 CR-V and a 2015 fully-loaded Outback, a huge difference, and wow awesome, look at all the bells and whistles and it’s so pretty and I never want to drive anything less!

    (Feel free to fill in cars of choice, these are the ones we own or owned. Both Subarus have leather and heated seats and my CR-V is cloth and even new wasn’t quite as nice)

  7. Rod says:

    You could keep working on the article and release a patch next week!

  8. I’m kind of wondering if we’re about to enter an age where *single games* have longer and longer lives and game companies put out NEW games less often, instead doing bigger “expansion packs” for EXISTING games, keeping the same game alive and kicking for longer since there really isn’t a NEED to trot out a completely re-built core game every couple of years. Hell, my favorite game came out in 2006 and I still find the visuals extremely enjoyable. Yeah, it doesn’t have all the bling bling, but who cares? It’s pretty. I enjoy playing it. I take cool screenshots all the time.


    This model already exists for MMO’s and there isn’t really a reason why it shouldn’t exist for other games. Look at Skyrim. Is there any real REASON why Elder Scrolls 6 HAS to be a completely new and separate game instead of a Skyrim “expansion”? They’ve already made an Enhanced Edition to update the Skyrim graphics engine to pretty much match the Fallout 4 improvements. Think of the dev time and money it’d save if you could just re-use all of your existing assets instead of having to reinvent the wheel every few years.

    This kind of model would have some seriously interesting impact on games, because a “new game” in a franchise wouldn’t just be a new game, it’d potentially INCLUDE an upgrade/remaster of your EXISTING GAME in that franchise. So there’d be serious potential to leverage “brand loyalty” just on that front to get people to buy the “new game” (or “expansion” or whatever you want to call it) because even if the new game isn’t their favorite for the series, it updates their EXISTING favorite.

    This kind of model would also finally put to rest the bitching about DLC meaning that the company released an “unfinished game”. In this new model, there never would be any such thing as a “finished game”.

    Heck, this would make a good article for NEXT week, to finish out the series. Article 1, review the past. Article 2, talk about the present. Article 3, discuss future implications.

    1. John says:

      Paradox has been doing approximately this with games like Crusader Kings II and Europa Universalis IV for the last half-decade. People who play the games mostly don’t mind, but there’s still a lot of bitching about DLC and alleged unfinished-ness from other quarters.

      1. Chris says:

        CKII at launch was a huge game, and the DLC expands that already massive core in new and interesting ways. If the DLC was stuff like portrait packs or if you had to actually buy indulgences from the church with real money to get unexcommunicated, Paradox would get ripped to shreds by the playerbase.

        1. I just think that there’s a potential for a new model here that wouldn’t have worked in the “old days”. When what was *possible* in a game was blowing up every two years, yeah, you were pretty much locked in to the “upgrade or die” paradigm. But since that is slacking off, it doesn’t *just* mean changes in what constitutes value in a game (less focus on pure graphical awesomeness), but it can even mean changes in WHAT constitutes a “game” and WHAT they’re actually selling you as the product.

          1. Oh, btw, Shamus’s new novel is teh ossum. Recommend 1000%.

          2. Philadelphus says:

            I think John’s point is that such a “new model” is already being practiced in the real world by Paradox and has been since CK II came out in 2012. :) If this were another company we’d probably be up to CK IV by now.

            How much the wider games industry will emulate their lead is still anyone’s guess, of course.

            1. I was talking less about “is there some company, somewhere, that does this” . . . because there are several, and MMOs have ALWAYS done this. What I’m curious about is whether it might see BROAD adoption as a common paradigm for the industry.

              Paradox is cool, and all, but they’re NOT one of the giant publishers who drive the industry paradigm (whether that paradigm is intelligent or stupid).

              1. Gethsemani says:

                I think this is what the whole “Games as Service” idea could well become, if it isn’t mismanaged and doesn’t end up as an excuse to release unfinished games and charge a premium for meager content. Publishers have been catching on to the idea that players are more loyal to a really good game than to a brand of good games, and a lot of them are pushing it in various ways.

                Among the major players, we have Creative Assembly resuming support for Rome 2 after seeing it still going strong 5 years later, we have EA doing it with both The Sims 4 and Battlefront 2 (the latter getting all its updates for free, even), and Ubisoft is sort of trying it with Rainbow Six: Siege and did it for a while with The Division (though a sequel was probably a good choice there, considering The Division’s bad rep). Ubi is also sort of pushing it with AC: Odyssey, with its weekly and daily quests and timed events.

                1. Also, it’s not ENTIRELY a new thing, I mean, the two Neverwinter Nights games kinda did this, with new campaigns releasing well after the base campaign. They weren’t exactly DLC–we didn’t really have DLC per se back then because digital distribution wasn’t really a thing (although they had patch servers) so any new content that attached to the same game had to be big enough to fill a full-sized box, pretty much.

                  But both of those games were pretty much designed with the thought in mind that they were a platform for continuing content (company or player created).

                  And, of course, you know, MINECRAFT.

                  1. Oh, and notably Microsoft’s New Big Thing with their backwards compatibility and unified platform means that even CONSOLE games could start to adopt this paradigm. Certainly one of the limiting factors on this kind of thing has simply been that this TYPE of game would only really work on the PC for almost the entire history of gaming. And early in its lifespan the PC simply wasn’t a very accessible platform for gaming. It could do more with graphics, sure, but the sheer amount of dicking around you had to be willing to do to play games, and the expense, were pretty astronomical for people just looking to have fun who weren’t also ZOMG COMPUTER SO COOL.

                    Being a dedicated PC gamer all that time was weird because you were simultaneously kind of a snob with your superior graphics and complex games but also out of the loop with what most other people were playing.

                    With what’s been happening over this past year, it very much feels like after pretty much three decades of being the Weird Stuck-Up Kid the rest of the world suddenly discovered where I’ve been all this time and has now decided to beat a path to my door and shower me with presents.

            2. Chris says:

              CKII is also incredibly well-suited to that type of model. There are a million ways you can expand upon it fairly seamlessly without feeling like you stripmined the base game. It’s not the sort of title that would benefit much from a new graphics engine. It’s very long-form and kind of timeless.

              A slightly different take that is also interesting is what’s being done with the Total Warhammer series; they’re releasing full sequels, but then allowing integration with the previous games in subsequent patches, making the world progressively bigger. It’s somewhat similar to Paradox’s approach, but in larger chunks.

      2. Hector says:

        The complaints recently boiled over in the Europa Universalis Community. The long and short of it is that Paradox has been getting greedy with large amounts of poorly-integrated, low-quality content alongside their disgusting array of Horse-Armor-style junk. It got ugly, but Paradox frankly deserved everything they got and more. It’s been pretty obvious for some time that they were treating EU4 as a cash cow while pouring resources and talent into other games.

        Edit: The point is that it’s possible, but you do have to treat your community with respect. And also, it may not actually be financially better for the players. Sure, they don’t have to buy EU5 or whatever – but the outlay even for the major DLC, let alone the Content Packs (which do have mechanics), the obnoxious graphics upgrades, and other add-ons is more expensive to the customer than a whole new game.

        1. Liessa says:

          Another complaint I’ve heard – and one reason why I stay away from these games – is that Paradox release patches that add new features to the base game, but lock most of the functionality behind the expansions, meaning that people are basically forced to upgrade in order to play the game properly. That really disgusts me, and it’s yet another reason why I’m wary of Steam and its mandatory patch installs (as opposed to GOG, which lets you download standalone installers).

          1. 4th Dimension says:

            As far as I know (I’m not playing CK right now, but I do follow their dev diaries from time to time), every DLC expansion is accompanied by a free patch. The free patch WILL include some of the DLC content, so with every DLC the free community still gets an expanded game. The free patch is also there so people with different versions or different DLCs active can still multiplay together. DLC content, on the other hand, is never really something groundbreaking and required “to play the game properly”. It’s things like the mechanics of the Muslim rulers and unlocking them for play. You can still spend HUNDREDS of hours playing in Medieval Europe and Eastern Rome and not even notice that something is locked. But if you want to play with the new factions, essentially you need to pony up. Or it could be a side thing that applies to all, like Monks and Mystics: it concerns itself with expanding and deepening the events dealing with faith, religion, legend and the like. A part of that will be available to free players, but a portion of the content will be locked. You can still play the base game fine, and won’t be hamstrung, but if you want to roleplay and deal in heresy and the like you might need this.

            Basically the idea of these is to add on deeper mechanics for things that might interest some but not all. So if you don’t care you just ignore it and won’t really notice it’s not there, but if you do care you can get deeper into it. It still won’t be revolutionary, but it can make some interesting stories.

            The key part is that DLCs absolutely aren’t necessary for the “core” and complete experience.

            And I’ll again mention that every patch doesn’t just patch in content that’s locked for you; a sizable chunk of the actual new content WILL be coming for free too. Plus the usual balancing, bugfixes and UI improvements, which are all free.

            PS: What CAN be a problem is that free patches CAN change the way the game is played for ALL players, and sometimes the change might not be for the better. BUT they have, I think, added a lot of customization options to the game start so you can disable entire mechanics that might annoy you. And if all else fails, previous versions can always be rolled back to in Steam.

  9. Zcact says:

    This is completely unrelated, but I’m an old-time reader from Kazakhstan, and ever since the DDoS (was it?), I haven’t been able to access the site without proxies.
    What’s up with that?

    1. Is it possible that some of the new security/hosting solutions/whatever Shamus implemented are disliked by your Kazakh ISP or even violate their rules in some way? A lot of governments have been making rules about what ISP’s can and cannot let through. I don’t know anything specific to Kazakhstan.

      1. Zcact says:

        I’m not sure, usually our ISPs just make a blanket redirect on IPs they don’t like (mostly illegal religious sects and the like), and that’s the end of it.
        Usually what I encounter is that some websites just reject connections from outside western countries, sometimes Russia, and almost always China, as a security feature. I guess the assumption is that visiting an English website from those countries is most likely going to be a bot, since nobody speaks English outside of western countries, am I right.

        Although, in the case of China, I feel it’s justified :^)

        1. 4th Dimension says:

          Erm. I’m pretty sure western sites don’t block anyone… well unless they might be fighting a DDoS so they block a slew of IPs. What’s more likely is that your ISP is dropping packets or returning packets claiming the server refused connection going to certain IPs.

    2. Shamus says:

      I have no idea why you’d be blocked. I don’t think it’s anything I can control on my end. I use CloudFlare now, and I’m sort of at the mercy of their rules.

      I could disable CloudFlare, but as the DDoS proved, my site isn’t robust enough to stand on its own.

      1. Zcact says:

        Oooh, Cloudflare.
        That explains it.
        I’ve read about Kazakh ISPs having to block websites that sell illegal substances (and probably some others), some of which used Cloudflare, so certain Cloudflare websites are now inaccessible because of IP-sharing mumbo jumbo or the like, and our ISPs being incompetent.

        Oh well, another day under an incompetent, heavy handed authoritarian state. I’ll keep using proxies, thanks for the response.

  10. Gurgl says:

    Let’s pick consoles since they have precise release dates: the Dreamcast was released in late 1998, meaning it has been more than 20 years since mainstream home hardware has been able to render high-quality graphics. Dated indeed, but not shocking or laughable.

    Youtube videos of emulated 6th-gen games played in 1080p, or even 2000-2005 PC games in 4K, are very informative in that regard. Titles like Headhunter, Kya Dark Lineage, Chains of Olympus, Metropolis Street Racer, the Dark Alliance family, all of them look very good and would run flawlessly on any 200-dollar potato PC. In fact, as evidenced by Android games, a lousy 100-dollar phone can natively run games that wouldn’t look out of place on a PS3, or at least late-era PS2.

    Thus, theoretically, the market is huge for full-featured games with 6th-gen graphics: shameless clones of the AAA heroes of the early 2000s that could be made post-2010 with a much smaller budget… and yet that absolutely did not happen. It didn’t happen because EA and Ubisoft don’t watch forums and feel-good one-liners from Youtubers afraid of alienating their audience; they watch sales numbers.

    And what do the numbers say? They say that the market decisively voted against intermediate technology: anyone with enough budget goes for high quality (either realistic or stylized), while others pursue cartoon / pixel art, because anyone going the middle road is on food stamps the next month. Everyone adores such-and-such PS1 “legendary” game and such-and-such Gamecube “classic” and clamors that Call of Duty is braindead garbage and emblematic of everything that’s wrong with the industry… until it’s time to pick which they will play next.

    “Kids these days want their shiny graphics, but gameplay is where it’s at” might be a very popular upvote-fishing comment, but everything points to it being empty virtue-signaling. Most people who pat themselves on the back for being dismissive of kidz-these-days-and-their-graphics wouldn’t be caught dead playing the Dark Souls trilogy with Gothic 3-level visuals, even if it came with fully remappable keys, ten times the content and split-screen coop.

    1. INH5 says:

      I don’t know about that. A fair number of indie games have 3d graphics with similar levels of detail to 6th Generation AAA games. Sure, they have more polygons and higher texture resolutions than those games ever did, but limiting polygon counts and texture resolutions does not, by itself, actually save any money. Lower resolution graphics saved money back in the day by making it acceptable to not put in too much detail, but the same goal can be accomplished with, like you wrote, stylized art, which generally looks better.

      As for modern AAA games continually chasing shiny graphics, I suspect that that has more to do with them competing with the indie market by doing things that simply can’t be done on smaller budgets, and the expensive things that can be most easily marketed are the graphics and level of detail. The movie industry seems to be in a similar place nowadays, with Hollywood releasing almost nothing but large budget blockbuster spectacles in response to the flood of direct-to-streaming-video movies and shows.

    2. Mephane says:

      Personally, I can’t stand most of the graphics of that era any more. Textures are usually still too blurry and the models too rough. A big deal for me, personally, is that I want stuff to actually look like the material it is supposedly made of. Metal needs to look like actual metal, cloth like cloth, etc. For example, I can’t stand the graphics of WoW any more. So much stuff looks like papier mâché with the supposed detail painted on, especially stuff that is supposed to be metal.

      If you want to go low-fi, then imo it is better to take a true minimalist art-style approach: stuff like cel shading, or entirely untextured, solid-color models. Those look much better than yesterday’s attempts at photorealism that clearly aren’t photorealistic.

  11. Diego says:

    I suspect some AI-fueled technique is going to kick in sooner than we might expect.

  12. Richard says:

    VR is the place where the graphics are still rapidly improving.

    It turns out that when you strap two monitors to your face, you need far higher framerates and lower latencies than when the screen isn’t attached to you.

    In the last five years we’ve gone from VR being a bad joke, to there being multiple room-scale and seated solutions that actually work – at the fidelity of a PS2 or so.

    To get to the room-scale VR at PS2 quality we needed both way better hardware and brand-new APIs to talk to it fast enough – hence DirectX 12 and Vulkan.

    (And of course Apple decided that they hadn’t screwed over their desktop users sufficiently, and introduced their own incompatible API)

    The other Great Leap Forward has been energy-efficiency.
    You can now run a near-top-of-the-line desktop GPU in a 250W power budget, and a GPU as good as the top-of-the-line from ten years ago on your phone.
    That’s truly awesome.

  13. CPUs seem to have stagnated; look at how AMD has caught up to Intel and how Intel is struggling with their production.

    The same might happen with Nvidia. Sure, AMD is behind in the enthusiast tier of cards, but Nvidia’s RTX offerings tank performance by 50% when raytracing is enabled.
    Sure, optimizations were recently made so raytracing performs 50% better, which means raytracing only costs you 25% performance when enabled.

    Also note that the raytracing is done at a reduced resolution (i.e. if the rasterizer is 1920×1080 then the raytracing may be at 1280×720 for example), and note that the recent optimization seems to have reduced the frame rate of raytracing.
    So the raytracing may be at 720p30 while the rasterizing is at 1080p60.
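    The arithmetic in those percentages can be sanity-checked quickly. A minimal sketch, using the assumed numbers from this comment (60fps baseline, 50% initial cost, 50% later speedup) rather than measured benchmarks:

```python
# Sanity-check of the raytracing performance figures above.
# All numbers are assumptions from the comment, not measurements.
base_fps = 60.0

# Enabling raytracing initially cost ~50% of performance.
rt_fps = base_fps * 0.5            # 30 fps

# The later optimization made raytracing ~50% faster...
rt_fps_optimized = rt_fps * 1.5    # 45 fps

# ...so the net cost of enabling raytracing drops to ~25%.
net_cost = 1.0 - rt_fps_optimized / base_fps

# The reduced raytracing resolution also shrinks the ray budget:
# 1280x720 has well under half the pixels of 1920x1080.
pixel_ratio = (1280 * 720) / (1920 * 1080)

print(net_cost)               # 0.25
print(round(pixel_ratio, 2))  # 0.44
```

    The pixel ratio also shows why tracing at 720p instead of 1080p is such a big saving: fewer than half the rays per frame.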

    And only the RTX 2080 and RTX 2080 Ti (and the upcoming RTX Titan) are able to do 1440p at 60fps+, if I recall correctly, with settings at high and ultra. If you have an RTX 2070 or less you won’t reach 60FPS.

    And if my mind does not deceive me, Nvidia’s RT (raytracing) cores are basically Tensor cores that are reserved just for raytracing.
    Tensor cores are basically FP16 (16-bit floating point) units, and AMD’s cards are much better than Nvidia’s when it comes to FP16 and FP32.

    If AMD is able to creep up closer to Nvidia’s top cards in clock speeds (and looking at the upcoming rumoured 3000 series Ryzen, it looks like they are able to do that with the 7nm TSMC manufacturing process), then AMD just needs to add more cores to their GPUs.

    Sure, Nvidia will answer back, but they may not get the same leap in tech as AMD has. Intel and Nvidia have mostly been making smaller, gradual, incremental improvements. Now AMD has jumped forward a lot with their CPUs. I’m guessing they will with their GPUs too.

    We’re heading towards a performance wall. Things can’t go much faster in clock speeds, which means you gotta go wider. AMD’s chiplets are the answer, and Intel and Nvidia will probably go this way with their future products too.

    The next die shrinks are 5nm, and possibly 3nm, though 1nm could be achievable too. However, the yield and stability of the chips can be an issue (you really don’t want electrons hopping across traces). Production costs will also be insanely high, and chips will possibly be too expensive for consumers.

    The AMD chiplet design is how AMD makes it possible to mix 7nm and 14nm to reduce cost but still fit more cores in: one larger control/center chip with chiplets around it.

    I’m not saying CPUs and GPUs can’t get smaller in the future, just that silicon is reaching a practical limit. A different material and design method is needed; graphene is being looked into, for example.

    I’m kinda happy things are flattening out this way. If speeds stagnate and more cores become a focus, then multi-threaded programming will get more attention. Instead of a game maxing out 1 or 2 cores, the load will be spread out across 6-8 cores without fully maxing them out, thus getting rid of one source of microstuttering and thermal throttling.

    Thanks to AMD’s Ryzen, Intel has started to up the core count. Now if only Intel and Nvidia would drop the prices on their entire lineup, which would benefit everyone who uses computers, as upgrading would be more affordable and the entire PC “market” could take one step up on the performance stairway (maybe folks will upgrade that 5-10 year old PC?).

    And it looks like AMD’s APUs (based on Zen 2 (the Ryzen 3000 CPU series) and Navi (the successor to the Vega GPUs)) will be in the next Playstation and Xbox, which is great news for AMD, as they need that R&D and production money to keep the pressure on Intel and Nvidia.

    Technologically speaking things are really going to be interesting in 2019 for consumers.

  14. Shamus, you mention Grand Theft Auto V in your Escapist article as an example of rendering plateauing. I think you are right.

    A GTA VI will probably not look much different. The performance may be better, the LOD better, the AA better, the shading/shadows/lighting etc., but nothing groundbreaking. RDR2 looks awesome, but it’s not something you’ve never seen before. It’s just really well made. I don’t think animated horse testicles that change size based on how warm or cold it is qualify as technological progress in game technology. It’s amazing they are able to do that without tanking the FPS though.

    Heck, when Nvidia announced RTX and showed raytracing, what they could do with it looked really good. But developers had been faking that stuff really well for years (take Hitman 2018’s window/mirror reflections, for example).
    Nvidia is either too early (the performance isn’t there yet) or too late (it should have been invented/launched years ago; then again, Nvidia probably started the engineering work 4-5 years ago, so maybe they couldn’t have been earlier).

    In high-speed game action raytracing makes no sense: it tanks your performance and you don’t have time to enjoy it. And when you could enjoy it, you could fake it using other methods (that work on all hardware) that look almost as good.

    Graphics cards have gotten to the point that you can render the game world twice (one mirrored) to fake mirrors and reflections. The raytracing stuff is awesome for CGI production though as previews or full renders can be done much faster. But for gaming…

    Myself, I kinda like the look of GTA V and Fallout 4, as an example. It’s not too artificial looking, but not too realistic either. They kinda lie in a comfortable spot in the uncanny valley where the look “kinda works”, especially combined with a nice artstyle.

    God of War doesn’t look any more realistic either. It’s got a certain style to its look (it won best game at The Game Awards, btw; I guess that showed the people who said single-player games are dead!).
    I’m not sure a more realistic look would make God of War better.

    Kingdom Come: Deliverance and Battlefront (the EA ones) use photogrammetry (not sure about KCD, but I’m guessing they do).
    At times the lighting makes them look “photorealistic”, but then the characters kinda break that, or some elements aren’t as “real”. Not enough to ruin it, but you can tell that some things are struggling a bit.

    What needs to improve more than looks, IMO, is animations. It doesn’t help if things “look” real if they behave like NPCs rather than characters living in that world, and this requires AI, more threads, and thus more CPU cores (maybe assisted by GPU cores too).

    And maybe some breakthrough is needed in the way textures are stored; games are getting ridiculously large now (100GB+), but SSDs are still small (relative to HDDs), and many people also suffer from data caps or slow internet connections.
    For audio, things are kinda solved: the Opus audio codec reaches transparency around 96kbps, and depending on the audio and how much it will be reprocessed, a bitrate from 96 to 224kbps will be really good.
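    A back-of-the-envelope sketch of what those bitrates mean for download size (the 3-hour audio total is a made-up figure, purely for illustration):

```python
# Rough audio sizes at the bitrates mentioned above.
# Assumed: a game with 3 hours of music/dialogue in total.
seconds = 3 * 60 * 60  # 10800 s

def size_mb(kbps: float, seconds: int) -> float:
    """Size in megabytes of a stream at `kbps` kilobits per second."""
    return kbps * 1000 / 8 * seconds / 1e6

print(round(size_mb(96, seconds)))    # 130  -- Opus near transparency
print(round(size_mb(224, seconds)))   # 302  -- comfortable high end
print(round(size_mb(1411, seconds)))  # 1905 -- uncompressed CD-quality PCM
```

    Even at the high end of that range, the compressed audio is a small fraction of uncompressed PCM, which is why audio is "kinda solved" compared to textures.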

    Not sure what to do with textures. Sure, there is PNG (lossless) and WebP (lossless or lossy textures with an optional lossless or lossy alpha channel), but there is the recent AV1 video codec, and an image variant of that is planned, so perhaps that can help shrink textures.

    Though using base textures with procedural methods to generate complex textures at load time may be a way to reduce game size.
    A lot of games are also really crappy at using compression. Just applying zlib to everything would add minimal overhead but could potentially reduce sizes a lot for many games.
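    The zlib idea is easy to demonstrate with Python’s standard library. A minimal sketch with synthetic data; real game assets will compress by varying amounts:

```python
import zlib

# Synthetic stand-in for a repetitive game asset, e.g. a texture
# with large flat-color regions (hypothetical data, for illustration).
asset = (b"\x10\x20\x30\x40" * 4096) + bytes(range(256)) * 64

compressed = zlib.compress(asset, level=6)
restored = zlib.decompress(compressed)

assert restored == asset             # the round trip is lossless
print(len(compressed) < len(asset))  # True: repetitive data shrinks a lot
```

    The compression is lossless and the CPU overhead at load time is small, which is the trade-off the comment is pointing at.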

  15. Anybody else excited for this?
    BTW! Tim Cain also worked on Fallout: New Vegas (the article doesn’t mention this), IIRC.

    It’s out in 2019
    It’s an Unreal Engine 4 game
    Your protagonist isn’t voiced
    There’s a special class of “science weapons” that will have special, ridiculous effects, like a shrink ray
    There’s a full character creator even though it’s first-person only (you’ll see your character in the inventory, and if you leave the game idling long enough)
    Your companions don’t have separate inventories. Taking companions with you just gives you more inventory space to work with yourself
    If companions really dislike the decisions you make, they’ll leave and go back to the ship. You can persuade them to see things your way
    No romancing companions. They considered it, but decided against it.
    Companions each have a special attack (one named Felix does a double drop kick) but you can also equip them with whatever weapons you want
    Hacking and lockpicking don’t have minigames, and are simply based on your attributes
    There are six skills (strength, intelligence etc.) and for every 20 points you put into one (up until 100) you’ll gain a new perk
    As in the creators’ past games, you can play as a “dumb” character with stupid dialogue options. Your companions react appropriately.
    They’re still not sure if it will be possible to play through the game completely pacifist (but you’ll almost definitely have to at least kill some robots)
    Robots aren’t sentient, but your ship’s AI seems to have a strange degree of personality
    Tim Cain wants you to know there are a lot of drugs, but he’s not going to pressure you to take them
