Graphics Hardware is Killing PC Games

By Shamus
on Feb 22, 2007
Filed under: Video Games

“No you fools! You’ll destroy us all!”

That was my reaction to this story at Ars Technica (via), which talks about new “external” graphics cards. The idea is that users can buy lots of them and stack them high and wide and set up fancy cooling schemes that would not be practical within the confines of the average computer case. I can only conclude that this is some sort of sick scheme to eliminate PC gaming forever.

People made a big deal about the PS3 “sticker shock”. You know, because the complete game system, including controllers and the blu-ray transmorg-matrix, cost $600.
Don’t get me wrong, I like getting fancy new hardware, as budget allows. This would be a nice development if it were something just for framerate junkies, but the way things work right now is that expensive new technology ends up appearing on the side of PC games under the words Minimum System Requirements about three weeks after it gets invented. ATI could come up with a graphics card that costs $10,000 and needs to be continuously submerged in liquid nitrogen, and idiot developers would build their next-gen engine on top of it. Advances like this are things that hardcore gamers should be doing to get ahead, not things that average gamers should be doing just to keep up. Sadly, I’m sure that’s where this is going. The only thing more horrifying than seeing a PC game which requires a $500 graphics card is one that requires several of them.

And even if you do pour all that money into your PC, odds are the games will suck anyway, and run like a sick turtle. On an uphill grade. Against the wind. While, like, pulling some heavy stuff or something. You know: Slow.

Say you’re a coder working on a game for the PC. NVIDIA hands you one of their latest cards, which can do some new rendering feature. Let’s call this new feature “bling-mapping”. The NVIDIA SDK comes with a demo showing off what bling-mapping looks like and how it works. It’s pretty sweet. We HAVE to put this in our upcoming game!

You know in eighteen months “everyone” will have one of these cards. So you add bling-mapping to your graphics engine. Sadly, this is not as easy as dropping it into place and walking away.

Sure, it makes polygons look nicer, but it makes them take longer to render. Is it worth it for little polygons in the distance? Where is the point at which the feature is just slowing things down and not adding to the game visually? A meter from the camera? Ten meters? A hundred? You need to figure this out. Oh, and this distance probably varies based on resolution, so at 800×600 the cutoff is N meters but at 1024×768 the cutoff is (maybe) N*2 meters. You’ll need to work out how this scale works so you know how far away a polygon needs to be before you can safely disable bling-mapping for it.
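To make that concrete, here is a minimal sketch of what the cutoff logic might look like. Since bling-mapping is imaginary, every name and constant below is invented for illustration; finding the real numbers is the weeks-long part.

```cpp
#include <iostream>

// Invented baseline: the distance where the effect stops being worth
// the cost, tuned by eye at one reference resolution.
const float kBaseCutoffMeters = 25.0f;  // hypothetical, "tuned" at 800x600
const float kBaseWidthPixels  = 800.0f;

// Assume the cutoff scales roughly with horizontal resolution, so at
// higher resolutions the effect stays enabled farther from the camera.
bool ShouldBlingMap(float distanceMeters, float screenWidthPixels)
{
    float cutoff = kBaseCutoffMeters * (screenWidthPixels / kBaseWidthPixels);
    return distanceMeters < cutoff;
}

int main()
{
    std::cout << ShouldBlingMap(30.0f,  800.0f) << "\n";  // 0: past the 25m cutoff
    std::cout << ShouldBlingMap(30.0f, 1024.0f) << "\n";  // 1: cutoff grew to 32m
}
```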

Looking at the complex calculations that are needed to “bling” a polygon, are there any approximations or shortcuts that will be (say) twice as fast but look nearly the same? Perhaps there is a shortcut that will make all bling-mapped objects render faster, but it causes very ugly distortions and artifacts up close. Perhaps there is another, different optimization that only works well at a distance, and perhaps these two optimizations can’t be combined. Figuring out how to use them properly, and when to use one and when to use the other, is no small task.
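Here is one way those two incompatible shortcuts might be reconciled, sketched as distance bands. The thresholds are placeholders for numbers someone would have to calibrate by hand, per scene and per resolution.

```cpp
// Pick between the imaginary shortcuts: one distorts badly up close,
// the other only holds up at a distance, and they can't be combined.
// All thresholds are invented for illustration.
enum class BlingPath {
    Exact,      // full-cost bling-mapping, used where artifacts would show
    CheapNear,  // the faster shortcut, kept far enough away to hide its distortions
    CheapFar    // the approximation that only looks right at a distance
};

BlingPath ChooseBlingPath(float distanceMeters)
{
    if (distanceMeters < 8.0f)   return BlingPath::Exact;
    if (distanceMeters < 50.0f)  return BlingPath::CheapNear;
    return BlingPath::CheapFar;
}
```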

What about transparent polygons? Perhaps bling-mapping looks fantastic, but for textures with transparent areas (like grass or leaves) the effect is ten times slower to render. Is there a way around this? Maybe you should disable bling for these parts of the scene? Or is there some way your artists could build these items that would mitigate the problem? A sketch of the simplest workaround follows.
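The sketch below flags alpha-tested materials and quietly skips the effect for them. The Material struct and its fields are invented for the example.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical material record; in a real engine this would come
// from the asset pipeline, not be hard-coded.
struct Material {
    std::string name;
    bool alphaTested;  // cut-out transparency: grass, leaves, fences
    bool wantsBling;   // the artists asked for the effect
};

// If the effect really is ten times slower on cut-outs, just turn it
// off there and hope nobody misses the shiny foliage.
bool BlingEnabled(const Material& m)
{
    return m.wantsBling && !m.alphaTested;
}

int main()
{
    std::vector<Material> materials = {
        {"brick_wall", false, true},
        {"oak_leaves", true,  true},  // bling silently disabled here
    };
    for (const auto& m : materials)
        std::cout << m.name << ": bling=" << BlingEnabled(m) << "\n";
}
```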

What happens when you go to put a decal on the polygon? (A decal is another texture slapped over the surface, usually things like adding scorch marks or blood splats to walls, or cracks and bullet holes to a pane of glass.) Maybe bling-mapping and decals don’t look very nice together, or they cause really heavy slowdowns.

The variables are endless. There are many aspects to the scene that need to be considered. I’ve barely scratched the surface, really. This work will take months.

All done? Got all those tradeoffs worked out? Think you can render the scene with bling-mapping enabled and not waste too many GPU cycles? Great. Now go do it all again. ATI has a card that does the same thing, only slightly differently. It doesn’t have NVIDIA’s problems with transparent textures, but it ends up being really, really ugly with polygons that have certain shadow effects applied. So you’ll need to find some way around that.

Done? Great. By the way, NVIDIA just came out with a new card. It speeds up bling-mapping by 50% in certain cases, but only if you do this other optimization over here, which is incompatible with the optimizations you’ve already put in place and calibrated.
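By this point the engine is accumulating a table of per-vendor workarounds. Here is a sketch of that bookkeeping; the vendors are real, but every quirk flag and generation number below is this post’s hypothetical.

```cpp
#include <cstring>

// Capability flags for the imaginary bling-mapping quirks described
// above. Real engines of the era kept similar tables keyed on vendor
// IDs and driver versions.
struct GpuQuirks {
    bool slowAlphaBling;        // the NVIDIA transparency slowdown
    bool shadowBlingArtifacts;  // the ATI shadow-effect ugliness
    bool fastBlingPath;         // the newest card's conditional 50% speedup
};

GpuQuirks DetectQuirks(const char* vendor, int generation)
{
    GpuQuirks q = {false, false, false};
    if (std::strcmp(vendor, "NVIDIA") == 0) {
        q.slowAlphaBling = true;
        q.fastBlingPath  = (generation >= 8);  // made-up cutoff
    } else if (std::strcmp(vendor, "ATI") == 0) {
        q.shadowBlingArtifacts = true;
    }
    return q;
}
```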

Our game should be ready to ship by now. Aren’t you done yet? You were only working on one feature.

Obviously old cards won’t support bling-mapping. But since it is now an integral part of our render path, we must write an entirely different path that does all of the rendering without the aid of bling-mapping. Ah, screw it. We’ll just drop support for old cards. We’re already four months past our intended release date.
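That decision, in miniature. The capability flag and the compile-time switch are invented; the two outcomes are exactly the ones just described.

```cpp
#include <stdexcept>

enum class RenderPath { BlingMapped, Legacy };

RenderPath PickRenderPath(bool cardSupportsBling)
{
    if (cardSupportsBling)
        return RenderPath::BlingMapped;
#ifdef SHIP_LEGACY_PATH
    // A second, bling-free implementation of the whole renderer:
    // roughly another engine's worth of work to write and maintain.
    return RenderPath::Legacy;
#else
    // The "screw it" option: drop support for old cards entirely.
    throw std::runtime_error("unsupported graphics card");
#endif
}
```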

This is what the “Advanced Video Options” dialog looks like to a casual user. Similar to the mysterious devices in Myst, the user has no way of knowing what the controls will do without experimenting with them. Some sliders will do nothing. Some will make the game look like crap but do nothing for framerate. Some will cripple performance for little or no visual benefit. These controls are there because there are so many graphics cards and so many configurations and screen resolutions that nobody has time to wrap their head around it all. They have to depend on the end user to come in and experiment with the controls until it works right.
Picture the early PlayStation titles, and compare them to the PlayStation titles that came out near the end of the console’s lifespan. The latter ran smoother and looked far better, even though they ran on the exact same hardware. This is what you get when coders have a fixed configuration to deal with: they get good at using it.

What I outlined above isn’t really how things work. It would be great if a coder were free to work and optimize a particular feature for endless weeks or months, but this just isn’t practical. The coder has other work to do, and the rest of the team will need him to stop mucking with the engine so they can finish the rest of the game. The result is that by the time coders have come to grips with bling-mapping and have it working right, it will be phased out in favor of some other new feature that comes along. We’re re-inventing the wheel every eighteen months, and for the most part this means that all of our games are built on top of first-generation engines or even rough prototypes.

These graphics cards are getting faster and faster, but I’m confident that much of the additional speed is being consumed by sub-optimal code. As just one example, check this thread, where dozens of users with dual-core machines, 2GB of RAM, and one or more high-end graphics cards all gather to complain about slow framerates. Let’s put this in perspective: if these guys had saved the money they put into their PCs, they would have enough cash to buy a PS3 three times over. Or they could buy seven Wiis. And yet they are still having stability, framerate, or esoteric driver issues.

Yes, bling-mapping is great. It makes the player say “wow”. But then they get over it and play the game. They will notice that it is choppy, buggy, has annoying visual glitches, or requires them to muck around with driver and DirectX versions.

It used to be that consoles were for the “serious gamers”. They were the ones who shelled out the big bucks for a special computer that just played games, while those of us of more humble means made do with our PCs, which weren’t as specialized but which we already owned. Now we’ve reached the point where PC games are less numerous, buggier, and require more expensive hardware. All of this, and the games run slower, too.

In the games store, PC games have been relegated to a small shelf at the back, like the porno rack at the bookstore. Yeah, we hate to waste shelf space on that stuff, but there are always a few freaks that like to come in and buy that sort of thing. Of the meager assortment of games they do bother to carry, a handful are probably venerable oldies like Starcraft, Diablo II, and their respective expansion packs.

This is a sad state of affairs. Somewhere in this ridiculous pageant the whole point is lost: Games are supposed to be fun. The main chunk of the blame falls on PC game developers, who insist on riding the bleeding edge instead of hanging back technology-wise and focusing on making something worth playing. Wasn’t that the whole point?

70 comments? This post wasn't even all that interesting.


  1. Silames says:

    Dude, that’s some ******* crap if the whole slowdown thing is true. My computer goes slow enough with the graphics cards it already has!

  2. “Let’s sue the electronic gaming industry for driving hardware requirements to such ridiculous heights, even for simple adventure games.” — Guybrush Threepwood.

    I’ve never understood the mentality of PC game developers when it comes to this stuff. That’s what has driven me primarily to console gaming.

    Why don’t they design specifically for average-quality systems? Those with lesser systems would be a little behind, but a relatively cheap upgrade would catch them up. On the other hand, those with more powerful systems would be ahead of the curve and wouldn’t have to worry about system requirements for quite some time. The widest range of potential buyers is reached, and customers aren’t ticked off by buggy, slow, resource-hogging games — or sticker-shocked by the price of constant upgrades.

    As you said, it’s supposed to be about making fun games and selling them to as many people as possible, not riding the bleeding edge of technology just for its own sake.

  3. MV says:

    I realize this post is mighty old, but even so it still sums up everything that is wrong with PC gaming. I refuse to do more than dip my toes into the realm of PC gaming, since if I wanted to play something made in the past six years I’d have to get a whole new computer. I have a decent graphics card but it’s not the best. I have a good processor but it’s a single-core. But since all PC games HAVE to be made for the newest $500 piece of hardware that comes out, it would cost a fortune just trying to keep up.

    Back in the day, computers could do things consoles could only dream of. When you were playing 16-bit games on your SNES, computer gamers were playing games with cushy graphics and fully voiced dialogue. Nowadays consoles can do pretty much the same things PC games can, so I find little reason to upgrade my PC. I’ll stick with my 360 and my Wii, thanks.

  4. […] Development Environment – I’m going to link to Shamus once again because he explains it better than I ever could. Console developers can rely on a stable environment that never changes. […]

  5. John Magnum says:

    “But since all PC games HAVE to be made for the newest $500 piece of hardware that comes out, it would cost a fortune just trying to keep up.”

    This is absolutely not the case. It hasn’t been the case for, like, four or five years. Since 2007, there have been a tiny handful of PC games that targeted the extreme high end of graphics hardware, and were barely playable with lower-end stuff. Crysis. Metro 2033. Crysis 2. Battlefield 3. That’s pretty much it.

    Indeed, the opposite is actually true. Games are developed based on the Xbox 360’s capabilities. While games still look marginally nicer than they did last year, the 360 is the main benchmark, and so performance requirements really haven’t gone up very severely. High-end graphics cards from five years ago play modern games, as do midrange graphics cards from two years ago. We’re at the point where Intel’s most recent integrated graphics can handle modern games at low resolutions and detail settings.

    Somewhat predictably, there is a contingent of hardcore graphics nuts who actually hate this state of affairs. They want graphics to be much more demanding, because they want games to look much nicer.

    I’d like games to look much nicer, because I spend a lot of time playing games and I like looking at extremely pretty things. I think there have been a couple factors causing an absence of quantum leaps. The big one almost certainly is the sheer cost of producing the sheer volume of art assets required to fill a world with appropriate detail. But I think the near-universal focus on building PC games around the Xbox 360 specifications has also contributed.

    P.S. At this point, the 360 seems to constrain design more from RAM than from GPU power. See stuff like how New Vegas handles Vegas. That tends to be less bypassable in the porting process, since you can always layer on higher resolutions, faster framerates, and some special effects that cost GPU time but are “cheap” to implement from a programming perspective without doing much to the core geometry and textures.


