This video is mostly a more organized version of things I said throughout the last console generation. I guess I wanted to give the PS3 a proper send-off by gathering all these points into one video.
I had another whole section in here talking about multi-threading, but it made the video LONG and I don’t think it was needed. (Although at one point I say, “You’ve probably guessed that the big hurdle here is the need for lots of parallel coding.” It doesn’t make much sense to expect non-coders to “guess” anything without the long, rambling section on multi-threading.)
Another thing I didn’t get into is the idea of processing latency. As I read it, your program gives tasks to the PPE and it delegates them to the SPEs. If your program is really parallel-friendly then the PPE ought to be able to keep all the little SPEs busy all the time. However, this introduces a middleman to the process. Even assuming you can keep all the little sub-processors busy, you’ve still got this annoying lag.
Let’s say you’re trying to send a package. One courier uses a bike. He can take one box across the city in a day. The other outfit has a van that can take ten packages across the city, but they spend some time sorting packages at the distribution center or whatever. They deliver in two days.
The van has higher throughput: Ten packages in two days instead of one package in one day. But sometimes you don’t want to wait two freakin’ days. So arguments over which one is “faster” are kind of confused. It sort of depends on what you’re trying to send and when you need it.
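If you want to see those two measurements side by side, here’s a tiny sketch with the courier numbers plugged in. (This is just the analogy above turned into arithmetic, nothing more.)

```cpp
// Toy arithmetic for the courier analogy. Throughput and latency are two
// different measurements, and "faster" depends on which one you care about.
#include <cstdio>

int main() {
    // Bike courier: one package per trip, one day per trip.
    double bike_throughput = 1.0 / 1.0;   // packages delivered per day
    double bike_latency    = 1.0;         // days before YOUR package arrives

    // Van: ten packages per trip, two days per trip (sorting included).
    double van_throughput  = 10.0 / 2.0;  // packages delivered per day
    double van_latency     = 2.0;         // days before YOUR package arrives

    std::printf("bike: %.1f packages/day, %.1f day(s) of latency\n",
                bike_throughput, bike_latency);
    std::printf("van:  %.1f packages/day, %.1f day(s) of latency\n",
                van_throughput, van_latency);
    // The van wins on throughput (5x the bike), the bike wins on latency
    // (half the wait). Neither number alone tells you which one is "faster".
    return 0;
}
```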
More importantly, processing latency is really bad for games. It’s fine for rendering farms and crunching primes, but in a videogame latency matters. I don’t care if a game has mega graphics at 200 frames a second, if it takes five seconds between the point where I hit the jump button and the moment when I see my character jump on screen, then this computer is not useful for gaming. Obviously the PS3 wasn’t that bad, but the processing latency is yet another unwanted thing coders might have to worry about. I didn’t bring this up in the video because I have no idea how bad this problem was.
I am really glad these next-gen machines have settled into more or less standard quasi-PC hardware. This ought to make them nice and cheap over time, and should make cross-platform work less of a headache.
So the next console generation is now this console generation, and the Playstation 3 and Xbox 360 are behind us. I’ve been in the awkward position of being a Sony fan but a Playstation 3 critic. Lots of people have criticized the machine as a bad design, but I think it was the result of something far worse than just bad engineering. I think the machine was actually the result of corporate scheming gone horribly wrong.
But to talk about that, we need to talk about programming.
Part 1: Programming
Programming is a strange discipline. We’re very quickly running into the limits of what we can do with our intellect. Sure, lots of things in the universe are hard for us to figure out, but programming is one of the few places where we make MORE things we can’t figure out. Composers don’t accidentally write music that’s too complex to be played, and engineers don’t routinely design bridges so complex that nobody can build or drive over them. It’s one thing if you can’t understand quantum mechanics, but it’s another if your coding team can’t understand the code they wrote six months ago.
Countless books, classes, and even entire programming languages have been created to help us cope with the problem of runaway complexity. In movies they always show us programmers furiously typing away at high speed, writing thousands of lines of code an hour. In reality, any project beyond the prototype stage will have coders spending more time reading code than writing it.
Let’s say you want to hang some Christmas lights. But not just, like, one string of lights. Let’s say you want some Griswold-level monstrosity that warms the house and keeps the neighbors awake. It’s a major engineering project. You need to limit the amount of power you draw from each outlet to avoid a fire or a blown fuse. There’s a limit to how long chains can run and you need to hook it all up in a system that makes sense and conforms to whatever the specs are for this project.
One coder solves the problem with brute force. There are cords everywhere and the whole thing is a crazy fire hazard. The floors are thick with extension cords and tripping over a plug on the back porch ends up cutting power to lights in the front yard.
Another coder keeps it all clean and organized. The wires are labeled and color-coded with the banks they belong to on the fusebox. From the outside, both of these jobs look the same. It lights up and it seems to work okay. But you can see the difference when someone wants to change the blue string by the upstairs master bedroom or when someone smells hot wiring in the garage. The clean job is just a matter of reading the labels on the wires and tracing where they go, and the messy job is a ridiculous maze where it takes hours just to figure out where the problem is, and longer still to untangle things enough to solve it.
“So hire good coders!” you say. Well, we’ve been doing that. Or trying to. But there’s a finite supply of super geniuses in the world and not all of them want to spend their intellect on video games. And no matter how carefully you label things and how neatly you tie those cables, eventually projects will spiral out of your control. When your Christmas lights need to cover the neighborhood? Or the city? The county? Sooner or later you’ll hit a threshold where it’s impossible to get anything done because your coders spend all their time reading code and trying to find the one outlet where they should plug in their new code.
John Carmack, former lead programmer of id Software and one of the super-geniuses we were talking about a minute ago, has even said that computer graphics programming is more complex than rocket science. And to be fair, he might be the only person to have done both professionally. I’ll add to that by saying that the level of complexity increases with each new graphics generation.
And what if your development house can’t afford a John Carmack? What if you have to make do with rank-and-file mortals like me? You probably want to ship your game in the next couple of years and don’t want to lose it in development hell. Which means you don’t want to mess with anything that will make the code more complicated.
Part 2: Hardware
I swear we’re going to get to the Playstation eventually. But before we can do that we need to talk about how normal gaming machines are built.
First you’ve got your main processor. That’s a general-purpose processing unit. It handles the operating system, input, networking, all that good stuff. If we’re talking about a PC, then you can lump all these jobs together under the heading of “running Microsoft Windows”.
Somewhere else in the machine you’ve got your GPU. Your graphics processor. But unlike the general-purpose CPU that can do anything, the GPU is really stupid. It’s been specially designed to do one very simple job. Or at least, simple by the standards of computer processing. The only job it has is to turn polygons into pixels. That’s it. It just crunches numbers and renders graphics. The narrow focus of this kind of processing means the chip can do its one job really really efficiently.
So this has been AAA games programming for almost two decades now: A CPU, a GPU, and a pile of memory. PC, Mac, Xbox 360, Wii… doesn’t matter. This basic design premise permeates our development. Some machines have more GPU, some have more memory, sometimes there are other tradeoffs made during design, but the basic design idea still holds. Our graphics engines are built around this idea, our programmers are all familiar with it, and our games are optimized for it.
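If you want that picture in code form, here’s a bare-bones sketch of the frame loop that design implies. Every type and function here is a placeholder I made up for illustration; this isn’t any particular engine or graphics API:

```cpp
// A bare-bones sketch of the CPU/GPU division of labor. All the types and
// functions are made-up placeholders; there's no real engine or API involved.
#include <vector>

struct Entity   { float x, y, z; };   // general game state lives on the CPU side
struct Triangle { float verts[9]; };  // the narrow format the GPU consumes

// CPU work: the general-purpose stuff -- input, logic, physics, networking.
void update_simulation(std::vector<Entity>& world, float dt) {
    for (auto& e : world) e.x += dt;  // placeholder "game logic"
}

// CPU work: translate game state into polygons for the GPU.
std::vector<Triangle> build_draw_list(const std::vector<Entity>& world) {
    return std::vector<Triangle>(world.size());   // placeholder meshes
}

// GPU stand-in: in a real engine this is a driver call, and from here on the
// GPU's only job is turning those polygons into pixels.
void submit_to_gpu(const std::vector<Triangle>& draw_list) { (void)draw_list; }

int main() {
    std::vector<Entity> world(100);
    for (int frame = 0; frame < 3; ++frame) {     // a few trips around the loop
        update_simulation(world, 1.0f / 60.0f);
        submit_to_gpu(build_draw_list(world));
    }
    return 0;
}
```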
And then we have the Playstation 3.
Part 3: The Cell Architecture
The Playstation 3 uses the Cell Architecture. I’m not remotely an expert on this, and explaining this properly would get us deep into all sorts of side-discussions. But let me give a simplified view:
The Cell processor has eight (well, not really eight, it’s complicated, but let’s say eight) sub-processors, called Synergistic Processing Elements, or SPEs.
The SPEs are not as robust as a CPU but they’re more sophisticated than a GPU. They share what’s called a circular memory bus. You can’t get detailed specs on this unless you’re a Playstation developer, but from what I’ve read it almost sounds like they pass blocks of memory back and forth between them, like peers on a network. As a programmer who finds manual memory management challenging enough, I find the very idea of a circular memory bus fills me with a sense of panic. I’ve read the description several times now, and every time I come back to it I have more questions about how this is supposed to work.
Presiding over this madness is the Power Processing Element (PPE). It’s in charge of giving tasks to the SPEs, getting the results, and passing them back to your program.
The idea is that if you wanted to manufacture a more powerful machine, engineers could add more SPEs to the mix and the load would be distributed among them. If this sounds like a crazy and unwieldy system… well, it is. This is not a variation on the old computer paradigm. This is something new and strange. Processing clusters like the Cell are more commonly used for large-scale jobs: hunting for massive prime numbers, factoring huge integers, running enormous simulations, and powering render farms. Nobody had ever tried to use one for videogames before, and at first glance it seems like a bad fit.
Wikipedia says that, “The Cell architecture includes a memory coherence architecture that emphasizes efficiency/watt, prioritizes bandwidth over low latency, and favors peak computational throughput over simplicity of program code. For these reasons, Cell is widely regarded as a challenging environment for software development.” Those are not comforting words for an industry already struggling to write stable, comprehensible code on far simpler systems.
So let’s bring this back to the Playstation 3. I’m sure you’ve guessed that the big hurdle here is the need for lots of parallel coding. Games programming is complex enough as it is, and here we have a machine that demands a whole new level of parallel coding in order to use it properly.
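To give a rough feel for what “delegating tasks to the sub-processors” looks like, here’s a minimal sketch using ordinary C++ threads. To be clear, this is only an analogy for the dispatch pattern; it is not the actual Cell toolchain or anything Sony shipped:

```cpp
// A toy stand-in for the coordinator/worker split: one "PPE-like" thread
// carves the work into chunks and hands each chunk to a "SPE-like" worker.
// Plain C++ threads, used purely as an analogy -- not the Cell SDK.
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int kWorkers = 8;                       // stand-in for the eight SPEs
    std::vector<float> data(8000, 1.0f);          // some number-crunching work
    const std::size_t chunk = data.size() / kWorkers;

    // The coordinator's job: split the work and farm it out.
    std::vector<std::thread> workers;
    for (int w = 0; w < kWorkers; ++w) {
        workers.emplace_back([&data, chunk, w] {
            // Each worker's job: crunch its own slice and nothing else.
            for (std::size_t i = w * chunk; i < (w + 1) * chunk; ++i)
                data[i] = data[i] * 2.0f + 1.0f;
        });
    }
    for (auto& t : workers) t.join();             // gather the results

    std::printf("first element after processing: %f\n", data[0]);
    // This loop parallelizes trivially because each slice is independent.
    // The hard part of real games programming is restructuring AI, physics,
    // and game logic so they can be carved into independent chunks like this.
    return 0;
}
```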
Developers who wanted to make games for the upcoming Playstation 3 found themselves in this difficult position. They had to wrap their heads around this alien new system. At the same time Sony was hyping the device to consumers and talking about how much raw processing power it had. The devs were struggling just to do basic things in this strange new world, while consumers who had paid big money for their consoles had been told the machine was “practically a supercomputer!”
The “supercomputer” line is one of those claims that is incredibly deceptive while also being technically true. It’s like me saying, “In my garage I have twice the horsepower of a NASCAR vehicle!” But then it turns out what I’ve really got is a fleet of 14 Nissan LEAF cars. Yeah, it’s twice the horsepower, but you can see how the claim is kinda misleading.
Part 4: What Happened?
So what happened here? Did the Sony engineers go mad? Are they somehow brilliant enough to design something this complex but too dumb to see the inescapable development problems?
Well, no. I’m inclined to believe that the engineers built exactly the machine they were told to build. I don’t think the Cell was an engineering decision. I think it was a strategic one. I think this machine was hobbled and nearly ruined by corporate politics.
Remember that when the PS3 was designed, Sony was riding high on the glorious Playstation 2, which to this day remains the best selling console of all time. The library was massive and still growing. It had the most clout, the most exclusives, the most fans, and made the most money.
Sony’s only worry was that there was an upstart on the field. The launch of the Xbox was weak, but Microsoft is a scary opponent. Usually they win. (Windows.) Once in a while they lose. (Internet Explorer.) But they always play aggressively and they have a seemingly endless supply of money. You don’t want to sit across the table from Microsoft if you can avoid it.
This is conjecture on my part, but here is what I think happened: Sony decided that they wanted to smother the Xbox in its crib, before it could grow into a serious threat. Sony already had a huge market share, they just needed to lock it down and prevent Microsoft from taking anything away from them.
The Cell design would make the Playstation 3 very, very awkward territory for anyone porting games. Once you have your game optimized for the Cell, it will be very hard to port it TO another platform without going down into the guts of your engine and doing some major re-engineering. It’s like turning a NASCAR stock car into a semi truck. Even if they have about the same horsepower, you can’t make one into the other with a simple tune-up.
This would effectively put a wall around the Playstation 3. In a world where the Playstation 3 was the dominant platform, most developers would start there. And many wouldn’t want to go through the expense, hassle, and opportunity cost of porting to the PC or whatever the second Xbox console would be called. Sony would get lots of de facto exclusives without needing to negotiate and work for them, and the upstart Xbox would die before it got big enough to be a threat.
My thinking is that Sony didn’t choose the Cell architecture because they thought it was good for game development; they chose it because they thought it would be a good way to stonewall Microsoft.
It was a perfect plan, except it was ruined by another, mutually exclusive perfect plan, which was to use the Playstation 3 to cement Blu-Ray as the next-gen format of choice. Remember that at the time Blu-Ray was in competition with HD DVD, very similar to the VHS versus Betamax war of the 1980s. There were two standards, only one would survive, and whoever won would get to print money for the next couple of decades by being the gatekeeper on the dominant format. And a great way for Sony to make sure that there were lots of Blu-Ray players was to make it part of their games console.
But the Blu-Ray player made the Playstation 3 far more expensive than the alternatives, and that expense eroded the userbase that their first plan depended on. Then, in order to bring down the price, Sony dropped the backwards compatibility. This had the added effect of cutting the machine off from the largest and most impressive games library in the history of consoles, which meant that the PS3 needed to be able to survive on the strength of its launch titles. But because the Cell was such a pain in the ass to figure out, there wouldn’t be that many launch titles.
Early sales were low. The first round of games was late and failed to justify the huge price tag the device commanded. Yes, there are people out there who will pay $600 for a machine to play the latest Metal Gear. But there are not enough of those guys to support a multi-billion dollar console. This machine needed to go mainstream to be a success, and everything about it – from the huge price tag to the tiny library to the bulky hardware – kept this machine from having mainstream appeal.
Low sales meant fewer adopters. Fewer adopters, combined with the difficult development environment, meant that developers weren’t eager to make games for the Playstation 3.
The console was in a downward spiral. Developers targeted the Xbox 360 first, with plans to port later. And then they didn’t bother porting at all because the PS3 was a headache and the audience was so small. That wall that was supposed to keep developers in ended up keeping them out instead. Sony was undone by their own avarice. They tried to have everything, and nearly wound up with nothing.
This downward spiral could easily have killed the PS3. It’s hard to imagine what that would have done to Sony. A failure of that magnitude could have had repercussions far beyond the gaming world. It might have changed the outcome of the HD DVD vs. Blu-Ray competition. If the machine had flopped hard enough, it could have done to Sony what the Dreamcast did to Sega in the early 2000s, and knocked them out of the console market for good.
Luckily, Sony was rescued from disaster by the most unlikely party.
Part 5: Microsoft Saves the Day
Sony was saved from their foolishness by the more extreme foolishness of Microsoft. The Xbox 360 had shockingly high failure rates. Microsoft tried to downplay it, and in the 90s this might have worked, but with the help of the internet consumers were comparing notes and realizing that the machine had a lifespan barely long enough to push it past the warranty period. It was both a financial and a PR disaster for Microsoft, and their bungling and spin only made them look worse once the truth of the problem became inescapable.
This failure gave the Playstation 3 time to recover and carve out a viable portion of the market. Consumers might think that a $600 console with a tiny library is a terrible deal, but it’s still better than a $400 brick.
Like I said before, I don’t have any proof that Sony based their system on the Cell in an attempt to kill the nascent Xbox, but I find this explanation more plausible than “Sony engineers suddenly went mad”.
Sony did wring a partial victory out of this mess. Their Blu-Ray format became the industry standard. While that doesn’t mean much to games directly, it will provide Sony with a nice steady cashflow for years to come. On the other hand, their overreach probably gave the Xbox the foothold it needed. If the Playstation 3 had been a more conventional device with some backwards compatibility, then Microsoft might have been driven out of the market by the Xbox 360 Red Ring of Death disaster. With an affordable alternative and access to the massive PS2 library, this alternate universe PS3 would have been perfectly positioned to soak up all those disillusioned Xbox users. Microsoft might have been forced to back out of this crazy console dream and re-focus their attention on core products. You can only lose so many billions before the shareholders tell you it’s time to cash out while you’ve still got some chips left.
But maybe this is the best outcome in the long run. We’ve got the big three competing for our attention and dollars. I have much better hopes for this new generation of devices. The hardware makes sense, the prices feel right, the libraries are off to a good start, and there haven’t been any major failures or disasters.