So the next-gen consoles are out. Let’s talk about what we can do with all of that processing power without going broke making hyper-realistic graphics.
So that’s the article. Now let’s go on a tangent. At one point in the article I said…
If you’ve got even a mid-range computer with a decent graphics card, then your computer has more processing power than every computer that existed before the year I was born (1971). That’s including the supercomputers built by world governments and all the computers involved in sending humans to the moon.
How I arrived at this conclusion:
There’s Moore’s Law, colloquially stated as “computers get twice as fast every two years”. That’s not exactly what the man said, but it’s close enough for most discussions and is a good rule of thumb when measuring performance. He was actually talking about transistor densities, and while more transistors = more power, the relationship is not 1:1 and there’s plenty of room for haggling over what “performance” means.
In any case: It’s been 42 years since I was born, which means things have “doubled” 21 times. 2²¹ is 2,097,152, which means that your computer is supposedly that many times faster than a 1970 computer. Were there 2 million new computers built in 1970? I don’t think so. These were the days of “big iron”, when computers cost millions of dollars and were only owned by large entities. Sales were probably in the thousands or tens of thousands.
We can add in all the computers before that point, but we have to keep in mind that as we go backwards computers continue to get slower. So a computer from 1967 would only count as ¼ of a “1971 computer”, a 1965 computer would count as 1/8 and a 1959 computer would only be 1/64. Adding the computers this way, I think my original (admittedly hyperbolic) claim is true enough: The average desktop computer, if magically transported back to 1971, would be the envy of world governments and would be able to out-perform all the others combined.
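If you want to play with this claim yourself, here’s a quick sketch of the arithmetic. The yearly sales figure is pure guesswork on my part (the article only says “thousands or tens of thousands”), so treat the number as an assumption for illustration, not real data:

```python
# Back-of-envelope check of the "all computers before 1971" claim.
# The sales-per-year figure is an assumption, not real historical data.
DOUBLING_YEARS = 2  # Moore's-Law rule of thumb

# Assume a generous 20,000 computers sold worldwide per year, 1955-1970.
assumed_sales = {year: 20_000 for year in range(1955, 1971)}

# A computer from `year` counts as 2^((year - 1971) / 2) of a "1971 computer",
# so a 1967 machine is worth 1/4, a 1965 machine 1/8, and so on.
total_1971_equivalents = sum(
    count * 2 ** ((year - 1971) / DOUBLING_YEARS)
    for year, count in assumed_sales.items()
)

modern_speedup = 2 ** 21  # one modern desktop vs. one 1971 computer
print(f"All old computers, in 1971-units: {total_1971_equivalents:,.0f}")
print(f"One modern desktop, in 1971-units: {modern_speedup:,}")
```

Even with those generous sales numbers, the old machines add up to only a few tens of thousands of “1971 computers”, while a single modern desktop is worth two million of them.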
I tried to work out just how much more “powerful” that computer would be, but there are so many incomparables. Sure, clock speeds have gone up by 2,097,152, but that’s not the whole story. We also have L1 and L2 cache. Memory isn’t just faster, it’s also larger, increasing how much stuff you can work on before you are obliged to go to the hard drive. Hard drives are orders of magnitude faster, plus they usually have some level of cache of their own. What about the RIDICULOUS processing power of your graphics card? That’s a whole bunch of extra processors that aren’t even included in that 2,097,152 figure.
So the question of “how much faster is it” kind of depends on what you’re trying to do. If you’re doing something completely linear like looking for primes or calculating pi, then your computer will go roughly 2,097,152 times faster. But what if we’re doing something that requires a bunch of memory? What if we’re looking for patterns or repetition in pi? Once the number of digits goes above the memory limit of your typical 1970 computer the speed differential is going to skyrocket as the old computers have to write and read from disk. You could even say it will skyrocket yet again when we hit the limits of the hard drives of the day, since then you’d need to have a bunch of interns running around swapping reels of magnetic tape or whatever.
But even this comparison ignores the GPU power. What would the speed difference be if, instead of calculating pi, we were trying to run Borderlands at 1600×900? We’d have to hand-wave the fact that it’s physically impossible to run the game on those old machines. We could abstract it out and say “how long would it take a 1970 computer to render the typical Borderlands frame?”, which is a bit more comprehensible and lets us ignore stuff like OS, drivers, input devices, etc. Now the problem is simply a matter of reading in a fixed number of polygons from disk, processing them, and saving the resulting image out to disk again. (No point in trying to display the image on the monitors of the day. They don’t have the resolution or color depth.)
Going back to our 1:2,097,152 speed differential: If it takes us 1/30 of a second to render a frame of Borderlands then it will take the scientists of the past in the ballpark of 20 hours of raw processing to make it happen.
But wait! It’s worse!
That’s assuming it’s just one modern processor vs. a single 1970 processor, which is VERY much not the case. One modern processor is 2 million times faster than one from 1970, but your graphics card has, what? Dozens of cores? It depends, and I don’t know enough about the core counts in modern cards and how those counts have changed over time. In any case the graphics card is technically a whole pile of computers that have been perfectly optimized for this specific task.
Perusing Wikipedia, it looks like if I treat your typical GPU like 15 regular processors I can run these numbers without being accused of unfairly stacking the deck against the machines of the past. So we’ll think of your computer as 16 processors: your CPU plus a GPU that counts as 15 more. So your computer isn’t 2 million times faster at this task. It’s 33,554,432 times faster. The processing won’t take the old computer 20 hours, it will take 13 days.
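The “20 hours” and “13 days” figures fall out of the numbers above like so (all inputs are from the article; only the rounding is mine):

```python
# Frame-time arithmetic from the article's own figures.
SPEEDUP_ONE_CPU = 2 ** 21     # 2,097,152x: one modern CPU vs. one 1970 CPU
CORES = 16                    # the CPU plus a GPU counted as 15 more processors
frame_time_modern = 1 / 30    # seconds to render one Borderlands frame today

one_cpu_hours = frame_time_modern * SPEEDUP_ONE_CPU / 3600
all_cores_days = frame_time_modern * SPEEDUP_ONE_CPU * CORES / 86400

print(f"vs. one core:  {one_cpu_hours:.1f} hours")   # roughly 19 hours
print(f"vs. all cores: {all_cores_days:.1f} days")   # roughly 13 days
```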
But wait! It’s worse!
The “20 hours / 13 days” figure is only true if they have unlimited memory, which old timers will be happy to tell you was never the case. For the purposes of this comparison, let’s give the people of the past a break and let them use the processor and memory from the 1975 Altair 8800, with whatever industrial-grade hard drives were available at the time, if only because that makes things easier on me. We’ll assume they’re saddled with the throughput and seek times of the day, but their hard drives can be as big as they need to be so we don’t have to run the legs off our interns hauling truckloads of tape drives around.
The high-end Altair had 8k of memory. Your average texture map in Borderlands is probably 512×512 pixels, with each pixel requiring 4 bytes of memory. So when it comes time to render (say) Brick’s face we need to get 1,048,576 bytes of data into our 8,192 bytes of main memory. That’s obviously not going to fit. What we’d have to do is
~~give up~~ painstakingly read in the first 8,192 bytes of the texture, render as many pixels as we could with it, then completely purge main memory and read the next 8,192 bytes from disk. Repeat that 128 times. Awesome. That’s one polygon.
(We’re ignoring mip maps and antialiasing, which would make this task much harder. We’re also ignoring the fact that we’d have less than 8k to work with, because the rendering program itself would eat it all up.)
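The texture-streaming arithmetic, for the record (sizes as assumed above):

```python
# How many 8k memory-loads does a single 512x512 texture take?
TEXTURE_BYTES = 512 * 512 * 4   # 1,048,576 bytes at 4 bytes per pixel
ALTAIR_RAM = 8 * 1024           # 8,192 bytes of main memory

passes_per_polygon = TEXTURE_BYTES // ALTAIR_RAM
print(passes_per_polygon)  # 128 purge-and-reload cycles for ONE polygon
```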
But wait! It’s worse!
Where are we storing the final image? At 3 bytes per pixel, our 1600×900 frame is 4,320,000 bytes, which is over five hundred times larger than our main memory, which is already filled with input data anyway. Ignoring this (or spotting the old computer another 8k of memory just to be sporting) I suppose we’d have to read in the final image one block at a time, draw the polygons that touch that section of the image, then save it back to disk and load in the next block.
In this scenario, it hardly matters what the processor speeds are. Yes, it will take excruciatingly long for the old computer to transform and light those polygons, and longer still for it to calculate those color values. But that processing is a really trivial part of this project. Since we’re doing the final image a single 8k block at a time, the whole thing
~~is stupid and impossible~~ would need to be rendered about five hundred times. We’d load in a block of image, process everything, save that block out to disk, then load in the next block and repeat. And keep in mind that each round of processing requires us to process each polygon by drawing it a little bit at a time and then swapping in the next block of texture data. The 13 days of processing would need to be repeated hundreds of times.
We could eliminate bits of the “repeated hundreds of times” figure by throwing away polygons when they fall outside the current block we’re rendering, but it hardly matters. The processing is nothing compared to all that disk I/O. It’s hard to find data on how fast (or rather, how SLOW) drives were in 1970. Even this drive from 1991 looks pretty grim. The closest I could come is this chart, which only goes back to 1979 and suggests that a 1979 drive could move about half a megabyte a second, with a seek time (according to this Wikipedia article) of 100ms or so.
Every single read from disk is going to take ~115 milliseconds even with that 1979 drive, and we need to read 128 times. Which means we’re spending fifteen full seconds per polygon just on disk I/O. Plus the time it takes to do the calculations. Multiplied by the number of polygons in the scene. Multiplied by the number of passes it takes to render it in 8k blocks. Oh, and the polygons themselves don’t fit in memory either, so we’ll need to read those in and out of memory as well.
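Here’s where that per-polygon figure comes from, using the 1979 drive numbers above:

```python
# Disk I/O cost per polygon, using the assumed 1979 drive figures.
SEEK_S = 0.100                  # ~100 ms average seek time
THROUGHPUT_BPS = 500_000        # ~0.5 MB/s transfer rate
BLOCK = 8 * 1024                # one 8k memory-load of texture data
READS_PER_POLYGON = 128         # loads needed to stream one 512x512 texture

per_read = SEEK_S + BLOCK / THROUGHPUT_BPS    # seek plus transfer, ~116 ms
per_polygon = per_read * READS_PER_POLYGON    # ~15 seconds of pure disk I/O
print(f"{per_read * 1000:.0f} ms per read, {per_polygon:.1f} s per polygon")
```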
My calculator says the job would take just over 16 years. Computers would double in power eight times while the job was running. If they began the job in 1970, then sometime in March of 1986 the scientists could put down their Rubik’s Cubes and check out the completed image. I suppose they’d have to print it out, since computer displays still wouldn’t be advanced enough to display it.
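If you want to reconstruct that “16 years” figure: the per-polygon disk cost and the pass count come from the arithmetic in the article, but the polygons-per-frame number below is my own guess, since the article never pins it down. With a plausible guess the total lands in the same multi-year ballpark:

```python
# Rough reconstruction of the total render time. The polygon count is
# an assumption on my part; everything else follows the article's figures.
SECONDS_PER_POLYGON = 14.9          # disk I/O to stream one texture, per pass
PASSES = 4_320_000 // 8_192         # final image rendered one 8k block at a time
ASSUMED_POLYGONS = 60_000           # guess at a Borderlands frame's polygon count

total_seconds = SECONDS_PER_POLYGON * ASSUMED_POLYGONS * PASSES
years = total_seconds / (365.25 * 86400)
print(f"{years:.1f} years")
```

Culling polygons outside the current block (as mentioned above) would shave this down some, but not enough to save the scientists from their Rubik’s Cubes.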