Pseudoku is not going well. Part of the problem is that I’m busy with other stuff and I’m only working on this for a few hours a week. The more serious problem is that I’m still having strange compatibility problems that have no business cropping up in a project so simple. I’d be upset about this, but it’s not really hurting me right now. The project is stalled on the other end – the business end. I couldn’t possibly explain the whole stupid story here, but the short version is:
The problem isn’t that you can’t solve this. The problem is that there are a hundred apparent solutions. (Different banks. Different forms. Pay someone to do this for us.) Finding the one solution that will waste the least time and money is the problem.
Part of the confusion is that it’s really hard to parse some of these forms. Usually you can tell when you’ve got the wrong form. If you’re a single person and you’re filling out a form that asks about your spouse and children, then you can be reasonably confident you’re barking up the wrong tree and you need a different form. But in our case ALL of the forms feel wrong. Everything related to doing business was designed in the middle of the last century, back when opening a business meant you planned to do business locally. Everyone assumes I want to sell pancakes on main street. There’s no concept of a “one-man international business”. Sometimes it’s not possible to truthfully and accurately fill out a form because it asks questions that don’t make sense.
Imagine you’re opening a furniture store, but the business form has a REQUIRED field that wants to know the license plate numbers of the cars you’ll be using to deliver the pizzas. That’s the level of incoherent stupidity we’re dealing with right now.
In any event, the technology problems in Pseudoku don’t matter because the project is stalled by bureaucracy.
I got a cheap ($90) minimalist Win7 box for testing Pseudoku. It’s an HP with integrated graphics. It’s got a virgin install of Win7 and no additional funny business. Let’s put Pseudoku on it and see how it runs…
Here We Go Again
Like last time, the program instantly crashes when it tries to use any OpenGL calls invented after 1994.
This makes no sense. I know integrated graphics systems are wonky, but this stuff really should be available, even on a barebones system like this. I fiddle around with different ways of accessing those OpenGL extensions, and I always get the same result.
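For what it’s worth, the usual first step when an extension misbehaves is to check whether the driver even claims to support it: old-school OpenGL reports its extensions as one giant space-separated string from `glGetString(GL_EXTENSIONS)`, and you search it for the exact token. Here’s a minimal stand-alone version of that check (the function name is mine, and the sample strings in the usage below stand in for whatever the real driver returns):

```c
#include <string.h>

/* Return 1 if `name` appears as a complete space-delimited token in the
   extension string, 0 otherwise. A plain strstr() isn't enough, because
   "GL_EXT_framebuffer" would false-positive inside
   "GL_EXT_framebuffer_object". */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions) || (p[-1] == ' ');
        int ends   = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}
```

In the real program the first argument would be `(const char *)glGetString(GL_EXTENSIONS)`; if the token isn’t in that string, no amount of fiddling with function-pointer loading will conjure the extension into existence.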
Is this really a thing? Are there systems out there that support DirectX as it existed in 2010, but were stuck with the 1994 version of OpenGL? If not, what could I be doing wrong here? If so, why haven’t I heard about it and written a long rant about it yet? That’s kind of my thing.
Like I said in a previous entry: I’ve never done deployment so I never had to worry about goofy edge cases and obscure hardware setups. In any case, I am tired of slamming my head into this problem. It’s a dumb waste of time.
Looking at my code, do I really need the OpenGL extensions? This isn’t like Good Robot, where I needed to push potentially tens of thousands of polygons at 60 FPS. (Which isn’t actually a big deal on modern hardware, but it’s still a couple of orders of magnitude more challenging than what I’m trying to do with this puzzle game.) Let’s see if I can strip the program down to the base OpenGL calls.
As it turns out, I’m using exactly one OpenGL extension. Everything else can fall back to vanilla OpenGL. The only thing that requires the modern stuff is the…
The problem you’re trying to solve is that texture-switching takes time, and excess texture-switching can kill performance. Imagine the graphics card is a painter. I tell him to paint a blue line. Then I tell him to paint a green line. But since his brush is loaded with blue paint, he has to lower the brush, clean off the old paint, and load it up with the new color. Then I ask him to paint another blue line and he has to go through all of that again.
You can mitigate this by sorting all the brush strokes ahead of time. Draw all of the blue lines, then all of the green ones, etc. The problem is that now you’re doing sorting on the CPU. If you’re eating up cycles on your processor to save cycles on your graphics card, then that’s a red flag that you might be approaching the problem the wrong way around. Worse, this creates a moving bottleneck. A slow computer with a great graphics card will exhibit problems you don’t see on a fast computer with a middling graphics card.
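The “sort the brush strokes” mitigation is literally just sorting the frame’s draw list by texture before submitting it, so each texture gets bound once instead of over and over. A sketch of the idea (the `DrawCall` struct and function names are hypothetical, not from my code):

```c
#include <stdlib.h>

/* Hypothetical per-sprite draw record: which texture it uses, plus
   whatever geometry goes with it (elided here). */
typedef struct {
    int texture_id;
    /* ...vertex data would live here... */
} DrawCall;

/* qsort comparator: group calls that share a texture together. */
static int by_texture(const void *a, const void *b)
{
    const DrawCall *da = a, *db = b;
    return (da->texture_id > db->texture_id) - (da->texture_id < db->texture_id);
}

/* Sort the frame's draw list so each texture only needs to be bound once. */
void sort_draws(DrawCall *calls, size_t count)
{
    qsort(calls, count, sizeof(DrawCall), by_texture);
}
```

This is exactly the CPU work described above: every frame you pay for a sort just to spare the graphics card some state changes, which is why it’s a red flag.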
The better solution is to make a texture atlas. A texture atlas is when you take all the different textures you’re going to need in a scene and stick them into a single image. It’s like having a paintbrush already loaded with every color you’ll need, so you never have to clean off the brush and get a new color.
The downside is that a texture atlas is huge and the card might not support anything that large. But this is actually a good thing! It gives you a clear pass / fail. You know exactly how much graphics memory the user will need and you can state so in the system requirements for the game. This is far preferable to those weird-ass situations where a dozen computers will all have different performance problems and the bottlenecks aren’t always obvious.
“Your graphics card must have 3.2 megaboozles.”
Is far easier for Joe Consumer to understand than this one:
“Your graphics card needs 3 kilowappers, UNLESS your computer has less than 100 fizzlers, in which case you need 4.2 kilowappers, UNLESS you’re using the new Smeg class chips that support the next-gen shaders, in which case you can go all the way down to 2.5 kilowappers.”
Making a texture atlas takes all these complex variables regarding throughput and boils them down to the simple question of “Can this image fit in video memory?” It makes your engine simpler. All you have to do is place all of your textures into a single image.
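Once everything lives in one image, “finding” a sprite is just arithmetic: you know its pixel rectangle inside the atlas, and you divide by the atlas dimensions to get the 0-to-1 texture coordinates the renderer uses. And the pass/fail memory question is equally simple. A sketch (names are mine, not from the Pseudoku source):

```c
/* A sprite's pixel rectangle inside the atlas, converted to the 0..1
   texture coordinates the renderer actually feeds to GL. */
typedef struct { float u0, v0, u1, v1; } UVRect;

UVRect atlas_uvs(int x, int y, int w, int h, int atlas_w, int atlas_h)
{
    UVRect r;
    r.u0 = (float)x / atlas_w;
    r.v0 = (float)y / atlas_h;
    r.u1 = (float)(x + w) / atlas_w;
    r.v1 = (float)(y + h) / atlas_h;
    return r;
}

/* The clear pass/fail: a w-by-h RGBA atlas needs this many bytes of VRAM. */
long atlas_bytes(int w, int h)
{
    return (long)w * h * 4;
}
```

So a 1024×1024 RGBA atlas is an unambiguous 4MB requirement, which is the kind of number you can actually print on the box.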
In Good Robot, I did this manually:
That’s a portion of the Good Robot atlas. It was annoying to maintain. You had to manually arrange items on a grid, and if you were a pixel off in any direction then the resulting sprite would be clipped or have strange edges. Once you had the items placed, you had to tell the program how to find them using a system that was very convenient for the programmer but not convenient for the artist. That was fine when I was working on the game all by myself, but I felt bad for dumping that obtuse and inconvenient system on the artists.
So after Good Robot I added some code that would build the atlas dynamically at launch. The artist puts in their textures, and the game will arrange them into an atlas and work out how to find them. It’s a lot less work. It looks at the sizes of the textures and figures out how to pack them efficiently. It also works out a map so it can find the individual images later, because otherwise what’s the point?
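The “look at the sizes and pack them efficiently” part can be done with a very simple scheme: sort the sprites tallest-first, then fill the atlas in rows (“shelves”), starting a new row when the current one is full. This is NOT my actual packer, just a minimal stand-in that does the same job:

```c
#include <stdlib.h>

typedef struct {
    int w, h;   /* sprite size in pixels (input) */
    int x, y;   /* position assigned in the atlas (output) */
} Sprite;

static int by_height_desc(const void *a, const void *b)
{
    return ((const Sprite *)b)->h - ((const Sprite *)a)->h;
}

/* Shelf packing: returns 1 if everything fit inside atlas_w x atlas_h,
   0 otherwise. Sorting by height first keeps the rows from wasting
   space under short sprites. */
int pack_atlas(Sprite *sprites, int count, int atlas_w, int atlas_h)
{
    qsort(sprites, count, sizeof(Sprite), by_height_desc);
    int x = 0, y = 0, row_h = 0;
    for (int i = 0; i < count; i++) {
        if (x + sprites[i].w > atlas_w) {   /* row full: start a new shelf */
            x = 0;
            y += row_h;
            row_h = 0;
        }
        if (y + sprites[i].h > atlas_h || sprites[i].w > atlas_w)
            return 0;                        /* doesn't fit: clear fail */
        sprites[i].x = x;
        sprites[i].y = y;
        x += sprites[i].w;
        if (sprites[i].h > row_h)
            row_h = sprites[i].h;
    }
    return 1;
}
```

The assigned `(x, y)` positions double as the map for finding each image later, and the return value is the same clean pass/fail discussed above.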
This atlas building is currently the only part of Pseudoku that uses OpenGL extensions. I’m creating a blank atlas texture, then using a GL framebuffer object to render all the little sprites directly into the atlas.
I don’t have to do it that way. Rather than handing the job off to the graphics card, I can manually build the atlas by arranging the images in main memory. Instead of creating a blank texture, I create a blank expanse of memory and copy the texture data into it a block at a time. When I’m done, I hand the memory off to GL and tell it to make a texture out of it. This new way is not as compact and it’s probably slower by some trivial amount that doesn’t matter to humans. But it gets the job done. Here’s the auto-generated atlas:
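The “copy the texture data into it a block at a time” step is just a per-scanline memcpy into the big buffer. Here’s a sketch of that blit (function name is mine; the GL handoff at the end is shown as a comment since it needs a live context):

```c
#include <string.h>

/* Copy one w-by-h RGBA sprite into the atlas buffer at (x, y), one
   scanline at a time. `atlas` is atlas_w pixels wide, 4 bytes per pixel. */
void blit_rgba(unsigned char *atlas, int atlas_w,
               const unsigned char *src, int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++) {
        memcpy(atlas + ((long)(y + row) * atlas_w + x) * 4,
               src + (long)row * w * 4,
               (size_t)w * 4);
    }
}

/* Once every sprite is blitted, the whole buffer goes to GL in one call,
   using nothing newer than OpenGL 1.1:
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, atlas_w, atlas_h, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, atlas);                      */
```

No framebuffer objects, no extensions, nothing a 1994-vintage driver can object to.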
Getting rid of the framebuffer stuff FINALLY gets me past the crashes. Pseudoku now runs on the virgin machine.
And so at last Pseudoku runs! I mean, it was already working on 90% of the machines out there, but now it’s running on a virgin machine with no redistributable packages, updates, drivers, or anything else. If it runs on this thing, it will run almost anywhere.
It’s slow. I mean really, mindbogglingly slow. It is so slow it would be hilarious if it wasn’t so annoying. On the virgin machine it gets a frame every other second. That’s half a frame a second.
It takes the game two entire seconds to draw… how many polygons?
On the very first level there are six tiles, each of which is two quads. The slots where you place the tiles make another 6 quads. In the word “Pseudoku”, each letter is another quad. The mouse pointer and the glowing aura around it each count as another quad. The entire gradient background is one more. Then the six menu buttons at the bottom add another 12 quads. That means the entire scene is 41 quads.
“Well Shamus, everyone knows integrated graphics are terrible. You should expect poor performance.”
Let me see if I can put this into perspective. This is Unreal:
Unreal came out in 1998. At the time, it required a 166MHz computer. My machine was right in that ballpark when I played it. This was before ubiquitous graphics acceleration, so the game could do all of its rendering on your humble little CPU. At the time, the flyby intro would dip down to about 10FPS when the camera pulled back to reveal the entire castle. For the purposes of comparison, let’s make the fairly reasonable assumption that the game was rendering about 400 polygons when you include the castle, the canyon, the little guys milling around in the distance, the particle effects, the sky, and the reflections.
Pseudoku is drawing 1/10th the polygons, on a CPU that’s 20× faster, and yet it’s running twenty times slower. I know integrated graphics are garbage, but they’re not that bad. They’re not “three orders of magnitude slower than 1998 processors” bad. If that were the case, there would literally be no point in having integrated graphics at all.
So it’s probably not the rendering itself that’s slowing it down. On the other hand, I have no idea what’s causing this. I can have the game skip most of the rendering and the framerate stays about the same.
I guess the next step is to go through the program and disable systems one at a time until I find the culprit.
I could fix this by simply requiring the end-user to have a graphics card made in the last 5 years. But damn it, it SHOULD work on this machine and it shouldn’t be this slow.
The truth is, I don’t need to solve this problem. This machine represents such a vanishingly small portion of the market that I don’t need to spend this much time trying to get stuff to work. I guess I’m being stubborn because this is really bugging me, not because this is a good use of my time.