Programming | By Shamus | May 7, 2014
Good Robot has been on my mind lately. I keep coming back to the fact that I put a few months into it and didn’t finish it. But then I remember that on top of gameplay concerns I’ve also got a bunch of annoying technology problems to worry about and I lose my enthusiasm.
The central technology problem comes from my use of GLSL – the OpenGL shader language. The rest of my code – talking to the filesystem, input devices, AI, and gameplay – is basically solid, but the shaders are a mess. On one machine, robots would strobe, randomly rendering or not from one frame to the next. Another tester reported that walls didn’t render. Another had the robots and walls render fine, but powerups were invisible. Another person had strange slowdowns that shouldn’t have been happening on their hardware.
The root of the problem is that I’m not very knowledgeable regarding GLSL. I’ve only gotten around to messing with it in the last couple of years, and I’ve only learned enough to accomplish the few simple things I need to do. This leaves a great big blind spot in my knowledge where problems can hide. It creates situations where I can do something the wrong way and have it work on my machine, only to malfunction a dozen different ways elsewhere.
But to correct this problem I have to deal with another one: The GLSL resources are an absolute mess. The language has undergone sweeping changes at least twice since its inception. Programs that were once valid are now wrong, which means nearly all of the basic “hello world” tutorials are now broken to the point where they won’t even compile. The docs are filled with side-notes about what’s “new” in a version people stopped using five years ago. The simple stuff is out of date, and the advanced stuff is written with the assumption that you already know what you’re doing and you just need a few pointers on how things have changed. It’s amazing how many pages of documentation fail to explain how things actually work, but instead explain how things differ from how they used to work. It’s like giving someone directions based on scenery that no longer exists. “Drive until you get to the place that used to be a drugstore, turn left onto Old Main Street, and turn left when you see the empty lot where the recycling center used to be.” You’ll probably be able to find the place eventually, but those layers of ambiguity and uncertainty are a major hindrance.
Further exacerbating the problem is that the updated and proper way of doing things is a lot more complex. The entire OpenGL matrix stack has been deprecated. In plain language, this means that the built-in systems for managing tables of numbers and performing the spatial transformations needed to take 3D scenery and draw it on a 2D screen are no longer supposed to be used. You’re supposed to write all of those systems yourself. Now, there are a lot of good reasons for that, but it’s also a massive undertaking when you’re just trying to get a handle on the basics. I’m sure there are open-source tools to do the job, but integrating those is a pain and it adds huge levels of complexity to what should otherwise be a simple and straightforward learning exercise. (I already have my own matrix code, although it’s not nearly as well-developed as it could be.) When you get a blank screen you have to wonder: “Am I still not doing this shader properly, or am I mis-applying this new matrix system?” You can fall back to using the old matrix stack, but only if you find the directive to do so and only if you can intuit how it works. (It’s not well documented and I have yet to see it appear in example code.)
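Just to make the contrast concrete, here’s a minimal sketch of a “modern” vertex shader, with the deprecated one-liner it replaces shown in a comment. This is my own illustration, not code from Good Robot, and mvp_matrix is a name I made up: you now have to build that matrix with your own CPU-side code and upload it yourself (with glUniformMatrix4fv) instead of letting glRotatef / glTranslatef and friends do it for you.

```glsl
#version 330 core

// The old fixed-function way was a single line:
//   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// Now you compute the matrix yourself on the CPU and hand it over
// with glUniformMatrix4fv. "mvp_matrix" is a made-up name.
uniform mat4 mvp_matrix;

in vec3 position;   // vertex attributes you bind yourself...
in vec2 texcoord;   // ...instead of using gl_Vertex / gl_MultiTexCoord0

out vec2 uv;        // handed along to the fragment shader

void main ()
{
  uv = texcoord;
  gl_Position = mvp_matrix * vec4 (position, 1.0);
}
```

Not much code, but every piece of it used to be something OpenGL did for you.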
If you post some short shader code and ask for help you’re likely to get immediate dismissal: “The old matrix stack is deprecated! Don’t use it!” Which is like asking someone why your internal combustion engine doesn’t work and having them give you a hard time because it doesn’t have a gearbox, a catalytic converter, or a starter motor.
As if this wasn’t enough of an adventure, the OpenGL pages are somewhat capricious. In the process of working on this project the official docs would vanish for hours or days. You can download the docs as a PDF, but PDFs generally suck and Google can’t help you find what you’re looking for if they’re on your hard drive. And of course snarky RTFM forum posts all become that much less helpful as fallback documentation, since now they’re just insults with dead links.
Basically: Learning GLSL is a tough hill to climb, it keeps getting steeper, and the topography keeps changing without anyone updating the maps. The people who reached the top years ago don’t really have a concept of how convoluted the road has gotten and so they often give unhelpful advice.
But maybe I’ll have an easier time getting back to Good Robot if I force myself to do something sophisticated with shaders. Right now in Good Robot I’m just using them for simple stuff: I’m drawing basic squares and using the GPU (the graphics card, basically) to rotate them, scale them, and adjust the texture mapping so they draw from the proper section of the sprite sheet. That’s silly easy. My hope is that by doing a bunch of stuff that’s far more complex, I’ll actually get some depth to my GLSL knowledge and be able to spot the mistakes I’m making in Good Robot.
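For the curious, that “silly easy” work looks roughly like this. It’s a from-memory sketch rather than the actual Good Robot shader, and every uniform and attribute name in it is invented for illustration:

```glsl
#version 330 core

// All of these names are made up for this example.
uniform mat4 projection;    // 2D orthographic projection
uniform vec2 sprite_pos;    // where the sprite sits in the world
uniform float sprite_angle; // rotation, in radians
uniform float sprite_size;  // scale
uniform vec4 sheet_rect;    // (u, v, width, height) of this sprite's cell in the sheet

in vec2 corner;             // quad corner, from (-0.5,-0.5) to (0.5,0.5)
out vec2 uv;

void main ()
{
  // Rotate the quad corner around the sprite's center.
  float s = sin (sprite_angle);
  float c = cos (sprite_angle);
  vec2 rotated = vec2 (corner.x * c - corner.y * s,
                       corner.x * s + corner.y * c);
  // Shift the texture coordinates to the right cell of the sprite sheet.
  uv = sheet_rect.xy + (corner + 0.5) * sheet_rect.zw;
  gl_Position = projection * vec4 (sprite_pos + rotated * sprite_size, 0.0, 1.0);
}
```

One quad per sprite, a handful of uniforms, and the GPU handles the rotating, scaling, and sprite-sheet bookkeeping.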
- Aside from the shader work, I want to be really comfortable with what I’m doing. Which means doing something I’ve done before. Which means heightmap-based terrain. That’s easy, it’s familiar, and it looks interesting. (Enough.) There’s a sketch of the idea just after this list.
- I want to do something that lets me dump a bunch of difficult processing onto the graphics card. Since we’re working with terrain, I figure erosion is a good choice. I’ll make some real-time erosion code and have it run on the GPU. (Also sketched below.)
- I want to explore how difficult it is to support Oculus Rift levels of performance. Most games run at 30fps. If the scene gets crowded or if the player stumbles into some scripted event, it’s generally acceptable (or tolerated) if the framerate dips down to 20 or so for a few seconds. But if you’re programming for VR, you need incredible performance. You need 60 frames per second. Worse, you need to render everything twice – once for each eye. If that’s not hard enough, it’s extremely uncomfortable to miss frames, so you can’t ever dip below 60fps unless you want to risk making people sick. So now the program has to render twice as much, twice as fast, with no margin for error.
I can’t afford (or justify) getting a Rift dev kit right now, but it should be simple to set up a test to see if my program can keep up with the Rift performance demands.
- I’m going to stop being so stubborn and just use the same dang coordinate system everyone else uses. More on this in another post.
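Here’s the sketch I promised for the terrain. The idea is the classic heightmap trick with the displacement moved onto the GPU: keep a flat grid of vertices and let the vertex shader read a height texture to push each one upward. As before, this is a hypothetical illustration and all the names are placeholders:

```glsl
#version 330 core

uniform mat4 mvp_matrix;
uniform sampler2D heightmap;     // grayscale heights, 0.0 to 1.0
uniform float vertical_scale;    // height of the tallest hill, in meters
uniform float horizontal_scale;  // width of the whole terrain, in meters

in vec2 grid_pos;   // this vertex's spot on the flat grid, 0.0 to 1.0
out float height;   // useful later for coloring rock vs. grass vs. snow

void main ()
{
  // textureLod instead of texture: vertex shaders can't pick mip
  // levels automatically, so we ask for the top level explicitly.
  height = textureLod (heightmap, grid_pos, 0.0).r;
  vec3 world = vec3 (grid_pos.x * horizontal_scale,
                     height * vertical_scale,
                     grid_pos.y * horizontal_scale);
  gl_Position = mvp_matrix * vec4 (world, 1.0);
}
```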
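And here’s roughly how the erosion could run on the graphics card, assuming the simplest scheme I can think of: render a full-screen quad that reads the current height texture and writes an eroded copy into a second texture, then swap the two each frame (“ping-pong”). This sketch does naive thermal erosion – slopes steeper than some threshold slump until they’re stable – and again every name is a placeholder:

```glsl
#version 330 core

uniform sampler2D heightmap;  // current heights
uniform float talus;          // slopes steeper than this will slump
uniform float rate;           // how much material moves per step

in vec2 uv;           // this texel's position, fed in by the full-screen quad
out float new_height; // written into the *other* height texture

void main ()
{
  vec2 texel = 1.0 / vec2 (textureSize (heightmap, 0));
  float here = texture (heightmap, uv).r;

  // Compare against the four cardinal neighbors.
  vec2 offsets[4] = vec2[4] (vec2 ( texel.x, 0.0), vec2 (-texel.x, 0.0),
                             vec2 (0.0,  texel.y), vec2 (0.0, -texel.y));
  float moved = 0.0;
  for (int i = 0; i < 4; i++) {
    float slope = here - texture (heightmap, uv + offsets[i]).r;
    if (slope > talus)        // too far above this neighbor: give material
      moved -= rate * (slope - talus);
    else if (-slope > talus)  // too far below: receive material
      moved += rate * (-slope - talus);
  }
  new_height = here + moved;
}
```

Because the give/receive rule is symmetric, whatever one texel sheds its neighbor picks up, so the total amount of terrain stays put while the sharp edges melt away.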
So those are our goals. I’m calling this “Frontier Rebooted” just because I’m re-hashing a lot of the ideas from Project Frontier, although the code is going to be all new. We’re going to have hills and water and grass and trees like before, but they’ll all be made using new techniques with shaders.
Step one: Grab the latest version of my various 3D tools (which come from the Good Robot codebase) and drop them into a new project. I set up an empty scene with a one-meter panel drawn at the origin. Just to make sure I’ve got the axes all pointing the right way, I build a literal axis arrow at the world origin. Red is X, green is Y, blue is Z.
Was it worth reading 1,500 words to see that? I hope so. Next time we’ll actually accomplish something.