Early in the project I said (hopefully here on the blog) that one of my goals for the project was to leave the door open for porting to linux. I might release a linux version or I might not, but I wanted the option and that meant I needed to keep the codebase free of Microsoft-specific code.
I’ve been working on Microsoft platforms for my entire professional life. In fact, my history with C begins at the same time as my history with Microsoft. In 1990, my uncle passed along his old IBM running Microsoft DOS, along with an old edition of Borland Turbo C. For you kids saying “C is hard to learn”, I just want to point out that I did it with no teacher and no internet*. I didn’t even have a textbook. Just the Borland reference manuals. In hardcopy. (What? Store an ENTIRE BOOK on disk? That’s crazy talk! You’d need industrial-grade hard drives to store something that big!) All the insane hours I’ve poured into this language, and I’ve never done so outside the context of a Microsoft operating system.
* It IS friggin’ hard to learn and I probably could have learned it ten times faster with the proper materials. Start with something easier.
But check this out:
That’s Good Robot, running on Linux. It’s not running under wine. This is a native build.
I can’t take credit. 95% of the work was done by Peter Olson. You might not recognize the name, but Peter and his brother Clint have done a lot for this site over the years and are basically outstanding people.
Peter loaded up the code on his Linux machine and went through the steps required to get it to compile. For a lot of the process, Peter was streaming his desktop for me to watch and then we hammered out the details in chat.
If you’re curious about what goes into porting software from one place to another, here is the list:
Actually wait. Before we get started, I need to do a disclaimer. Throughout this article I will equate programming on a Microsoft platform with using Microsoft tools. This is not necessarily the case, since you could use (say) the Code::Blocks development environment even if you’re developing on Windows. But since Microsoft Visual Studio is really good and Microsoft offers a free version, a lot of people naturally use it. I understand that “usually” is not the same as “always”, but to keep things simple I’m not going to make the distinction every time it comes up. If this really bugs you then:
#ifdef _PEDANTIC
#define USING_WINDOWS USING_WINDOWS_AND_ALSO_USING_VISUAL_STUDIO
#endif
Okay? Fine. Let’s get on with this.
Turning C or C++ code into a usable program is a two-step process. Step one is the compile stage. That’s where it parses all of your code, checks to make sure everything makes sense, makes sure your syntax is valid, and turns it all into “object code”. If you declare a variable “Hitpoints” in one place but then refer to it as “hitpoint” later on, the compiler is the thing that will say, “I’ve never heard of this ‘hitpoint’ thing and I have no idea what it is.” This is also the stage where it pulls in headers for external libraries. If I’m using OpenGL, I don’t actually have the source code to OpenGL in my project. Instead I
#include "gl.h", which is just a text file that tells the compiler, “Yeah, all this OpenGL stuff exists elsewhere and here is what it will look like.”
The second step is linking. The linker takes all the object files made by the compiler, adds in all the external libraries (like OpenGL) and then ties them all together. If I miss anything (like if I said I was going to have a function called SpaceMarineDie () but never got around to writing it, or if I included a header saying I’d use OpenGL but didn’t add the OpenGL library to the linker) then the linker will tell me what’s missing. If it has what it needs, it makes an executable file for us.
This is a complex process. It’s actually the thing I hate most about this language. It can be fiddly, tedious, obtuse, and unpredictable. But as bad as it is, it started out a lot worse.
It used to be that you compiled things from the command line. I don’t know how to compile anything from the command line for the same reason I don’t know how to sift wheat or shoe a horse, but I understand that it was done in the past and is still sometimes done by rugged independent types who would rather memorize hundreds of compiler options than resort to something as decadent and ostentatious as a menu, or use something as humiliating as a mouse pointer to select options. The point is, we don’t usually compile things by hand, even when using a terminal window. There are countless little options to control what files the compiler will read, where it will look for them, how it will interpret their contents, how it should report errors, and (if you’re successful) what kind of code it spits out.
Entering all these options every time you wanted to compile would be insane. So you stuff all of those options into a makefile. The makefile will guide the compiler and linker to do their thing so you don’t have to. Then you can give your project to someone else, provide them with the makefile, and they’ll be able to compile it even if they don’t know how it’s all organized. They should be able to compile your project even if they’re on a different platform.
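For the curious, a makefile looks something like this. This is a minimal sketch; the target name, source files, and libraries here are invented for illustration and aren’t the actual Good Robot build:

```makefile
# Hypothetical makefile sketch. CXXFLAGS holds the pile of compiler
# options; LIBS holds the external libraries handed to the linker.
CXX      = g++
CXXFLAGS = -Wall -O2
LIBS     = -lGL

# Link step: combine the object files and libraries into an executable.
goodrobot: main.o render.o
	$(CXX) main.o render.o $(LIBS) -o goodrobot

# Compile step: turn each .cpp into a .o without linking (-c).
%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
```

Once this file exists, anyone can type `make` and the right compile and link commands run in the right order, no memorization required.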
This is all fine, EXCEPT…
If you’re using Microsoft tools, you don’t have a makefile. Microsoft uses project files. It’s the same idea except it’s, you know, different. So step one of porting from Windows to Linux is creating a makefile. This means that everyone can share their code, but it’s a bit harder to share between Windows and non-Windows.
Remember the headers I mentioned earlier? Well, there are a lot of them available. Hundreds of them. Maybe even thousands. I dunno. Now, some of these are standard headers. If you’re using a standards-compliant version of C or C++, then you should have a stdio header. (Standard Input/Output.) If I #include <stdio.h> in my code, you should be able to compile that code on your completely different machine with no changes.
But! Different platforms have their own ideas about where to put all those files.
On Linux, the io header is under the sys/ directory but utime isn’t:
#include <sys/io.h>
#include <utime.h>
On Windows, it’s reversed:

#include <io.h>
#include <sys/utime.h>
Why? Why is this different? Why is this not standardized? Who saw the files arranged one way and decided they just HAD to reverse them? I have no idea. Is this a case of Microsoft just doing as they please and expecting the standards to conform to their behavior? (As they did with Internet Explorer 6.) Or is this a case of anarchic Linux environments leading to fragmented systems? Or do we blame the ISO for failing to herd these cats? Beats me. The politics of the language are opaque to me. All I know is that we’re beset by stupid trivial crap that ought to work but doesn’t.
Some things aren’t part of the strict C++ specification, but end up as part of the language anyway. Sort of. Informally. If you’re dealing with files you use open () to open one and unlink () to delete one, unless you’re on Windows, where you use _open () and _unlink (). Again, we can argue about who to blame, but the fact remains that you have to deal with this when porting. A bunch of little stuff might have slightly different names or subtly different ways of being used.
Now, in this particular example we don’t have to use open () and unlink (). There are newer systems with better portability you could use instead. If you’re writing new code, you could use an ifstream or ofstream for file access. But there’s a ton of legacy code out there still using the old way, and “rewrite all your file access code from scratch” isn’t a super-attractive option.
In any event, you’re likely going to have several dozen little points in the code where you’ll need to deal with name conflicts.
We’ve got this little thing called max () that returns the larger of two numbers. There’s a version of it for comparing two floating-point values (like 4.43587 or 0.3429085) and another version of it for comparing integers.
int a = max (10, 52);         //a will be set to 52
float b = max (10.1f, 0.52f); //b will be set to 10.1
float c = max (2, 0.001f);    //c will be set to 2
Note how in line 3 I didn’t explicitly say that the number 2 was a float. One compiler will see that my code looks like:
float = max (ambiguous, float); and conclude that the ambiguous value is a float. Another compiler will throw a tantrum and refuse to proceed until the ambiguity is removed. You can argue this either way. The former is more permissive, while the latter is more strict. More strictness can save you from making mistakes but can also make code more cluttered, verbose, and hard to read. It depends on what you’re doing. The coding conventions of your project might make these little differences incredibly important or a non-issue.
But what really sucks is moving from a relatively permissive compiler to a stricter one. The compiler will hound you for hours over tiny little bits of inconsequential code like this and make you do many little edits to “fix” code that worked just fine for the other compiler.
On Windows, these are equivalent:

open ("mygame\\games\\saves\\savefile1.sav");
open ("mygame/games/saves/savefile1.sav");
On Linux, they are NOT the same. The sad thing is, I’ve known about this distinction for years, but I can never remember which way is the portable way and I’m not inclined to Google it when I’m in the middle of working on something else. So I guess, and apparently I’ve been guessing wrong more often than I was guessing right. This was fine until we tried to port, at which point it caused all these goofy problems.
And after all that work, the game is still broken in many stupid little ways, even on other versions of Windows. One tester is reporting performance WAY lower than what I would expect of a machine with their specs. One person using Windows XP reports that all of the robots get more transparent depending on how bright their colors are, which means there’s some shenanigans going on with the alpha channel, but ONLY on Windows XP. One Linux machine renders fine. Another one has textures and sprites flickering in and out all over the place. Another user reports that music never plays. Another one has the music play, but only if they open and close the main menu.
A lot of these problems can be traced back to the use of my vertex shader. As annoying as it is getting C++ code working on more than one machine, vertex and pixel shaders are far, far worse. NVIDIA and ATI always find a way to interpret the spec differently, and those differences aren’t even consistent across different driver versions. I’m seriously considering pulling out the vertex shader for now. It’s the source of about 80% of all of my technical glitches, with multi-threading making up the other 20%.
So that’s the adventure porting the game to Linux. It “only” took a few hours, which is either amazing or horrible, depending on your expectations.