Revisiting a Dead Engine

By Shamus Posted Thursday Apr 16, 2020

Filed under: Programming 56 comments

This is a second entry, following the first one, which makes this a series now. If you remember, my son Issac made a level for Garry’s Mod that I thought was cool, and I foolishly assumed it would be easy to get it translated to a modern engine. Here’s the original level:

This is what it looks like in Garry's Mod.

It was unreasonably difficult to extract all of those polygons and render them in another game engine. Let’s talk about why.

The Source Engine 2 has a bit of a murky history. Source is Valve’s in-house engine. It’s the technology driving Half-Life 2 and its attendant episodes. It’s what they used for the Left 4 Dead games and DOTA 2. It was the engine behind Valve’s beloved Portal games. Wikipedia will tell you that Source 2 was officially announced in 2015, but in truth there wasn’t a clear division between Source engine (Original Flavor) and Source Engine 2 (Electric Boogaloo). Rather, it was a gradual process of evolution, and Valve arbitrarily decided that 2015 was the cutoff point between old and new. Left 4 Dead 2 levels (2009) could do a ton of things that Half-Life 2 levels (2004) couldn’t. In Portal 2, they finally got around to adding realtime shadows[1] with dynamic light sources, about a decade after everyone else. The Left 4 Dead levels were incompatible with the engine used in Half-Life 2, so I fully expected that to be the recognized demarcation between New and Old, but apparently that didn’t happen until DOTA 2.

The point here is that the original Source Engine was descended from the Quake 2 engine, and many of the underlying data structures are the same. So even though I’m trying to read in a level that was made this week, this involves reaching all the way back to 1997 or so and speaking the language of late-90s game engines. There are things that this engine does that made no damn sense in 2010, and are downright archaic by 2020.

Source engine maps are stored in .BSP files. This site, which I linked in the previous entry, is effectively our Rosetta Stone: it documents how a .BSP map is put together. Without it, I’d never have any hope of sorting this out.

A Throwback Data Structure

The entire file format reflects a very vanilla C design style. It’s very obvious that the level compiler just writes raw data structures to disk. A data structure might look like this:

struct lump_t {
  int offset; // offset into file (bytes)
  int length; // length of lump (bytes)
  int version; // lump format version
  char ident[4]; // lump ident code - 4 bytes
};

Incidentally: This data structure being called a “lump” is somewhat reminiscent of 90s video game data naming schemes. In those days everyone ran into the problem of, “Hey, I need to jam a bunch of unrelated data into a single monolithic file. What should I call these files?” And so we wound up with WAD files (Doom), BLOB files[2], and such. This lump structure was the obvious and minimum-effort way to store crap in the days when C ruled the programming world. You open a file, and then shove the contents of a data structure directly to disk.

The file headers contain information like “The vertices are stored 34,532 bytes into the file, and the vertex data is 23,964 bytes long. Since you know vertex data is 12 bytes long[3], you can divide that length by 12 and know that there are 1,997 vertices.” This sort of business is perfect if you’re a frugal late-90s programmer and you want to skip around the file, loading only the things you need right now, because you can’t afford to decadently waste a whopping 2 megabytes of main memory on a data structure you’re not currently using.

The lump_t structure defined in the code above would be exactly 16 bytes long[4]. So you take a raw memory pointer and aim it at your data structure, then you tell the program “Starting at this spot, write the next 16 bytes of memory into this file on disk.” Then when it was time to load the level, you’d do the reverse. You’d aim the pointer at the start of the structure, and take the next 16 bytes out of the file and shove them into that spot of memory. If you made a mistake and wound up out of alignment with the file during the reading process, then you were doomed. Even being off by a single byte would turn the data into a random mess.
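That round trip is easy to demonstrate. Here’s a sketch in Python, with the struct module standing in for C’s raw-memory write; the field values are invented for illustration:

```python
import struct

# lump_t as a packed little-endian record: three 4-byte ints + 4 chars = 16 bytes.
LUMP_FORMAT = "<iii4s"
assert struct.calcsize(LUMP_FORMAT) == 16

# "Starting at this spot, write the next 16 bytes of memory into this file":
raw = struct.pack(LUMP_FORMAT, 34532, 23964, 0, b"LUMP")

# Loading is the reverse operation: 16 bytes back into the same fields.
offset, length, version, ident = struct.unpack(LUMP_FORMAT, raw)
assert (offset, length, ident) == (34532, 23964, b"LUMP")

# Get out of alignment by a single byte, and the data turns to garbage:
garbled = struct.unpack(LUMP_FORMAT, (b"\x00" + raw)[:16])
assert garbled[0] != 34532
```

The format string is doing the same job the compiler did for the C struct: fixing the size and order of every field.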

To be honest, I kind of liked the simplicity of those days. Everything either worked perfectly or failed catastrophically. While young people get nervous handling raw memory pointers, I thought it was a great approach in situations like this.

Regardless of how you felt about those days, they are over. Modern languages don’t want you juggling raw memory pointers and randomly writing blocks of bytes into bits of memory. In C#, this sort of behavior isn’t just discouraged, it’s forbidden[5]. You can’t write to memory like that, filling several variables at once. Moreover, this sort of business would break encapsulation. Even if you were allowed direct, unsupervised memory access, your file-loading code wouldn’t be allowed to write into your level-data structures. And even if you somehow found a way around all of that[6], you can’t be guaranteed that the memory layouts would match. I don’t understand the rules, but people more familiar with this sort of thing tell me that the compiler might mess with the layout of your data, padding it with a byte here or there for some sort of optimization. And even if you got over all of that, I’m sure there are lots of other problems I’m overlooking[7].

None of this means you can’t read the file. It just means that reading the file is a little strange and inconvenient. You can’t fill an entire data structure in a single read operation, but you can read it into memory one variable at a time.
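Reading it one variable at a time looks something like this. Another Python sketch (standing in for something like C#’s BinaryReader, which also hands back one value per call); the field values are again made up:

```python
import io
import struct

def read_int(f):
    """Read one little-endian 32-bit int -- one variable per read."""
    return struct.unpack("<i", f.read(4))[0]

def read_lump(f):
    """Fill the lump 'structure' one field at a time instead of in one blast."""
    lump = {}
    lump["offset"] = read_int(f)
    lump["length"] = read_int(f)
    lump["version"] = read_int(f)
    lump["ident"] = f.read(4)
    return lump

data = io.BytesIO(struct.pack("<iii4s", 34532, 23964, 0, b"LUMP"))
lump = read_lump(data)
assert lump["offset"] == 34532 and lump["length"] == 23964

# The header arithmetic from earlier: lump length over 12 bytes per vertex.
assert lump["length"] // 12 == 1997
```

Four reads instead of one, but you end up with the same data, and nothing depends on how the compiler happens to lay out memory.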

The Problem with Fans

I mentioned in the previous entry that this document is – as far as I can tell – the only publicly-available document on how these files work. I also mentioned that it doesn’t tell you everything you need. Don’t get me wrong, this is an incredible document and I’m very glad for the hard work that the author put into it. It’s just that sometimes things get overlooked, possibly because they’re things that “everyone knows”. Which is fine, until you reach the point where everyone forgets about it and all we have left is the document.

In the modern world, we’d generally store geometry as vertices (a bunch of positions in 3D space) and a list of indexes describing how those vertices form triangles. It’s basically a 3D game of connect-the-dots.

This was not the case in the world of 1997, before the rise of graphics cards. At the time, you generally had to shove your vertex data at the renderer one vertex at a time. Thus an important optimization was to send as few verts as possible.

Consider this arrangement of polygons:

This is the best illustration I could come up with in 5 minutes or less.

Today, we’d give the graphics card all the verts up front. We’d send them all in one bulk load, and then refer back to them.

“Draw the triangle defined by A-B-G”.

“Now do B-C-G.”

“Then do C-D-G.”

And so on, until it’s all drawn.

But in þe Olden Days, you couldn’t “store” verts on the graphics hardware like that[8]. Which means that sending triangles like this would be painfully inefficient. You’ll notice in the list above, I sent B twice, C twice, and G 3 times. In fact, to send the entire mesh, I’d have to send every point twice. Except for G, which would get sent once for every triangle, a total of 6 times.
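You can tally the waste directly. A quick sketch, assuming the hexagon from the illustration, with rim verts A through F around the hub G:

```python
from collections import Counter

# The six triangles of the hexagon, listed vert-by-vert, the way you'd
# feed them to late-90s hardware with no on-card vertex storage.
triangles = [
    ("A", "B", "G"), ("B", "C", "G"), ("C", "D", "G"),
    ("D", "E", "G"), ("E", "F", "G"), ("F", "A", "G"),
]

sends = Counter(v for tri in triangles for v in tri)
assert all(sends[v] == 2 for v in "ABCDEF")  # every rim vert sent twice
assert sends["G"] == 6                       # the hub sent once per triangle
assert sum(sends.values()) == 18             # 18 sends for only 7 unique verts
```

Eighteen vertex sends to describe seven points. That overhead is exactly what the fan was invented to avoid.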

That would be horribly wasteful. But! There’s a way we can avoid this. We can use a triangle fan. To describe a fan, I’ll just paste in one of my comments:

//Now each face has a list of triangle fans, separated by -1 indexes. We need
//to pull these apart and make them into regular triangles.
//For you kids, a fan is a series of triangles built around a common hub.
//The first vertex in the series is the first vertex for ALL triangles.
//After the first two vertices, each additional vertex adds a new triangle.
//Each triangle is defined like so: (index 0), (previous index), (current index).
//So the series A B C D E would result in 3 triangles:
//ABC, ACD, ADE. The fan system is pointless in a modern hardware context,
//so we just need discrete triangles. The following loop does this.
for (int f = 0; f < _face_count; f++) {
  //complex hoodoo goes here
}
So the triangle fan for the image above would go G A B C D E F A.  Every vertex gets sent once, except for A which needs to be sent a second time to close the loop.
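Here’s that unpacking logic as a standalone sketch in Python. (The real loop also has to split the index list on the -1 separators; this just handles one fan.)

```python
def fan_to_triangles(indices):
    """Expand one triangle fan into discrete triangles.

    Each triangle is (index 0), (previous index), (current index),
    so the series A B C D E yields ABC, ACD, ADE.
    """
    hub = indices[0]
    return [(hub, indices[i - 1], indices[i]) for i in range(2, len(indices))]

assert fan_to_triangles(["A", "B", "C", "D", "E"]) == [
    ("A", "B", "C"), ("A", "C", "D"), ("A", "D", "E"),
]

# The hexagon fan from the image: G A B C D E F A -> six triangles around G.
hexagon = fan_to_triangles(["G", "A", "B", "C", "D", "E", "F", "A"])
assert len(hexagon) == 6
assert hexagon[0] == ("G", "A", "B") and hexagon[-1] == ("G", "F", "A")
```

Eight indices in, six triangles out, and every vertex sent once except for A.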

Not a bad system for 1997, but it’s totally useless today. These days we send all the verts in one bulk load and can refer back to them as many times as we like for no additional cost[9]. The graphics card can go to work on the entire job without needing to wait around to be spoon-fed vertices one at a time.

I know this looks like a big tangle of random lines, but once you do the work to create the view frustum, transform the verts, render the polygons, and calculate the lights, it can really look like something sort of half-assed.

What I’m getting at here is that the vert data inside a BSP stores all triangles as triangle fans. The document doesn’t tell you this, and there’s no way to intuit it from the code. You just have to know it. I had to find the original Quake 2 source and dig around in the code for ages until I found this accidental secret.

Also, the system for putting meshes together is hilariously indirect. It doesn’t just list the verts you need to create the fans. No, it lists everything as edges – as pairs of verts. Then there’s another layer of indirection where it lists edges and tells you if you need to reverse the pair. Then it lists those segments in long chains, with extra data describing the original surface these polygons belong to so you know which way they’re facing. This structure is great if you’re trying to pull the data into memory and use it to directly render crap, but it’s a lot of hassle if you need to translate the data into something a modern engine will accept.
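For the curious, the unwinding looks roughly like this. This is a sketch based on my reading of the format docs: a face points at a run of “surfedges”, each surfedge is a signed index into the edge array, and a negative sign means “use this edge backwards”. The sample data here is invented:

```python
def face_vertex_indices(face_surfedges, surfedges, edges):
    """Resolve a face's surfedges into an ordered list of vertex indices.

    edges: list of (v0, v1) vertex-index pairs.
    surfedges: signed indices into edges; negative means the edge is reversed.
    face_surfedges: which surfedge entries belong to this face.
    """
    verts = []
    for se_index in face_surfedges:
        se = surfedges[se_index]
        if se >= 0:
            verts.append(edges[se][0])   # edge used as-is: take its first vert
        else:
            verts.append(edges[-se][1])  # edge reversed: take its second vert
    return verts

# A square wound out of three forward edges and one reversed edge.
# Edge 0 is a dummy, since -0 can't be told apart from 0.
edges = [(0, 0), (0, 1), (1, 2), (2, 3), (0, 3)]
surfedges = [1, 2, 3, -4]
assert face_vertex_indices([0, 1, 2, 3], surfedges, edges) == [0, 1, 2, 3]
```

Two layers of indirection just to walk around a square, which is about right for this format.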

Anyway, I got it sorted out.

This shot is taken from roughly the same position as the header image of this post and the previous wireframe image, if you want to compare the original level to the Unity version. The glowing floor is actually supposed to be water, which I have not yet (and probably won't) implement.

Also, I finally got path tracing / ray tracing to work, but that’s another post.



[1] The bit where Wheatley guides you through the darkness with his eye-light was a first for the engine. This is apparently significantly different from the shadow-casting flashlights of Left 4 Dead in ways I’ve never bothered to look up.

[2] I can’t remember which engine used blobs. I’m 51% sure it was Descent.

[3] A floating-point number is 4 bytes long. You need 3 of these for an X, Y, Z position.

[4] Each int is 4 bytes. A char is one byte, but there are four of those.

[5] Disclaimer: There might be some black-magic hack around it. I wouldn’t know. I’ve only been using the language for a couple of years and there’s a lot I don’t know.

[6] You could make all the data structures public, which is a huge OOP no-no. That’s what I did, and I don’t even feel guilty.

[7] I’ll bet array data is stored completely differently.

[8] Often because this machine didn’t have any.

[9] I mean, aside from the cost of drawing the resulting triangles.


56 thoughts on “Revisiting a Dead Engine”

  1. Simplex says:

    “To be honest, I kind of liked the simplicity of those days. Everything either worked perfectly or failed catastrophically.”
    I chortled at that fragment.

    Some random factoids about the Source engine: its heavily modified derivative is used by Titanfall, Titanfall 2, and Apex Legends (all those games were made by Respawn Entertainment).
    Half-Life: Alyx was made on Source 2, obviously. Are you planning to play it, Shamus?

    1. Fizban says:

      Never underestimate the utility of knowing you fucked up.

      1. Mezentine says:

        This is actually honestly something that keeps me from getting more into modern programming. This fact, combined with issues like the Unity documentation nightmare Shamus is also talking about, means that every time I try to dabble in something even remotely more complex than “visualize some data structures” (in Swift or something) I run face first into some issue where something is broken, there’s no documentation for why it’s breaking, and I do not have the experience with the system to intuit what went wrong.

        1. RFS-81 says:

          I don’t know about Swift or game engines, but it’s not my experience that modern languages are shy about letting you know that you fucked up. Python and Rust will reliably crash in situations where C might instead corrupt memory or act on garbage data.

  2. Draklaw says:

    Your data structure is 16 bytes; the char field is an array of 4, so 4 bytes.

    Also I’m not sure that sending vertices in random order is free on modern hardware. Every time you send a vertex, it has to go through the vertex shader. Technically, your GPU will keep a cache of recently processed vertices, but I’m not sure how big it is, so you can probably improve performance slightly by at least trying to keep vertices used in the same triangles close to each other. (It also probably helps to access the vertex buffer in order, to improve data locality.) But I must admit I’m not totally sure how things work at such a low level.

    1. Shamus says:

      Whoops. Yes, 16 bytes. I fixed this in the post. Thanks.

      1. parkenf says:

        Changed it twice but left the 3rd 13!

    2. Echo Tango says:

      Aren’t vertex (shaders) relatively cheap, compared to all the pixels / fragments that need to get computed? There’s 3 vertices, but those could end up as hundreds of pixels, depending on how much screen-space they’re taking up. All the pixels would have different values (different parts of the textures, lighting maps, etc), so caching the vertices doesn’t seem like it would help much.

      1. Draklaw says:

        Yes and no I guess. Some things done in a vertex shader can be quite expensive, and on mobile hardware they use a higher precision that also make them slower. For instance, skeletal animations can be an issue. A feature known as feedback buffers (in OpenGL) has been introduced in order to “save” the product of vertex shaders for later. That way you can deform a character with skeletal animations and save the result, then use it in several passes (shadow, etc.) without recomputing it.

        But to be honest, I’ve never done such a thing and I’m not sure the impact on performance is that important.

        1. Richard says:

          If you render using an index buffer, the “re-used” vertices are only passed through the vertex shader once per draw call (assuming sufficient GPU memory to store the results).

          Primitive assembly then re-uses the vertex shader results as many times as needed – draw triangle using vert a/b/c, b/c/d, c/d/e

          This is faster than using strips and fans, because you can re-use vertices around the edge of the fan as well as the one in the middle, and you don’t have to work out how to create an ‘optimal’ set of fans and/or strips.
          This uses far more GPU memory, but we don’t care (much) because it’s still less memory than the texture anyway.

          The ‘feedback buffers’ are a way of keeping that intermediate data around for subsequent draw calls.

  3. raifield says:

    I didn’t expect to spend the morning reading about the Anglo-Saxon alphabet, but here I am.

    I’ve never looked at a new post on this blog and thought “Eh, I’ll read it later”. Everything is always interesting, somehow.

    My only topic-relevant point is that I used to design DOOM levels back in my youth and my first level had a humorous mistake in which activating a door somehow lowered the entire ceiling, killing you instantly. Good times. I then tried to get into C programming in the mid 90’s and bounced off hard, abandoning the topic entirely. I do dabble in Python now and again though.

    1. Joshua says:

      I failed Programming in Visual Basic in college back in 1996, so I’m not exactly a great programmer (my total GPA was a whopping 1.0 for that term, so it was more a case of being distracted by other life matters and just not paying any attention to ANY of my classes rather than not understanding things). Back in 2006, I ended up joining as a DM on a Neverwinter Nights Persistent World server, so had to learn a lot about a C type language. As a result, even though I’m a rank beginner, I can at least follow some of the concepts talked about here, even if not fully understanding them.

      1. ElementalAlchemist says:

        so had to learn a lot about a C type language

        Hooray for NWScript!

      2. Kyle Haight says:

        I, on the other hand, taught myself C back in the summer of 1991 from a book I borrowed from a family friend. I remember thinking “Oh, so this is what Pascal was trying to do” as I read. It seemed so natural and obvious to me.

        I have to say that Kernighan & Ritchie’s The C Programming Language may be the single best language book I’ve ever read. It’s incredibly concise, yet in a 25-year programming career I have yet to have a question about the language that it didn’t answer. It’s really hard to essentialize a complex topic that thoroughly and precisely, but they managed it. There’s a reason programmers refer to the pre-ANSI C edition as the Old Testament and the ANSI C edition as the New Testament. (I guess by extension Bjarne Stroustrup’s The C++ Programming Language would be the Book of Mormon. I’m not sure who gets to be the Quran.)

  4. Geebs says:

    Half-Left 2

    Full Left Consequences!

    1. SidheKnight says:

      John Freeman, who was Gordon Freeman’s brother, was one day at the office typing on a computer.. sorry, I’ll stop.

      1. Drathnoxis says:

        Typo alert.
        *was one day an office typing on a computer.

        1. SidheKnight says:

          Thanks for the correction :)

  5. MithrilGear says:

    Strictly speaking, this mode of writing data-structures was always wrong in C, as well, because ints can be different lengths and endian-ness on different platforms or implementations, potentially making files written by one program on one computer unreadable to the same program on a different computer. The fact that it usually works for many purposes is what you’d call “correct by coincidence”. A bug related to this arose in a version of Photoshop that targeted both x86 (little-endian) and PowerPC (big-endian), but wrote files without regard to portability.

    The portable way to handle this sort of thing with modern C is to use exact-width types for values meant to be serialized, and to read and write using non-endian-dependent means like this.

    Since the compiler will know the platform it’s targeting, the same source will work for all platforms, and it should be optimized, as well:

    1. Chad says:

      The MS Office formats still have this sort of issue today, which is one reason for the .docx/.xlsx/etc formats. They joined an effort to standardize around XML-based file types at one point, but they did it largely to kill a movement to create open file formats that anyone could use (one of the last big examples of the “embrace, extend, extinguish” strategy of the “old Microsoft”).

      Back in the day, programs were optimized for writing to and reading from storage directly, because processing the data as you do I/O involves switching back and forth between the user code and the system code, which adds noticeable overhead per switch. After a while, disks got fast enough and memory got copious enough that you’d instead just mmap() the file; this is a “new” (but previous-millennium) system call that says roughly “make this chunk of memory and the file named that contain the same data”. This could be implemented by copying, but the system could instead just use that chunk of memory for its own low-level file handling routines, cutting out the steps that copy and pass around the data in chunks between the OS and the user code. This sort of approach was so successful (and hardware advances were so plentiful) that high-performance server systems still experiment with a similar approach today: if you need your server to handle internet packets as fast as possible, you arrange a system where the networking code is removed from the OS and instead pushed into the user code, let the user code access the network hardware directly, and give up BOTH the security of “all sorts of different users’ different code can share use of the same network device without interfering with each other” AND the overhead of “user code asks for network data, then the system switches to OS code that does all the interacting with the network device and copies the data somewhere the user code can see it, then switches back to the user code for processing, repeat”. It’s kind of terrible and beautiful at the same time.

    2. lkw says:

      Just to add to that

      I don’t understand the rules, but people more familiar with this sort of thing tell me that the compiler might mess with the layout of your data, padding it with a byte here or there for some sort of optimization. And even if you got over all of that, I’m sure there are lots of other problems I’m overlooking.

      Perhaps I’m misinterpreting this since it’s in a paragraph about higher-level languages. But C also has padding in structures; the linked article has a good explanation.

      1. Richard says:

        C alignment and padding rules are ‘implementation specific’. That meant people tended to ignore it because “it works on my machine”.

        Each compiler has its own rules for alignment and padding, which means that this type of ‘memory dump’ format does in fact work ‘just fine’.
        Until you change the compiler in any way at all, or you add a field, so the struct is no longer a nice multiple of N (where N is a number you don’t necessarily know).
        Then it blows up in your face for no apparent reason.

        And ‘change compiler’ might mean ‘use the next version from the same vendor’, or even ‘different optimisation setting’.
        – The release build can’t load files saved in the debug build, and vice-versa…
        Changing CPU target is of course almost guaranteed to blow up.

        The link above is pretty good.

        So, these days binary formats specify the endianness, the padding, and such things explicitly.
        Each platform’s loader code copies it into properly ‘aligned’ structures, given the features and limitations of the target processor(s).

        On modern systems it’s even more interesting, because the CPU, SSE/AVX and GPU may all have different alignment needs.

  6. tmtvl says:

    […] but once you do the work to create the view frustam […]

    Did you mean frustum?

    1. danielfogli says:

      Obviously related to frustration ;-)

  7. Adam says:

    Also, I finally got path tracing / ray tracing to work, but that’s another post.

    Why do I think that the joke here is that it will never be mentioned again, thus creating yet another Unity-related Google-bait dead-end?

  8. Paulo Marques says:

    In C#, this sort of behavior isn’t just discouraged, it’s forbidden[5].

    As you suspect, it’s not really true anymore: C# now has ways of interacting directly with memory, which would allow the old C trick of reading into structures. I haven’t dabbled much in the language, so I couldn’t tell you exactly how to do it, but I’m impressed by how well thought out and implemented the thing is, especially for MS. The base Framework could be lighter and more transparent, though.
    Of course, the optimization is mostly pointless to this project, and it might not make it any easier to write (and it’s not worth rewriting), but now you know.

    1. James Schend says:

      This is one of those things where the advice should be:

      Don’t use it. (For advanced users only: still probably don’t use it.)

      The reason C# doesn’t have the ability to dump data structures to disk is that, like most modern languages, it has reflection. That means C# code can dive into its own program structure to read out how classes, containers, and properties are all laid out. Due to this, even on launch in 2001, C# could serialize any arbitrary data object in a one-liner, in either its native serialization format or XML. (Of course now most people use JSON instead, still a one-liner.) In short, it doesn’t have it because it doesn’t *need* it and what it does have is better.

      C couldn’t do anything like that; to serialize an object in C the programmer had to just write tons of error-prone code to tell the language exactly what values to save and where. (And God help you if you add a new value; you’d have to write code to convert older versions when read-in too!) And as another commenter mentioned, it was still incorrect to just dump memory to disk– C is a “portable” language, and those memory dumps are unlikely to be read correctly if the C program is run on a different type of CPU for endianness and alignment reasons.

      So while you can praise how “simple” the Source engine’s saves are, keep in mind the hours and hours and hours it probably took to port this stuff to Macintosh back when the CPU was big-endian and an int was 16 bits and not 32 bits. The mistakes in C’s design are also why “portable” now means “runs in a virtual CPU” and not “you can compile the same program on multiple platforms”, since it’s now obvious the latter was never portable in any practical sense.

      1. Paulo Marques says:

        Oh, absolutely. But old formats, especially proprietary ones in embedded software (which aren’t necessarily old), aren’t going away any time soon, so it can be a nice thing to have. It’s better than the string slicing I did on train sensor data, which was not my proudest moment.
        For data produced and consumed in normal CPUs, it will bite you in the ass for no measurable gain, although everyone should have been using proper types anyway.

      2. Taxi says:

        That makes me wonder if that’s one of the reasons why the Quake engines actually used virtual machines, that famous QuakeVM.

        I thought that applied only to game logic/mods in order to make it easy to make them and be portable but maybe it extends to reading data formats.

        I mean every id Soft game of old was widely portable and the only part that I kept hearing about that’s widely different between platforms was usually sound code. I wonder how they dealt with map formats then.

  9. Bitmap says:

    “The point here is that the original Source Engine was descended from the Quake 2 engine”
    “had to find the original Quake 2 source and dig around in the code”

    I thought I’d read several times that Valve licensed the Quake 1 engine and built GoldSrc, the engine for Half-Life 1, from that, and not from Quake 2.

    Turns out it’s more complicated:

    “The basis of GoldSrc is the engine used in the video game Quake, albeit with heavy modification by Valve. While the engine served as the basis for GoldSrc, Gabe Newell has stated that a majority of the code used in the engine was created by Valve themselves, not taken from Quake. GoldSrc’s artificial intelligence systems, for example, were essentially made from scratch.[1] The engine also reuses code from other games in the Quake series, including QuakeWorld and Quake II, but this reuse is minimal in comparison to that of the original Quake.[2]”


    There’s also a nice family tree for id Software’s engines on Wikipedia:

    1. Borer says:

      That Quake family tree seems woefully incomplete. It arbitrarily includes the source engine and a few games based on each engine and omits tons more stuff.
      American McGee’s Alice is made in id Tech 3. And so is the original Call Of Duty as well as Jedi Knight and Jedi Knight 2. This continues with id Tech 4: Prey (2006), Wolfenstein (2009, I think), Brink. All left out. And id Tech 5 and 6 also have games not shown on the tree. They’re even mentioned in each engine’s Wikipedia article.
      It also rubs me the wrong way that the GoldSrc engine (based on one of id’s engines) gets a separate branch but the IW Engine (also based on one of id’s engines) gets left out. Why?
      I’m mostly irritated because it seems so random to me. What’s the rationale behind who gets to be on the family tree and who doesn’t? Especially considering how little of id’s code is actually in GoldSrc (according to the snippet provided above).

      I know this is inconsequential. I know I shouldn’t get worked up over a random graph on Wikipedia, yet here I am, complaining anyway.

      1. Winfield says:

        If it helps, I appreciate the passion and am now outraged alongside you.

      2. Taxi says:

        Yea I’ve seen that chart and while it’s nice, it’s also strangely incomplete.

        But no, the HL engine was basically Quake with some modern bits added in, some possibly from Q2, some of Valve’s own design. It was certainly extremely streamlined, I think the entire concept of QuakeC and QuakeVM were thrown out, which made it difficult to port, but the renderer, file formats and overall engine design were undoubtedly id.

        I mean, even as an end user, if you just look at the map editor, the console, the cfg files, the way mods/mission packs are loaded, the .pak files, it’s all very much Quake.

    2. Taxi says:

      Half-Life engine was based directly off Quake 1. Newell and Carmack were buddies since Newell’s job at Microsoft was to port Doom to Windows 95. After that was done, Carmack suggested to Newell that they might as well make their own game, and provided them with the Quake source.

      Of course Carmack being Carmack, he didn’t stop there and presumably kept supplying Valve with bits of Q2 as it was being developed, so some of it probably made it to their engine too. Color lighting may be one such thing, but that’s my speculation.

      And on top of that Valve included lots of their own stuff like skeletal animation. Nevertheless the base is in original Quake.

      The other popular branch is the Call of Duty series, which was based on Quake 3 until at least MW3, so there’s some legacy there going all the way back to Quake 1 too.

  10. Echo Tango says:

    While young people get nervous handling raw memory pointers

    Every time I encounter this, I still get baffled. A memory address is a pretty simple concept[1], and working with one is pretty simple if you don’t have to do any arithmetic. As your example illustrates, the hard part is when you’re not where you think you are. Documenting what’s supposed to happen, and guarding against endianness and variable sizes, should keep it a straightforward process. :)

    [1] Memory is like a big array, or a house address. Whatever. Stuff near index 0 is for the OS, and the maximum index is basically where your memory runs out.
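    The “guarding against endianness” part really can be that mechanical. A minimal C sketch (the helper name is my own invention): instead of pointer-casting four file bytes to a `uint32_t` and hoping the host agrees with the file, you assemble the value byte by byte.

    ```c
    #include <stdint.h>

    /* Read a little-endian 32-bit value one byte at a time. Unlike
     * *(uint32_t *)p, this produces the same answer on any host CPU,
     * and it never performs a misaligned load. */
    uint32_t read_u32_le(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }
    ```

    The same pattern works for 16- and 64-bit fields; the point is that the file format, not the host, decides the byte order.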

    1. Chad says:

      You haven’t mentioned padding, alignment, segmentation, or the various ways all of this had to be hacked around Windows’ ultimate roots in a combination of a CP/M clone and DOS, which is fine today but mattered in the days of DOOM. Once you get a handle on that, you learn that computers are often much better than humans at arranging data structures for efficiency (in both time and space), so you can get performance improvements by letting the compiler (and sometimes the linker) handle some of this. Next, it turns out to be much easier for humans to write simple programs if they don’t have to worry about all the details of memory management, as evidenced by the fact that a great deal of code uses the strategy “do it badly and just restart and try again if things go wrong”, which is why “have you tried turning it off and back on again?” is the basis of customer support.

      Ignoring all that bother, it turns out that direct memory access and pointer arithmetic is the foot-pointed shotgun that enables the majority of crash-the-program and take-over-the-system bugs, so there’s something of a movement to minimize them.

      …which is all another way of saying that computers are important enough today that we can benefit from more than one way to make them do things.

      1. King Marth says:

        Security is a major argument here. Direct memory access is simple (provided you grok the distinction between symbol and referent, which is way more of a mental roadblock than people acknowledge), but the problems you’re solving are hard. Programs don’t have the luxury of following the script 80% of the time when it’s easy and defaulting to general-purpose problem-solving for the corner cases where something doesn’t look right; you need to handle all the corner cases up front, especially from people capable of looking at your instructions and producing an input specifically designed to make your program misstep and transform your customer’s machine into a bitcoin miner.

        Real-world example: a retail disc of The Legend of Zelda: Twilight Princess can be used to install arbitrary programs onto a Nintendo Wii, as long as those programs are formatted as a save file with a specially crafted name for Epona. Fine when you’re doing it to your own hardware, but this exploit slipped past a large company working with a computer they built themselves.

        1. Echo Tango says:

          Good point. I think in my sleep-deprived brain I mixed up Shamus’ usage of “pointer” with “reference” from C++, which, as far as I know, you’re not allowed to do arithmetic with. Either way, I also wasn’t thinking about horrible bugs and exploits, only about a simple program. But real-world programs are almost never simple anymore; as Chad alludes to, we rely on them for pretty much everything, so the stakes are higher today.

    2. James Schend says:

      Working with raw memory pointers is also how you write exploits that get you and your customers pwned. There are extremely good reasons to be very, very cautious about doing it.

    3. Richard says:

      Memory addresses are seriously complicated, once you start to look into what the OS is actually doing.

      The OS is not in fact near (virtual or physical) index 0 on any system you’re likely to encounter.
      On all operating systems you’re likely to use, the first page of memory is deliberately filled with landmines that trigger a crash if you go there.

      Memory isn’t contiguous, it’s in ‘lumps’ (pages), and if you’re using a lot of it those pages really matter.

      Windows, macOS and Linux actually place most of themselves into invisible memory that you can’t see, can’t touch and cannot even tell is there. (Excluding exploits like Rowhammer)

      The complexity means that almost all the time, it’s far better to let the compiler toolchain (or the runtime) do most of the optimisation, because those were written by people whose job it is to really understand the fine detail of what the various operating systems are actually doing.

      It’s almost always far better for people like you and me to spend our time improving our algorithms, not our memory layouts.

    4. Cubic says:

      Not to mention dangling pointers, off-by-one indexing and so on triggering segfaults, and all the joys of manual memory management. And forgetful programmers returning pointers into the stack (which then got deallocated and reused, and THEN dereferenced). Well, you just had to power through it. Ah yes, those were the days.

  11. Lino says:


    Regards of how you felt about those days, they are over.

    Should be “Regardless”.

    In terms of the article, most of it kinda went over my head (after all, I’m quite the simple simpleton), but it was still an interesting read :)

  12. Dreadjaws says:

    Wait, you solved the problem already? Bollocks to that! This is not the content I’m subscribed for! I demand you find yourself yet again in an inescapable chain of increasingly infuriating problems that should be trivial to solve if the people in charge provided proper documentation or didn’t ignore basic design rules!

    1. Nimrandir says:

      For my part, I’m still waiting for a thorough analysis of the story collapse in Unity’s documentation. I mean, it’s clear Shamus has lost faith in the authors by now.

      1. danielfogli says:

        It’s because of stuff like this that I think the blog should have some kind of thumbs up interactivity :-D

  13. Paul Spooner says:

    I really wanted to re-create this scene in Blender using procedural geometry and procedural textures… I’m still working on it.

  14. Nimrandir says:

    Today I learned the word ‘endianness.’ I’m not quite sure how I got through my programming classes without its coming up.

    Also, I learned that the word ‘index’ has multiple valid plurals. I’ve always used ‘indices’ and reserved ‘indexes’ for the third-person singular of the verb.

  15. Amita says:

    The glowing floor is actually supposed to be water, which I have not yet (and probably won’t) implement.

    I rather like that featureless, glowing white void beneath the hexagons. It gives everything a bright, high-key feel, and goes really well with the white light shining through the cracks in the ceiling hexes. Now that I think about it, it reminds me of Korn’s Coming Undone music video…

    Throw in some bold splashes of colour for accents and you could be onto something there. Like a slightly grungy Mirror’s Edge.

  16. pseudonym says:

    Are you going to open-source this, Shamus? It might be a nice reference for other people with the same problem. Also, your knowledge of C and C++ and the way graphics used to work certainly helps. You are one of the few people in the world who could pull this off AND have an interest in doing so. There are probably more people who like both Garry’s Mod and Unity. I’m sure your code would make some people happy if you put it in a public space with a FLOSS license.

    1. ElementalAlchemist says:

      Nobody would ever use something like this in practice. You’d just grab any one of a number of pre-existing I/O scripts for a 3D program like Max, Maya, etc., import the map, then export it as FBX or another format that Unity (or another target engine) will accept.

  17. Decius says:

    Given the difference between a .bsp file and the map file that Unity expects, wouldn’t the task be to write a program that takes the .bsp file and outputs a file Unity can interpret? Yeah, you need to figure out triangle fans and all that, but you can (and must) translate from the old-school “write to memory address” paradigm to the modern “write to abstract memory location” paradigm when you first parse the data from the file: you know the 1997 vertices are in the next so many bytes, so read the chunk of vertices, convert them into the system you’re using, and save them in a file format that Unity can already read.
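    The read-the-chunk-of-vertices step Decius describes might look roughly like this in C. The 12-byte three-float vertex matches the classic Quake-era layout, but the function name and the idea of passing the lump offset in are just a sketch; assumes a little-endian host, like the file itself.

    ```c
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Classic Quake-era BSP vertex: three 32-bit floats, 12 bytes. */
    typedef struct { float x, y, z; } Vertex;

    /* Copy the vertex lump out of the raw file image into real structs,
     * so nothing downstream ever holds a pointer into the file buffer.
     * The header tells you lump_offset and lump_length. */
    size_t read_vertex_lump(const uint8_t *file, size_t lump_offset,
                            size_t lump_length, Vertex *out, size_t max_out)
    {
        size_t count = lump_length / sizeof(Vertex);
        if (count > max_out)
            count = max_out;
        memcpy(out, file + lump_offset, count * sizeof(Vertex));
        return count;
    }
    ```

    From there the `Vertex` array is ordinary data, and which engine or file format it gets written back out to is a separate problem.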

    1. Richard says:

      It’d actually be more difficult.

      Unity has some pretty nice built-in classes for dealing with meshes and materials.

      While it mostly assumes they’re immutable after creation (a pain for the cool procedural stuff), it’s far easier to write a converter from “weird old format” directly into “Unity’s native internal representation” than it would be to try to produce some other format.

      If nothing else, you can instantly see what you did (and thus what mistakes were made). If you try to output another format you’ll end up with the frustration of “Does this look weird because I exported it wrong, or because I misunderstood the original BSP format in some way?”

      1. Decius says:

        You still have the same problem: does this look weird because I translated it to Unity’s native internal representation wrong, because Unity changed it, or because I misunderstood the original format in some way?

        1. Richard says:

          No, you don’t.
          The Unity native representation is very simple and well documented. It doesn’t do any ‘magic tricks’ beyond the stuff all C# objects do.
          The Unity Editor is pretty cool, you can examine the raw mesh data both in-code and in the editor, and even swap out ‘pieces’ (mesh sections, materials) live in the Unity editor itself to check what you did.

          That’s not the case if you export some other 3D format.
          In order to do that you’d need to convert the source file into some intermediate representation of your own, then convert that into the target 3D format you have to import.

          It gets worse:
          The Unity importers all try to ‘fix up’ models to account for the fact that the formats and the Unity native representation have different limitations.

          There are only two standardised 3D model formats – glTF and Collada. Neither of them are simple to understand and both have a lot of surprising edge cases.
          All the other 3D formats are either completely reverse-engineered or have limited public docs – such as this BSP format itself!

          So instead of just two layers of “what went weird” you have at least four layers, most of which are hidden from you.

  18. DasMaddi says:

    Never expected to see much Source-related talk here. Fascinating read!

  19. Ralph says:

    Assuming Issac made the level as well as the compiled BSP, you should have access to the raw map file. In the Quake 1 -> Half-Life days this was a text format that looks a bit like JSON, and it may be a much easier entry point for translating the geometry into something modern.
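    From memory, a worldspawn brush in that text format looks something like this; each line inside the inner braces is three points defining a plane, followed by a texture name, offsets, rotation, and scale (the exact numbers here are illustrative):

    ```
    {
    "classname" "worldspawn"
    {
    ( 0 0 0 ) ( 0 64 0 ) ( 64 0 0 ) GROUND1_6 0 0 0 1.0 1.0
    ( 0 0 16 ) ( 64 0 16 ) ( 0 64 16 ) GROUND1_6 0 0 0 1.0 1.0
    }
    }
    ```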

    However, as this is not the BSP, you will not have access to the BSP tree or the finished lighting calculations. I don’t know if you need those, but I suspect Unity will redo them itself.
