I sort of skipped a bunch of steps at the end of the last entry, because I was in a hurry to get to the point where we could see ships shooting at each other. But now that we’ve made it, let’s back up and talk about how we got here. Specifically, I’d like to respond to this comment…
I really enjoy this kind of content – programming, explaining complex IDE type objects (I’ll probably never try Unity – I’m a back end natural language processing type guy – but it is fun to see someone else struggle with it). Looking at the GIF I’m sad there wasn’t a paragraph or two about how the lasers were added: was that hard? Does Unity have an “Add laser” button? I feel like it might. I also think that each ship must track some sort of hit points value, because I saw one of them get hit (oh to have a discussion about hit boxes, and how to determine if a laser is inside one or not – I remember two decades ago trying to code up hit detection code and there are lots of optimizations in that area) and then flame out and explode. Lots of complexity hiding behind that I suspect!
Unity doesn’t have an “add laser” button, but we can re-use the trick from last time to save ourselves a bit of work.

In Unity, you create game objects, place them in the scene, and move them around to make gameplay happen. Usually a game object will have some geometry attached to it. Like, I could jump over to Blender and whip up a laser bolt model to use. But instead I create a laser object with no geometry and attach a TrailRenderer to it.
A TrailRenderer is what we’re using to give the fighter ships glowing exhaust trails. I just need to set the length to a smaller value and it works as a Star Wars style laser bolt.
Hit Detection
I don’t actually need proper hit detection for these space battles. Since none of these things are player-controlled, I don’t need to play fair. Rather than tracking laser trajectories and testing for collisions, I just roll dice in the background.
When the laser is fired, I roll to see if it is destined to hit. If it’s a miss, then I have the laser fly past the target. If it’s a hit, then the laser travels directly at the target. Properly leading a target takes extra calculations, so I don’t bother. Instead, the laser just flies itself into the target like a homing missile. If it moved slowly, then you’d be able to see the curve. But since the lasers are moving so fast, this brazen cheating is undetectable.
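For anyone who wants a concrete picture of the cheat, here's a minimal sketch in Python. This is not the actual Unity C# code; the 40% hit chance and both function names are made up for illustration:

```python
import math
import random

def fire_laser(hit_chance=0.4, rng=random.random):
    """Roll once at fire time to decide the laser's fate.

    The 40% default is a made-up number, not the game's real odds."""
    return rng() < hit_chance

def steer_toward(pos, target, speed, dt):
    """Move a destined-to-hit laser straight at the target, homing-missile
    style. pos and target are (x, y, z) tuples; returns the new position."""
    dx, dy, dz = (target[0] - pos[0], target[1] - pos[1], target[2] - pos[2])
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= speed * dt:
        # Close enough that this frame's movement lands the hit.
        return target
    # Otherwise take one step along the line to the target.
    scale = speed * dt / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale, pos[2] + dz * scale)
```

Because the steering happens every frame, a destined-to-hit laser traces a curve whenever the target maneuvers, which is exactly the curve you'd see if the lasers moved slowly.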
However, I do still need collision detection!
See, my camera system is very primitive. I click on an object to begin following it, and then I use the controls to zoom / orbit the object. When you click on the window, Unity casts a ray from the camera origin to the mouse location. If I want to be able to select an object, then I need to give it some sort of ability to be hit.
In Unity, an object needs to have a collider if you want to detect interactions with other objects. Things like objects running into each other, objects running into the environment, objects being hit with rays, etc. If you don’t give an object a collider, then it will ghost through everything else in the scene and will never be available as a target for bullets, mouse-clicks, and the like.
There are a few different types of colliders you can use. As the developer, you need to balance speed with accuracy. You can have flawless per-pixel hit detection on every strand of your protagonist’s hair if you don’t care about framerate, or you can have blazing fast and wildly inaccurate collision checks that treat your protagonist like a big cube.
Sphere
This is the simplest collider of them all. You give the sphere a radius and Unity will treat the object like a ball.
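For the curious, the mouse-pick selection described earlier ultimately boils down to a ray-sphere intersection test. Unity's physics engine does this for you, but a back-of-the-napkin version looks something like this Python sketch (an illustration of the math, not Unity's implementation):

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray (origin + t * direction, t >= 0) hits a sphere.

    All points are (x, y, z) tuples; direction need not be normalized."""
    # Vector from the ray origin to the sphere center.
    oc = tuple(c - o for c, o in zip(center, origin))
    # Parameter t of the point on the ray closest to the sphere center.
    d_len2 = sum(d * d for d in direction)
    t = sum(o * d for o, d in zip(oc, direction)) / d_len2
    # Clamp: a sphere behind the camera is not a valid pick.
    t = max(t, 0.0)
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    # Hit if the closest approach is within the sphere's radius.
    dist2 = sum((cl - c) ** 2 for cl, c in zip(closest, center))
    return dist2 <= radius * radius
```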
Cube
A bit more accurate than the sphere in many cases, but very slightly more expensive.
Simplified Model
This is the most common technique. You have one model for the visuals, and another 3D model that’s the same thing with simpler geometry. A great example of this is in Team Fortress 2, where characters are treated as though their limbs are made of boxes for the purposes of hit detection.

This provides a pretty nice tradeoff between speed and accuracy. However, it’s not perfect. In the above image, you can see that everyone effectively has a box over their heads. If you hit anywhere on the box, then it counts as a headshot, even if the bullet would technically miss the head inside.
If this is still too sloppy for you, then there’s only one option left…
Full Model
If you really need absolute perfection, then just use the same geometry for the visuals and collisions. This might be acceptable if the base model is fairly simple, or if you only have a small number of these things in the scene, or if you’re not going to do a lot of collision checks.
This will quickly turn into a disaster if your game features lots of busy models and lots of collision checks. For example, imagine a battle royale shooter between anime-style characters where the figures have lots of elaborate hair, hanging belts, and fluttering capes. Give everyone a machine gun and watch your framerate drop to single digits.
Sure, the game will be unplayable, but it will finally put to rest arguments over hitboxes.[1]

For the purposes of this project, I just need to put a sphere collider over the fighters so they can be clicked on. And since the ships are usually pretty tiny and clicking on fast-moving things is hard, I make the sphere about twice the size of the ship so that clicking on them is easier.
Hitpoints
I started out with a simple system where shots always hit. Each fighter had 4 hitpoints, so ships exploded after 4 hits. However, this made fights incredibly mechanical. Everyone fired at the same rate, so the resulting dogfight was very rhythmic. Four shots, then an explosion.
pew-pew-pew-pew-BOOM!
pew-pew-pew-pew-BOOM!
pew-pew-pew-pew-BOOM!
And so on. I tried adding a random to-hit chance, but the underlying rhythm of the lasers was still there. I made sure everyone’s lasers were out of phase with each other. That scattered the shots, but the ships still blew up very regularly. It just didn’t feel like the chaos of a real dogfight.
So what I did was assign each ship a random armor class of 18 or less. Ships roll 3d6 to shoot, and they have to roll higher than the AC of the ship they’re attacking in order to hit. This finally broke up the action and made things feel organic.
Small mistake: the highest AC is 18, and you have to roll higher than the AC to score a hit. So once in a while I'd get a fight with an AC 18 fighter in it, and that fighter would be an untouchable god. Very high AC values in general were bad, because they guaranteed the last two fighters would take forever to finish each other off. Capping AC at ~15 or so kept the fight feeling organic without turning the ending into a slog.
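You can see the problem by enumerating all 216 possible 3d6 rolls. A quick Python sketch (not the game's actual code):

```python
from itertools import product

def hit_chance(ac):
    """Probability that 3d6 rolls strictly higher than the target's AC."""
    totals = [sum(dice) for dice in product(range(1, 7), repeat=3)]
    return sum(total > ac for total in totals) / len(totals)

# An AC 18 ship can never be hit, because 3d6 tops out at 18:
#   hit_chance(18) == 0.0
# Even AC 15 only gets hit on 10 of the 216 outcomes (about 4.6%),
# while a mid-range AC 10 ship is hit half the time.
```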
There’s one final problem I want to talk about…
Iteration Friction
When you’re making some form of content, you often have to deal with iteration friction. Maybe you’re doing a mural and you have to wait for a layer of paint to dry before you can appraise the work you just did. Maybe you’re making a videogame level and you need to launch the game / load the level to see your changes. Maybe you’re editing a video and your editing software lags and stutters on the 4K footage, so you have to render a section of the video to see if the audio is in sync and the cuts work. Maybe you’re writing a book and you need to get feedback from your editor to know if your revisions are any good.
Friction sucks, and it’s a major reason why giving accurate time estimates on a project is such a nightmare. Five or ten seconds might sound small, but those lost moments add up over the long haul. These intervals are particularly bad: the chunks of time are large enough to break your concentration, but not big enough to accomplish something else with.
Moments of friction can lead to you getting restless, picking up your phone, and suddenly this 15 second wait turns into a 15 minute distraction on social media.
Friction tends to increase as a project matures. When you’ve just started a new level, you can rebuild the lights and shadows instantly. Once you’ve put a couple of days into it, that same task might take over a minute. The more complicated the level gets, the longer it takes to recalculate the lights and the longer it takes to load it in-game.
I imagine this is the reason so many games have your character crawl through a narrow gap that breaks line-of-sight.[2] The level can be split into two smaller halves, and then those parts can be stapled together at the end. This saves the level designers from needing to work on the entire area as a single project.

Iteration friction is a thing in coding, too. Your typical compiler can compile a single source file in a fraction of a second. But the project grows over time. One source file becomes a dozen, and then two dozen, and then fifty, and then a hundred. Pretty soon it takes fifteen seconds to compile small changes. If it takes you an hour to chase down a fiddly crash bug, you’ll probably spend ten minutes reading and writing code, and the rest of the hour will be lost to compile times and loading screens.
Unity is actually pretty good about friction under normal conditions. Your project is going to be written in C#, which is a high-ish level language[3] with a bunch of fancy tools wrapped around it. This sort of thing is prone to debilitating levels of friction. But I would say that Unity fares pretty well for what it offers. It’s going to be higher friction than a project written in vanilla C, but not by much.
However, there’s something wrong with Unity’s HDRP. I don’t know why, but this project has a baseline 10 seconds of friction, right out of the gate. If I change ANYTHING – a source file, a texture map, a model – then when I jump back to Unity it will stall for ten seconds before I can do anything. That’s really sucking the joy out of something that’s supposed to be a no-stakes, low-effort project.
On a whim, I make a duplicate project using Unity’s default render path. With the same code and the same assets, this new project has about one second of friction instead of ten.

That doesn’t make a lot of sense. Yes, I’m sure the HDRP is more complex under the hood. But why does it take so long to integrate simple changes? Even if all I do is rename a single variable, I still have to wait ten seconds instead of one. What’s it doing in those nine extra seconds? Recompiling every shader for every possible render path for no reason? Is this a bug?
Anyway, I think I’m done fiddling with this project for now. I have a list of additional things I want to try,[4] but I’m currently posting 5 days a week and I can’t scrape together enough hours for programming with that workload. I hope to get some lead time on my other writing and then pick this up again later.
Footnotes:
[1] This is a lie. People will argue about hitboxes no matter what you do.
[2] Jedi Fallen Order is the most recent example of this that I’m familiar with.
[3] Reminder that high / low languages are backwards from expectations. High level languages are easy to read. Low-level languages are terse and inscrutable.
[4] I still want to take another swing at making the spaceship again.
Interesting. I’ve always assumed that developers did this kind of thing for draw-distance or memory-management reasons. It never occurred to me that it might also be for design workflow reasons.
Those kinds of things certainly used to be used for memory reasons (draw distance is a memory-management issue, just both a RAM and a VRAM issue, as well as a CPU/GPU one), along with the ubiquitous loading elevators and cutscene doors. These days, gaming PCs tend to be so beefy that it’s not much of a problem anymore, but given the increased requirements of a game during development (uncompressed textures; resources split between the game, IDE, and compiler; etc.), as well as the need to load a level rapidly so you can edit, reload, and test without losing momentum, it’s reasonable to assume that a game dev would keep using old memory-reducing techniques to speed up the code/level editing cycle. In a way, it’s still about memory and CPU/GPU management, but specific to the dev’s workflow instead of final gameplay.
I’m reminded of working in the NWN Toolset and a lot of changes (or just even opening the module) can take a significant amount of time, despite it now being a 20-year-old game! There were certainly tricks to certain processes to improve the workflow. One example was just deleting an asset, which could take 30-60 seconds in the toolset, so people suggested finding the relevant file(s) while the toolset was open and deleting them there, which would be a lot more instantaneous.
The NWN toolset was a fascinating thing. I spent so much time messing around in it, mostly just trying to figure out how stuff worked. “Characters without infravision or low-light vision should suffer to-hit penalties in dark areas, but they don’t! I must write a script to fix it! But what if they’re near a light source or carrying a light source? Argh!” Alas, I don’t think I ever produced a module with more than three rooms and I certainly never completed anything that I was willing to put out on the internet.
It can be done, but because there are so many variables like you said (have an item equipped, have a light spell, have infravision), all with different OnEvent scripts (OnPlayerEquipItem, OnPlayerUnequipItem, etc.), you’d have to create an entire subsystem whose entire purpose is to decide whether a given PC is suffering a -4 attack penalty. Personally, I always felt the “Am I able to see shit?” impact of having infravision or a light source was plenty of incentive to make PCs want lighting.
In a dogfight between two opposed swarms of fighters, I would expect shots that miss their target to frequently hit something behind the target instead. How do you avoid having that happen to your lasers?
I’ll hazard a guess that with a D&D-style to-hit vs. AC system, the shots are only tested against the target and automatically pass harmlessly by anyone else. It might not be realistic enough for a space battle game, but good enough for a model/animation demo.
I’m not sure about realistic. A real space battle probably has so much empty space that hitting something behind your target should be a weird fluke of luck anyway. A battle around the Earth could probably accommodate several billion ships comfortably, for example.
Given the size of the combat space (large) relative to the ships and lasers (small), I’d expect unintended hits to be incredibly rare in practice. Rare enough that simply ignoring them is probably a reasonable solution.
This. The angular size of another fighter that’s even a small distance away from you is incredibly tiny. A stray shot hitting a non-target is basically like drawing a line in a random direction and happening to intersect something by chance.
I’m extrapolating from this XKCD “What If”
https://what-if.xkcd.com/109/
From that article, the odds of hitting the sun (which covers about 0.2 square degrees of sky) are about 1 in 180,000.
The sun is about 850,000 miles wide, and 93 million miles away, so about 100 times as far away as it is wide.
100x as far away as its width is nothing for a spaceship. If we assume a space fighter is 100 feet long (which I think is generous), then it reaches sun-like angular area when it’s about 2 miles away. 2 miles is a rounding error when we start talking about space-level distances.
And again – we’re not talking distance to the target. We’re talking distance to the non-targeted ship BEHIND the target getting hit by a stray shot.
The odds of a real stray shot actually hitting anything are so vanishingly small as to be almost impossible.
Now, the odds of hitting the mothership are another story.
That is why it’s perfectly acceptable to just “eyeball it”.
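The angular-size arithmetic in this thread checks out; here it is spelled out in Python, using round published figures for the sun and the commenter's hypothetical 100-foot fighter:

```python
# Rough angular-size check. The sun is about 864,000 miles across
# and about 93,000,000 miles away.
sun_diameter_mi = 864_000
sun_distance_mi = 93_000_000
ratio = sun_distance_mi / sun_diameter_mi       # roughly 108x as far as it is wide

# A hypothetical 100-foot fighter subtends the same angle when it is
# the same multiple of its own length away:
fighter_length_ft = 100
same_angular_size_ft = fighter_length_ft * ratio
same_angular_size_mi = same_angular_size_ft / 5280   # roughly 2 miles
```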
That makes sense for areas with TONS of ammunition sent down range. It’s the entire idea of enfilade fire.
I don’t think it works in space with fighters going super fast, possibly even at fractions of the speed of light, since a laser is small. Millimeters thick? Even if it is yards thick, that’s still tiny compared to space, which is named after the thing it is full of. So in a fighter simulator, where you’re shooting at objects the equivalent of multiple kilometers away, it’s probably not worth tracking shots that far for ‘maybe I hit something’ reasons.
But it’s an interesting thought exercise. What sort of situations WOULD justify this sort of thing? Would massive capital ships with super-mega weapons be worth tracking? If so, your mega cap ships would probably spread themselves out specifically to avoid having any kind of fire be able to target more than one at a time. Some of the space simulator games I watched (Freespace? Or the other space game?) have examples of this, where the fighters are absolutely dwarfed by capital ships, but the cap ships are so spread out that even flying at maximum speed in your fighter takes MINUTES of real time to get close to another ship. And if you manually traverse areas instead of auto-piloting/cutscene-ing out the travel time, you’re looking at an hour of just flying through space until you hit the action.
> If you hit anywhere on the box, then it counts as a headshot, even if the bullet would technically miss the head inside.
Notoriously, disguised spies keep their original hitbox. So headshotting them may require shooting a foot or two behind their visible head.
> High level languages are easy to read
Clearly you’ve never programmed in Haskell :)
> Low-level languages are terse and inscrutable.
Well of course. Low-level languages are too tied to the hardware. To get any meaningful abstraction requires a high-level language like C or Fortran.
Huh, I was gonna do a “C is high-level” shout-out, but seems I was preempted. Er… something something high-level assembly?
I find that when I write binaries directly using a hex editor that I can circumvent some of the inefficiencies of assembly. Sure that is still quite high level, given that I do not instruct the ssd controller directly to write the bytes to disk, but it is low level enough for me.
The real low-level programming is putting transistors into position by hand.
The real low-level programming is using a magnetized needle and a steady hand.
The real low level programming is coming up with the precise value of various constants when you create the universe such that the correct bits are found naturally occurring on the magnetic media.
The real low level programming is redefining the concepts of correct and incorrect, so that all possible arrangements of bits are exactly what you want.
Technically building something with transistors isn’t programming because you are making hardware at that point, not software.
Nevertheless it is still quite cool to make something useful using only logic gates.
Considering how much magic is going on in modern CPUs, that’s completely accurate.
Good old reliable Fortran, for when you just want to do a bunch of math. I don’t know about anyone else, but I’ve always found Fortran code fairly readable, provided it wasn’t some old Fortran grognard’s hideous, uncommented, and deliberately obfuscated spaghetti code. Not too sure about these Johnny-come-lately standards like Fortran95 though. Those non-primitive, user-defined data types seem mighty suspicious to me.
Maybe it’s because my only experience is with grognard-spaghetti code, but holy shit I hate Fortran’s lack of readability.
I’ve spent the last 2 weeks troubleshooting uninitialized variables and un-debuggable call stacks, because even the compiler is like “what? Surely you don’t expect me to figure out this bullshit…”
A funny bug that was in TF2 for years was that Engineers were missing the box portion of their hitbox that covered their pelvic region. So while it was highly unlikely to come up in game, it was technically possible to shoot right through an Engi’s lower abdomen.
You can see that in Shamus’s image. (6th, 2nd row)
So you can!
Wait, that’s it? I thought this project was about making greebles that tell a story? Not that I mind programming posts, but man, this went off the rails the second it began and just never found its way back.
Yeah, I’m also kinda disappointed that we never got to that part. I think that carrier and fighters were a great start :/
But I guess we should be grateful for what we got – this was supposed to be a quick, 30-minute diversion. As far as I remember, it wasn’t even supposed to be a blog post…
It’s all right. Someday Shamus can dust off all these little projects and jam them together to make a procedurally generated city with space battles above where all the spaceships have progened immersive sim interiors too. :D
Didn’t he make a Unity remake of Procedural City? Just drop those space fighters in there. :)
My favourite part of Pixel City Redux is that it ended with a city-wide rave. And since we’ve heard nothing about it since, I choose to believe the city raves every night into eternity.
Man, that procgen immersive sim project was getting so interesting, I wish he would continue that one!
My 5 year old son sat down to read the article with me, and was impressed with the round glowing bit in the spaceship.
I call it the Croft Shimmy
Oh, so that’s what it’s called… “Iteration friction” is the bane of my existence. Just switching focus from Visual Studio to Unity takes a few seconds, and if I changed a file, I’m in for anything from an additional 10 seconds to a minute, depending on hell-knows-what (it surely isn’t compilation time – that’s pretty fast in C#, but Unity also does a lot of other things whenever a script is compiled). I can disable automatic refresh to fix this, but then I will forget to reload scripts manually and spend 30 minutes wondering why the game isn’t behaving like it’s supposed to after my changes…
Worse still are source control updates. Our game is BIG for Unity. Like, at the “this engine wasn’t designed to handle so much stuff” limit. We have a lot of assets, and a lot more get added every day. So import times become really bothersome. And don’t even ask me about changing a platform… I need to be able to freely switch between PC, PS4, and Xbox One versions, and I need 3 different PCs to do it, because switching between platforms can take up to several days, and I can’t keep 3 copies of the repository on one PC, because the repo takes up about 700GB and I’m not willing to shell out for a 4TB SSD (and the project is basically unusable on HDD).
“This is a lie. People will argue about hitboxes no matter what you do.”
Like, “Hey! How come my character got headshotted when you CLEARLY shot his Super Saiyan-style hair! It’s just HAIR! The bullet should have passed right through it!” ;)
And yet, even the “standard” Unity render pipeline has a space horizon.
I’m surprised you left out AABBs from your collision list. Also, I feel like in most cases Box and Sphere collision shapes should be swapped
AABB: Axis-Aligned Bounding Box, meaning the width/height/depth can be changed but it can’t rotate and stays aligned to the grid. Fastest because it’s just six greater-than/less-than checks (two per axis in 3D).
Box: 6 vector subtractions and 6 dot products, which are all pretty quick and just basic addition/multiplication. This is assuming the box normals are already normalized, and they usually are, but if not then it’s slower than sphere due to the 6 square roots involved in the vector normalization.
Sphere: While usually just 1 or 2 lines of code (distance formula) combined with an if(distance < radius) check, getting the distance requires calculating a square root which is a performance drain right up there with trig functions like sin() and cos().
Simplified Model: The slowest, since even though it’s simplified we still need to basically do vector math and collision checks for every single triangle in the collision mesh. The higher resolution the collision mesh, the longer the collision check takes.
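For reference, the AABB test from the top of this list is about as cheap as collision checks get. A minimal Python sketch, with boxes given as min-corner and max-corner tuples (hypothetical function name):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned box overlap test: two comparisons per axis, six in 3D.

    Each box is described by its minimum and maximum corners, e.g.
    min_a = (x0, y0, z0), max_a = (x1, y1, z1)."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))
```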
You don’t have to do a square root with sphere checks. It’s easier to take the distance squared value, and square the radius instead when doing the comparison. It’s mathematically the same assuming your distance and radius are both positive values and much quicker to calculate.
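The squared-distance trick looks like this in practice; a small Python sketch (hypothetical function name):

```python
def spheres_touch(center_a, radius_a, center_b, radius_b):
    """Sphere-vs-sphere check with no square root: compare the squared
    distance between centers against the squared sum of the radii.
    Centers are (x, y, z) tuples; radii must be non-negative."""
    dist2 = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    reach = radius_a + radius_b
    return dist2 <= reach * reach
```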