I’m afraid this one got away from me. I wanted to talk about code reuse. So I decided to talk about the atlas texture I’m using. But then I needed to explain what that is and why we need it. And then I got dragged into asides about how I use my atlas and some theory about how texture addressing works.
So my two-paragraph aside is now an entire entry of information that’s not directly connected to the problem of generating 3D levels. I could send this mess back for another round of editing and restructuring, but then I’d have nothing to post this week and you would be sad.
I have made this longer than usual because I have not had time to make it shorter. – Blaise Pascal
I don’t know if this entry will hold up, but I think we can all at least agree that Blaise Pascal is one of the coolest names ever. My anglo-centric take on his first name is that it ought to be pronounced “Blaze”. Blaze Pascal! That’s practically a superhero name.
Anyway, I’m now making this overlong post even longer by complaining about the length. Let’s just Get On With It before I make everything worse.
Every graphics generation has its own bottleneck. At one point in history it was all about rendering as few polygons as possible, because those were calculated by your feeble mid-90s CPU, and that poor thing was already being overworked. Then a few years later all of that work was offloaded to your graphics card (you might remember the days of needing a card that supported “Transform and Lighting” to play a game; that was the point where we dumped a lot of the polygon work onto the GPU), and suddenly “fill rate” was the big concern. Developers stopped being so obsessive about reducing the polygon count of everything and started worrying about how many total pixels the program had to draw.
At the time, you’d hear people talking about how to reduce “overdraw”. Overdraw is when you draw a bunch of pixels (perhaps to render the background) and then cover them up when you draw stuff in the foreground. If you’ve ever seen a Bob Ross painting, then you know he starts by filling the entire canvas with the sky color, then paints over most of it to create a gradient, then paints over 80% of the sky to draw the mountains, then paints over more sky with clouds, then covers up about half the mountains with happy little trees. In the end, he probably paints over enough surface area to cover 3 or 4 canvases. If Bob Ross was a game developer in the aughts, we’d say he had a massive overdraw problem.
My knowledge of these problems kind of trails off around 2010 or so. As time has gone on, the whole system has grown more complex and these days there’s just too much to keep up with if you’re just a hobbyist like me.
However, I have gleaned a bit of wisdom from other developers, GDC talks, and John Carmack sightings. And I know that among the many concerns that devs have to worry about these days, limiting the number of draw calls is still very important.
A draw call is when your graphics engine tells the GPU (the graphics card) something like, “Hey, here’s a big lump of polygons. Draw these polygons in these different locations, using this texture.” This would draw (say) all the instances of “wooden_door_3” in the entire level. A modern GPU is designed around massive parallelism. The polygons and texture are loaded into the card’s memory, and then the card is ready to process them in bulk. One processing unit on the card can be drawing the first door, another can be drawing a door elsewhere in the level, and so on. I don’t know how many lanes we have on the cards these days, but it’s probably a safe-ish guess to say that a modern GPU can draw a dozen doors just as quickly as it can draw one, provided that the developer has everything set up properly so that things can be done in batches.
It takes the same amount of time to bake one cookie as it does to bake a dozen of them. This is kinda the same deal. (It’s the year of Covid-19, so rather than leaving the house for a terrible car analogy, I thought it would be safer to stay inside and try a terrible baking analogy.) The more stuff you can do in a single draw call, the closer you’ll get to utilizing 100% of that precious GPU power.
But what if we’re not efficient about it? Let’s say we don’t put models into batches and we fling crap at the GPU one at a time, in random order. What if we bake our cookies one at a time? What you’ll get is something like this:
We load the door model and door texture into memory. Then one small part of the GPU will draw the door while most of the rest of the card sits idle. Now it’s time to draw (say) a bathtub. So now the entire GPU needs to idle until we get the new models and textures into place. Then once again we use a fraction of the GPU’s power while the rest sits idle. Then we have another pause (these pauses are measured in microseconds or smaller; you can’t “feel” any individual one unless enough of them pile up in a single frame). The difference in performance can be extreme. Poor management of draw calls can make even the most powerful graphics card run poorly, since you’ll always be wasting a majority of its capacity no matter how fast it is. In fact, the better the card, the more capacity you waste when you fail to properly manage draw calls.
Disclaimer: The above description is a massive oversimplification. There are different stages of memory that run at different speeds, different amounts of memory that can help mitigate the cost of switching to a new texture, and a bunch of other hairy details I’m not qualified to explain. And like I said, my knowledge is spotty and out-of-date these days.
Now, Unity is handling most of this for me. It keeps track of models and textures and does its best to do things in large batches. As a Unity developer, you just need to make sure all of those wooden doors draw from the same geometry and texture (when you’re a newbie, it’s easy to accidentally clone geometry instead of sharing it, meaning every door gets its own copy of the geometry instead of everyone sharing a single copy), and you’re all good.
In terms of texture-switching vs. model-switching, texture-switching seems to be the bigger boogeyman. Getting back to our Bob Ross analogy, switching textures is like changing colors on your brush. If you’re cleaning off your brush after every stroke, then the job will take ages. On the other hand, if you do all the blue sky, then all the white mountains, then all the green trees, then you’ll work incredibly quickly. (For this example to fit, we have to assume that your paintbrush doesn’t run out of paint as long as you’re using the same color, and that changing color means throwing away the old brush and driving to the store for a new one.)
So now you’re probably thinking, “Hey Shamus, the engine doesn’t care what the texture looks like. Why don’t you put a bunch of images on the same texture and just use that one texture for everything? Then you’d never have to change textures!”
Yes! That’s a thing. It’s called an atlas texture. If you’re curious what one looks like, here’s a really old Minecraft atlas:
Atlas textures do help a great deal, although they make asset production more complicated.
As I’ve explained before, models are made from triangles, and triangles are made by playing connect-the-dots between vertices. Every vertex in a model has its location in 3D space, which is usually represented by the variables xyz. We also have a set of texture coordinates, which are expressed as uv. These coords say what part of the texture you’re interested in. A uv value of (0.25, 0) means 25% of the way across the image left-to-right, at the very top of the image.
Below, I’m going to continue to think of everything as top-down, because that’s consistent with how Windows screen coordinates, image editors, and written language work. I’ve always been annoyed by the way bottom-up systems break from our normal assumptions about coordinates in 2D space.
A uv value of (0.5, 0.5) means halfway across the image and halfway down, giving you the dead center. If you hand it a u value of 2.75, you’re talking about going all the way across the image and wrapping around to the left, then going all the way across a second time and wrapping around again, and then going 75% of the way across.
This system is what allows you to repeat the same texture several times over a very large polygon. If you couldn’t wrap, then you’d need to create a new polygon every time you wanted to repeat the texture. It would be very tedious and very wasteful.
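That wrapping rule is easy to sketch. Shaders get this for free (it’s essentially the frac() operation), but here’s an illustrative Python version so you can see the arithmetic. This is just for demonstration, not my actual shader code:

```python
import math

def wrap(coord):
    """Wrap a texture coordinate into [0, 1), i.e. 'repeat' addressing."""
    return coord - math.floor(coord)

print(wrap(2.75))   # 0.75: across twice with wrap-arounds, then 75% more
print(wrap(0.5))    # 0.5: already inside the image, unchanged
```

Note that this also handles negative coordinates sensibly: wrap(-0.25) lands at 0.75, which is what you want for seamless tiling in every direction.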
The problem is that if you’re using an atlas texture, that “wrapping around” stops working. Let’s have another look at that Minecraft texture:
Look at the grass texture in the top-left position. If I try to repeat that grass 3 times horizontally, it’s not going to wrap. It’ll just keep going and grab the stone and dirt texture entries.
So what you need here is a special shader. You can feed it normal repeating uv values, and then the shader will constrain their values to the specific sub-image you’re interested in. If it goes off the right side of the sub-image, it’ll wrap to the left edge, and if it goes off the bottom it’ll wrap to the top (and vice-versa; the point is that the texture should behave like an infinite plane that can tile as much as you want in any direction).
Now you just need a way to explain to the graphics hardware WHERE that sub-image is. (I personally call them “cells” because it’s shorter, but I don’t know what the official industry terminology is.)
The Brute-Force Way
So we need to define a square region. We need some way to tell the GPU, “Okay, constrain the standard uv values to region such-and-such of the texture.” We could do this by sending two additional coordinate pairs. One will be the upper-left corner of the cell, and the other will be the lower-right. This means we need an additional four variables, which we can call qr and st.
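As a sketch of the math involved (this is illustrative Python with names I made up, not an official API or my actual shader), the remap the shader would perform looks like this:

```python
import math

def frac(x):
    """Fractional part, like the frac() shader operation."""
    return x - math.floor(x)

def remap_uv(u, v, q, r, s, t):
    """Constrain repeating (u, v) to the cell whose upper-left corner is
    (q, r) and whose lower-right corner is (s, t), in 0..1 texture space."""
    return (q + frac(u) * (s - q),
            r + frac(v) * (t - r))

# A cell covering the top-left quarter of the atlas, tiling 2.75 times:
print(remap_uv(2.75, 0.0, 0.0, 0.0, 0.5, 0.5))  # (0.375, 0.0)
```

The frac() call is what makes the uv values wrap within the cell instead of marching off into the neighboring sub-images.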
This isn’t a terrible setup. Four variables isn’t much in the grand scheme of things. But this is a non-zero cost. Can we get this done in three?
The Slightly Smarter Way
If we agree ahead of time that – as in the case of the Minecraft atlas – all cells need to be the same size, then we can get rid of the variables st. We just hand it the upper-left corner and it will be able to figure out the rest.
The disadvantage of this system is that every cell needs to be the same size. That’s fine if you’re making a Minecraft-style cube world where everything is built on the same 1-meter grid, but it’s a mess if we try to do it in some other genre / art style. The tiny little control panel on the captain’s chair will take the same number of pixels as the giant viewscreen at the front of the bridge. (Sure, you could make the viewscreen out of hundreds of little sub-textures, but that would be an amazing pain in the ass to model. Your art team will probably try to assassinate you if you try.)
We could use a third number for cell size. So qr gives us the cell origin, and s gives you the size.
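Sketched the same way as before (again, illustrative Python with made-up names, not my actual shader), the three-number version is:

```python
import math

def frac(x):
    """Fractional part, like the frac() shader operation."""
    return x - math.floor(x)

def remap_uv3(u, v, q, r, s):
    """Constrain repeating (u, v) to the square cell whose upper-left
    corner is (q, r) and whose width and height are both s."""
    return (q + frac(u) * s, r + frac(v) * s)

# A cell at the top-left, half the texture wide and tall:
print(remap_uv3(2.75, 0.0, 0.0, 0.0, 0.5))  # (0.375, 0.0)
```

Same result as the four-variable approach, but we only had to hand the GPU three extra numbers because cells are always square.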
But can we do even better? Can we get the job done in just two numbers?
To be honest, I have no idea what the “official” way to handle this is. I doubt it looks anything like mine. I imagine the official system is a lot smarter in some way that never occurred to me. This is just what I worked out a few years ago on one of my own projects.
We have two variables: qr. The q defines the overall size of the grid for this particular cell, and r provides both the horizontal and vertical positions on the grid. It sounds weird, but it’s actually simple and great for lazy people.
A q value of 4 tells the shader to pretend that the entire texture is a 4×4 grid of sub-texture cells, which means each cell is 1/q of the texture wide and tall. A value of 2 would mean the shader should think of the whole texture as a 2×2 grid of sub-textures, so each cell is 0.5 wide and tall. A 4 would give us a cell size of 0.25. And so on.
So now we know how big our cell is. To find its origin: Divide r by q, and round it down to the nearest whole number. That’s your row. Now divide r by q again, only this time keep just the remainder. That’s your column.
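Here’s that decoding sketched in Python, following the top-down numbering I’m using in the text (names are my own, for illustration only):

```python
def cell_origin(q, r):
    """Decode the two-number cell address: q is the grid size (q x q cells),
    r is the cell number, counted left-to-right, top-to-bottom."""
    size = 1.0 / q      # each cell is 1/q of the texture wide and tall
    row = r // q        # whole part of r / q
    col = r % q         # remainder of r / q
    return (col * size, row * size, size)   # (u origin, v origin, cell size)

# A 16x16 grid, cell number 83: row 5, column 3.
print(cell_origin(16, 83))  # (0.1875, 0.3125, 0.0625)
```

So from just two numbers the shader can recover the cell’s corner and its size, and from there the wrapping trick works the same as in the other schemes.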
For a simpler way to visualize it, just imagine the cells are numbered left-to-right, top-to-bottom.
Is this still a bit hard to follow for you? Me too. So when I wrote this in 2019 I left myself some comments at the top of my shader file to remind me of how it works:
/*--------------------------------------------------------------------------
This shader is designed to allow many textures to exist in the same
texture image.

This shader depends on two sets of UV coords.

The first set comes from TEXCOORD0, and is the standard texture address.

The second UV comes from TEXCOORD1. The X value defines the size of the
texture grid. So a value of 4 would break the texture into a 4x4 grid.
The Y value is the number of the cell within that grid, numbering
left-to-right, bottom-to-top. A 4x4 grid would be numbered so:

Examples:

TEXCOORD1 = (4,12): SubTexture would be in square #12 of the diagram below.

  *---*---*---*---*
  | 12| 13| 14| 15|
  *---*---*---*---*
  | 8 | 9 | 10| 11|
  *---*---*---*---*
  | 4 | 5 | 6 | 7 |
  *---*---*---*---*
  | 0 | 1 | 2 | 3 |
  *---*---*---*---*

TEXCOORD1 = (2,3): SubTexture is the upper-right quarter of the texture like so:

  *---*---*
  | 2 | 3 |
  *---*---*
  | 0 | 1 |
  *---*---*
--------------------------------------------------------------------------*/
Ah, I see here that I’m using bottom-to-top coords. Sigh. Whatever. That will never not feel upside-down to me, but whatever.
The main reason I love this system is that it’s human-readable. If I’m trying to fix a bug and I find myself needing to examine individual texture coordinates, I want to be able to tell right numbers from wrong ones. If I’m looking at some texture coordinates and see the numbers (0.3125, 0.1875), it’s not immediately clear which cell we’re dealing with. But if I see (16, 83), I can quickly work out that I should look for the cell in row 5, column 3 of a 16×16 grid in my atlas texture. That will tell me what I SHOULD be seeing, which gets me halfway to figuring out where to find the bug that’s screwing up the process.
This also means that I can have cells of mixed sizes on my atlas. As long as they’re powers of 2 in size, a 1024×1024 atlas can support textures anywhere from 512×512 in size, all the way down to dinky little 2×2 textures.
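Since a cell is always 1/q of the texture across, mixing sizes just means using different q values for different cells. A quick sketch of the pixel sizes that works out to on a 1024×1024 atlas:

```python
ATLAS_PIXELS = 1024  # the atlas is 1024x1024

def cell_pixels(q):
    """Pixel size of one cell when the texture is treated as a q x q grid."""
    return ATLAS_PIXELS // q

# Bigger grids mean smaller cells:
for q in (2, 8, 64, 512):
    print(f"q={q}: {cell_pixels(q)}x{cell_pixels(q)} pixels")
```

A q of 2 gives you a 512×512 cell, and a q of 512 gives you one of those dinky 2×2 cells. Sticking to powers of two means cells of different sizes can pack together on the atlas without overlapping or wasting space.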
I don’t have any early versions of my atlas texture. But if you don’t mind a bit of a spoiler, then here’s the one I’m using now in late September:
My texture is actually 1024×1024, but the top half is unused empty space right now so I cropped that out. You can see that we’ve got cells of many different sizes. The weird advertisements near the middle are left over from a bit when I was experimenting with glowing billboards and just needed something quick for testing.
We’ll come back to this next week, and hopefully get back to the main point of the project.