Project Bug Hunt #4: Atlas

By Shamus Posted Tuesday Sep 29, 2020

Filed under: Programming | 49 comments

I’m afraid this one got away from me. I wanted to talk about code reuse. So I decided to talk about the atlas texture I’m using. But then I needed to explain what that is and why we need it. And then I got dragged into asides about how I use my atlas and some theory about how texture addressing works.  

So my two-paragraph aside is now an entire entry of information that’s not directly connected to the problem of generating 3D levels. I could send this mess back for another round of editing and restructuring, but then I’d have nothing to post this week and you would be sad.

I have made this longer than usual because I have not had time to make it shorter. – Blaise Pascal

I don’t know if this entry will hold up, but I think we can all at least agree that Blaise Pascal is one of the coolest names ever. My anglo-centric take on his first name is that it ought to be pronounced “Blaze”. Blaze Pascal! That’s practically a superhero name.

Anyway, I’m now making this overlong post even longer by complaining about the length. Let’s just Get On With It before I make everything worse.

Overdraw

Happy little pixels.

Every graphics generation has its own bottleneck. At one point in history it was all about rendering as few polygons as possible, because those were calculated by your feeble mid-90s CPU, and that poor thing was already being overworked. Then a few years later all of that work was offloaded to your graphics card[1], and suddenly “fill rate” was the big concern. Developers stopped being so obsessive about reducing the polygon count of everything and started worrying about how many total pixels the program had to draw.

At the time, you’d hear people talking about how to reduce “overdraw”. Overdraw is when you draw a bunch of pixels (perhaps to render the background) and then cover them up when you draw stuff in the foreground. If you’ve ever seen a Bob Ross painting, then you know he starts by filling the entire canvas with the sky color, then paints over most of it to create a gradient, then paints over 80% of the sky to draw the mountains, then paints over more sky with clouds, then covers up about half the mountains with happy little trees. In the end, he probably paints over enough surface area to cover 3 or 4 canvases. If Bob Ross was a game developer in the aughts, we’d say he had a massive overdraw problem.

My knowledge of these problems kind of trails off around 2010 or so. As time has gone on, the whole system has grown more complex and these days there’s just too much to keep up with if you’re just a hobbyist like me. 

However, I have gleaned a bit of wisdom from other developers, GDC talks, and John Carmack sightings. And I know that among the many concerns that devs have to worry about these days, limiting the number of draw calls is still very important.

Draw Calls

A draw call is when your graphics engine tells the GPU[2], “Hey, here’s a big lump of polygons. Draw these polygons in these different locations, using this texture.” This would draw (say) all the instances of “wooden_door_3” in the entire level. A modern GPU is designed around massive parallelism. The polygons and texture are loaded into the card’s memory, and then the card is ready to process them in bulk. One processing unit on the card can be drawing the first door, another can be drawing a second door elsewhere in the level, and so on. I don’t know how many lanes we have on the cards these days, but it’s probably a safe-ish guess to say that a modern GPU can draw a dozen doors just as quickly as it draws one, provided that the developer has everything set up properly so that things can be done in batches.

It takes the same amount of time to bake one cookie as it does to bake a dozen of them. This is kinda the same deal[3]. The more stuff you can do in a single draw call, the closer you’ll get to utilizing 100% of that precious GPU power.

But what if we’re not efficient about it? Let’s say we don’t put models into batches and we fling crap at the GPU one at a time, in random order. What if we bake our cookies one at a time? What you’ll get is something like this:

We load the door model and door texture into memory. Then one small part of the GPU will draw the door while most of the rest of the card sits idle. Now it’s time to draw (say) a bathtub. So now the entire GPU needs to idle until we get the new models and textures into place. Then once again we use a fraction of the GPU’s power while the rest sits idle. Then we have another pause[4]. The difference in performance can be extreme. Poor management of draw calls can make even the most powerful graphics card run poorly, since you’ll always be wasting a majority of its capacity no matter how fast it is. In fact, the better the card, the more raw power you waste when you fail to properly manage draw calls.
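If you like, here’s that difference in miniature. This is a C sketch with made-up stand-in functions (no real graphics API looks exactly like this); it just counts how much state-switching each approach generates:

#include <stdio.h>

/* Hypothetical stand-ins for driver calls. Every "bind" is a point
   where the GPU may stall while new data is moved into place. */
static int binds = 0, draw_calls = 0;
static void bind_texture(int id) { (void)id; binds++; }
static void bind_mesh(int id)    { (void)id; binds++; }
static void draw(int instances)  { (void)instances; draw_calls++; }

int main(void)
{
    /* Naive: 100 doors and 100 bathtubs, flung at the GPU one at a
       time, in alternating order. */
    for (int i = 0; i < 200; i++) {
        bind_texture(i % 2);
        bind_mesh(i % 2);
        draw(1);
    }
    printf("naive:   %d binds, %d draw calls\n", binds, draw_calls);

    binds = draw_calls = 0;

    /* Batched: switch state once per object type, then draw every
       instance of that type in a single call. */
    bind_texture(0); bind_mesh(0); draw(100); /* all the doors    */
    bind_texture(1); bind_mesh(1); draw(100); /* all the bathtubs */
    printf("batched: %d binds, %d draw calls\n", binds, draw_calls);
    return 0;
}

The naive loop racks up 400 binds and 200 draw calls for the same scene the batched version handles with 4 and 2.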

Disclaimer: The above description is a massive oversimplification. There are different stages of memory that run at different speeds, different amounts of memory that can help mitigate the cost of switching to a new texture, and a bunch of other hairy details I’m not qualified to explain. And like I said, my knowledge is spotty and out-of-date these days. 

Now, Unity is handling most of this for me. It keeps track of models and textures and does its best to do things in large batches. As a Unity developer, you just need to make sure all of those wooden doors draw from the same geometry[5] and texture, and you’re all good.

Atlas Texture

In terms of texture-switching vs. model-switching, texture-switching seems to be the bigger boogeyman. Getting back to our Bob Ross analogy, switching textures is like changing colors on your brush. If you’re cleaning off your brush after every stroke, then the job will take ages. On the other hand, if you do all the blue sky, then all the white mountains, then all the green trees, then you’ll work incredibly quickly. (For this example to fit, we have to assume that your paintbrush doesn’t run out of paint as long as you’re using the same color, and that changing color means throwing away the old brush and driving to the store for a new one.)

So now you’re probably thinking, “Hey Shamus, the engine doesn’t care what the texture looks like. Why don’t you put a bunch of images on the same texture and just use that one texture for everything? Then you’d never have to change textures!”

Yes! That’s a thing. It’s called an atlas texture. If you’re curious what one looks like, here’s a really old Minecraft atlas:

This was just the first result I found on Google. Looking closer, I think this is from an add-on texture pack. (Maybe Painterly Pack? I don't know. It's been ages.) Either way, you get the idea.

Atlas textures do help a great deal, although they make asset production more complicated. 

As I’ve explained before, models are made from triangles, and triangles are made by playing connect-the-dots between vertices. Every vertex in a model has its location in 3D space, which is usually represented by the variables x, y, z. We also have a set of texture coordinates, which are expressed as u, v. These coords say what part of the texture you’re interested in. A uv value of (0.25, 0) means 25% of the way across the image left-to-right, at the very top of the image.

Annoying complication: Different systems reference the vertical in different ways. For some engines / APIs, 0 is the top and 1 is the bottom, while others have it the other way around. If I remember correctly[6], OpenGL is bottom-up and DirectX is top-down. Or maybe I’m thinking of an engine? Whatever. Over the years I’ve lost track of which way is “up”, so I normally just flip vertical coordinates until things look right.

Below, I’m going to continue to think of everything as top-down, because that’s consistent with how Windows screen coordinates, image editors, and written language work. I’ve always been annoyed by the way bottom-up systems break from our normal assumptions about coordinates in 2D space.
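For what it’s worth, converting between the two conventions is a single subtraction. A trivial sketch in C:

/* Flip a normalized vertical texture coordinate between top-down and
   bottom-up conventions. If a texture looks right except for being
   mirrored vertically, this is usually the entire fix. */
float flip_v(float v) { return 1.0f - v; }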

I wonder if the texture for Atlas is stored in an atlas texture?

A uv value of (0.5, 0.5) means halfway across the image and halfway down, giving you the dead center. If you hand it a u value of 2.75, you’re talking about going all the way across the image and wrapping around to the left, going all the way across a second time and wrapping around again, and then going 75% of the way across.

This system is what allows you to repeat the same texture several times over a very large polygon. If you couldn’t wrap, then you’d need to create a new polygon every time you wanted to repeat the texture. It would be very tedious and very wasteful.
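Under the hood, the wrapping is just “throw away the whole-number part and keep the fraction”. Shader languages have this built in as frac() or fract(); here’s the same rule as a C function:

#include <math.h>

/* Repeat-mode texture addressing: keep only the fractional part, so
   u = 2.75 samples the same spot as u = 0.75. Using floorf() instead
   of plain truncation makes negative coordinates wrap correctly too:
   wrap(-0.25) = 0.75. */
float wrap(float u)
{
    return u - floorf(u);
}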

The problem is that if you’re using an atlas texture, that “wrapping around” stops working. Let’s have another look at that Minecraft texture:

This atlas is probably a decade old by now. I wonder what the Minecraft atlas looks like these days?

Look at the grass texture in the top-left position. If I try to repeat that grass 3 times horizontally, it’s not going to wrap. It’ll just keep going and grab the stone and dirt texture entries.

So what you need here is a special shader. You can feed it normal repeating uv values, and the shader will constrain those values to the specific sub-image you’re interested in. If the coordinate goes off the right side of the sub-image, it’ll wrap around to the left edge, and if it goes off the bottom it’ll wrap to the top[7].

Now you just need a way to explain to the graphics hardware WHERE that sub-image is. (I personally call them “cells” because it’s shorter, but I don’t know what the official industry terminology is.)

The Brute-Force Way

So we need to define a square region. We need some way to tell the GPU, “Okay, constrain the standard uv values to region such-and-such of the texture.” We could do this by sending two additional coordinate pairs. One will be the upper-left corner of the cell, and the other will be the lower-right. This means we need an additional four variables, which we can call q, r and s, t.
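Here’s that idea as a C sketch (my illustration, not code from any real engine). We wrap the incoming coordinate as usual, then squeeze it into the rectangle defined by the two corners:

#include <math.h>

/* Constrain an ordinary repeating uv coordinate to the cell whose
   upper-left corner is (q, r) and lower-right corner is (s, t),
   all in 0..1 texture space. */
void constrain_uv(float u, float v,
                  float q, float r,   /* upper-left corner  */
                  float s, float t,   /* lower-right corner */
                  float *out_u, float *out_v)
{
    *out_u = q + (u - floorf(u)) * (s - q); /* wrap, then scale + offset */
    *out_v = r + (v - floorf(v)) * (t - r);
}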

This isn’t a terrible setup. Four variables isn’t much in the grand scheme of things. But this is a non-zero cost. Can we get this done in three?

The Slightly Smarter Way

If we agree ahead of time that – as in the case of the Minecraft atlas – all cells need to be the same size, then we can get rid of the variables s and t. We just hand it the upper-left corner and it can figure out the rest.

The disadvantage of this system is that every cell needs to be the same size. That’s fine if you’re making a Minecraft-style cube world where everything is built on the same 1-meter grid, but it’s a mess if we try to do it in some other genre / art style. The tiny little control panel on the captain’s chair will take the same number of pixels as the giant viewscreen at the front of the bridge[8].

We could use a third number for cell size. So q, r gives us the cell origin, and s gives us the size.
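The same sketch, trimmed down to three values:

#include <math.h>

/* As above, but the cell is described by its origin (q, r) plus a
   single size s, since cells are square. One less value per vertex. */
void constrain_uv3(float u, float v, float q, float r, float s,
                   float *out_u, float *out_v)
{
    *out_u = q + (u - floorf(u)) * s;
    *out_v = r + (v - floorf(v)) * s;
}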

But can we do even better? Can we get the job done in just two numbers?

My Way

To be honest, I have no idea what the “official” way to handle this is. I doubt it looks anything like mine. I imagine the official system is a lot smarter in some way that never occurred to me. This is just what I worked out a few years ago on one of my own projects.

We have two variables: q and r. The q defines the overall size of the grid for this particular cell, and r provides both the horizontal and vertical positions on the grid. It sounds weird, but it’s actually simple and great for lazy people.

A q value of 4 tells the shader to pretend that the entire texture is a 4×4 grid of sub-texture cells. That means each cell is 1/q of the texture in width and height. A value of 2 would mean the shader should think of the whole texture as a 2×2 grid of sub-textures, so each cell is 0.5 wide and tall. A 4 gives us a cell size of 0.25. And so on.

So now we know how big our cell is. To find its origin: Divide r by q, and round the result down to the nearest whole number. That’s your row. Now divide r by q again, only this time keep just the remainder. That’s your column.

For a simpler way to visualize it, just imagine the cells are numbered left-to-right, top-to-bottom.

Is this still a bit hard to follow for you? Me too. So when I wrote this in 2019 I left myself some comments at the top of my shader file to remind me of how it works:

/*--------------------------------------------------------------------------
This shader is designed to allow many textures to exist in the same 
texture image. This shader depends on two sets of UV coords. The 
first set comes from TEXCOORD0, and is the standard texture address.
 
The second UV comes from TEXCOORD1. The X value defines the size of 
the texture grid. So a value of 4 would break the texture into a 4x4 
grid. The Y value is the number of the cell within that grid, numbering
left-to-right, bottom-to-top. A 4x4 grid would be numbered so:
 
Examples:
 
TEXCOORD1 = (4,12): SubTexture would be in square #12 of the diagram below.
 
*---*---*---*---*
| 12| 13| 14| 15|
*---*---*---*---*
| 8 | 9 | 10| 11|
*---*---*---*---*
| 4 | 5 | 6 | 7 |
*---*---*---*---*
| 0 | 1 | 2 | 3 |
*---*---*---*---*
 
TEXCOORD1 = (2,3): SubTexture is the upper-right quarter of the texture like so:
 
*---*---*
| 2 | 3 |
*---*---*
| 0 | 1 |
*---*---*
 
--------------------------------------------------------------------------*/

Ah, I see here that I’m using bottom-to-top coords. Sigh. That will never not feel upside-down to me, but whatever.

The main reason I love this system is that it’s human-readable. If I’m trying to fix a bug and I find myself needing to examine individual texture coordinates, I want to be able to tell right numbers from wrong ones. If I’m looking at some texture coordinates and see the numbers (0.3125, 0.1875), it’s not immediately clear which cell we’re dealing with. But if I see (16, 83), I can quickly work out that I should look for the cell in row 5, column 3 of a 16×16 grid in my atlas texture. That will tell me what I SHOULD be seeing, which gets me halfway to figuring out where to find the bug that’s screwing up the process.
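If it helps, here’s the whole decode as a standalone C program. This is my reconstruction of the math described above, not the actual shader code, and it uses the bottom-to-top numbering from the comment block:

#include <stdio.h>

/* Decode the (q, r) scheme: q is the grid size, r is the cell number,
   counted left-to-right and bottom-to-top. Produces the cell's origin
   and size in uv space. */
static void decode_cell(int q, int r,
                        float *origin_u, float *origin_v, float *size)
{
    int row = r / q;          /* divide and round down: the row      */
    int col = r % q;          /* keep the remainder: the column      */
    *size = 1.0f / (float)q;  /* a q x q grid makes cells 1/q across */
    *origin_u = col * *size;
    *origin_v = row * *size;  /* bottom-up, matching the diagram     */
}

int main(void)
{
    float u, v, s;

    decode_cell(4, 12, &u, &v, &s);  /* the (4, 12) example above */
    printf("(4,12):  origin (%.2f, %.2f), size %.2f\n", u, v, s);
    /* prints origin (0.00, 0.75), size 0.25: the top-left cell */

    decode_cell(16, 83, &u, &v, &s); /* row 5, column 3 of a 16x16 grid */
    printf("(16,83): origin (%.4f, %.4f), size %.4f\n", u, v, s);
    return 0;
}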

This also means that I can have cells of mixed sizes on my atlas. As long as they’re powers of 2, a 1024×1024 atlas can support textures anywhere from 512×512 all the way down to dinky little 2×2 textures.

My Atlas

I don’t have any early versions of my atlas texture. But if you don’t mind a bit of a spoiler, then here’s the one I’m using now in late September:

More than half of this is old junk from the early stages of the project that I haven't gotten around to deleting yet.

My texture is actually 1024×1024, but the top half is unused empty space right now so I cropped that out. You can see that we’ve got cells of many different sizes. The weird advertisements near the middle are left over from a bit when I was experimenting with glowing billboards and just needed something quick for testing.

We’ll come back to this next week, and hopefully get back to the main point of the project.

 

Footnotes:

[1] You might remember the times when you’d hear about needing a card that supported “Transform and Lighting” to play a game. That was the point where we dumped a lot of the polygon work onto the GPU.

[2] The graphics card.

[3] It’s the year of Covid-19, so rather than leaving the house for a terrible car analogy, I thought it would be safer to stay inside and try a terrible baking analogy.

[4] These pauses are measured in microseconds or smaller. These individual moments aren’t something you can “feel”, unless you get enough of them happening in a single frame.

[5] When you’re a newbie, it’s easy to accidentally clone geometry instead of sharing it, meaning every door will get a copy of the geometry instead of everyone sharing a single copy.

[6] There is a very low probability of this being true.

[7] And vice-versa. The point is that the texture should behave like an infinite plane that can tile as much as you want in any direction.

[8] Sure, you could make the viewscreen out of hundreds of little sub-textures, but that would be an amazing pain in the ass to model. Your art team will probably try to assassinate you if you try.




49 thoughts on “Project Bug Hunt #4: Atlas”

  1. ShivanHunter says:

    I for one like these general-info posts. Your thoughts on tech issues are informative and fun to pick through (and as a math nerd, I wholeheartedly agree that Pascal is awesome!)

    Image coordinate conventions are odd – I actually think in bottom-up rather than top-down, since that’s how the Cartesian plane is usually drawn. I guess that’s how you get so many competing conventions. (The thing that really gets me is how 3D programs can’t decide which axis is “up” or even “forward”.)

    Funny thing about Minecraft: Modern Minecraft actually stores all the block and item textures separately, and (not sure on the details) combines them into a procedural atlas on game start. Cool way to do it for mod support, I guess! And some UI stuff is still stored as a proper atlas.

    Heh I see that Mass Effect 1 box art on your atlas :)

    1. Echo Tango says:

      Depending on the game, you might even want “down” instead of “up” or “forward” for your third axis. For example, in an RTS you might want X to be east-west, Y to be north-south, and Z would be altitude[1], because you’ve got submarines, tanks on hills, and airplanes. If it was a sci-fi game about exploring the depths of Ganymede’s ocean, you might want Z to be depth instead of altitude, because you’re always going downwards from the initial bore-hole in the ice. Similarly for Z being “forward” or “backward” – a game with the camera facing stage-actors from the audience seats might have +Z be towards the audience. :)

      [1] Positive Z is higher altitude.

      1. Decius says:

        What I’ve seen more often is X be East-west, Y be up/down, and Z be north/south.

        Because there were conventions about X and Y that developed during the side-scrolling era and Z was superimposed on them later on.

    2. tmtvl says:

      I’m Commander Shepherd and this is my favourite texture atlas on Twenty Sided.

  2. eldomtom2 says:

    Yep, that’s Painterly Pack, to my mind the best Minecraft texture pack ever made. Sadly it fell victim to the traditional curse of Minecraft modmakers deleting the files made for old versions of the game.

  3. Liam says:

    Have you tried using shader graph? I’m finding it absurdly easy to create shaders that do my bidding.

    I use a texture atlas with a card game I’m working on. I used a flipbook node to consume my texture2d and it gives me back the texture at a given index of my atlas (granted, my textures are all the same size though)

  4. Steve C says:

    You missed an opportunity to use this texture as an example. Then a footnote about buying it and a link to the dev blog.

    1. Shamus says:

      I honestly had to look at that for several seconds before I recognized it. :)

    2. Naota says:

      Bonus trivia: that texture got SO MUCH BIGGER by the end of development, but I think we managed to keep it all in one giant atlas. Which is fine, unless you’re running an Intel laptop from 2002 that refuses to comprehend textures larger than 512×512 (an actual problem Arvind and I discovered when releasing Unrest on Steam).

  5. Abnaxis says:

    I’ve always been annoyed by the way bottom-up systems break from our normal assumptions about coordinates in 2D space.

    That right there, more than anything else I’ve seen you post to this blog (I mean, other than when you’ve said it explicitly), makes it very clear you didn’t come to software development from an academic background. Not that there’s anything wrong with that, but still…

    I’ve always HATED top-down because it’s the opposite of how you draw a graph on graph paper. Until now, I never understood why anyone ever thought it was a good idea to do it that way, and assumed it had something to do with some weird esoteric hardware thing like CRT circuit boards filling in pixels from the top, and that engine developers were too lazy to flip to a proper right-handed coordinate system.

    Top-down is for writing, not for drawing.

    1. Shamus says:

      “Top-down is for writing, not for drawing.”

      I’d state it as:

      Bottom up is for plotting on paper, and almost everything else is top-down.

      In Windows, a rectangular window has 0,0 in the top-left corner, and the entire screen space is mapped the same way.

      Webpage elements are positioned from the top-left by default, with 0,0 being in the top-left corner.

      On the text-based versions of BASIC I learned in the 80s, all screens were top-to-bottom. On the TI 99/4a, both the screen and the individual character bitmaps were mapped top-to-bottom.

      I believe all written languages work top-to-bottom? Some go left-to-right, some go right-to-left, some go in columns rather than rows. But as far as I know top-to-bottom is universal. (Actually, I have to imagine there must be SOME exception to this, but dangit they must be rare.)

      For products with titles (CDs, games, Movies, books) library and store shelves are sorted left-to-right, top-to-bottom.

      It’s weird. And yeah, if I’d learned this stuff in a classroom I’d surely have used bottom-up so that it didn’t feel so unnatural when I ran into it in rendering.

      1. Abnaxis says:

        I feel like we need a Hamilton-esque rap to go along with this debate…

        Presenting sorted lists–words on a page, store shelves, etc., are left-right/top-bottom. That’s not even rendering or drawing though, that’s putting objects into sorted bins.

        As soon as you stop sorting things and start doing math things (which is what you’re doing when you’re asking the computer to make polygons), going left-right/top-bottom puts you into left-handed-coordinate-land unless you do REALLY weird stuff like pointing the x-axis down the vertical. Not that that has stopped some engines from doing exactly that (I think I vaguely remember Unreal having a weird coordinate system orientation like that…?)

        This runs counter to every single book on vector-math and physics you’d care to crack open, which love to inundate you with right-hand rules that are all built on right-handed coordinate systems. Rules which, incidentally, have been around for I think like 3 orders of magnitude longer than Microsoft has been kicking, so I don’t think they’re going to change any time soon. If I had a nickel for every time I’ve introduced a bug because of left-handed wonkyness I could tell all these people asking me to do image processing to go away while I retire.

        The thing is, I understand the forces behind my vexation now. HTML was made for displaying documents, so it makes sense to go reading-order. Hell, 30-year-ago me that ran Windows 3.x from DOS probably thought reading-order coordinates were perfectly intuitive. Image rendering is a relatively new thing built on old code that only cared about rendering text, where it made perfect sense to order things the way they have for however many decades they’ve been at it. But dammit, I am SO TIRED of not knowing which way my result is going to be pointing when I take a cross-product!

      2. Erik says:

        At one level, it’s a language difference, between English and Math. :) Cartesian coordinates always place 0,0 in the center with positive X going right and positive Y going up.

        But the origin of the upper-left coordinate origin for graphics doesn’t come from language, it comes from CRT display technology. In a CRT (which, for our younger readers, was ALL active displays for the first 25 years of display technology), the scanning electron beam starts at the upper left corner of the tube, draws a line in X, flies back quickly, and starts the next line, all the while slowly moving down in Y. Therefore, the natural coordinate system has 0,0 as the first pixel of the first line, which is in the upper left.

        In a past job, we were building engines that rendered the design database of a chip, then compared it to the microscope image. The rendering team naturally built a system with the origin in the lower-left, like the design databases. The microscope team naturally built a system with the origin in the upper-left, like the camera. When we plugged them together for the first time, there was mass confusion until someone looked at an image with enough data to see what was happening, then we had to go back and reprogram the Y counters on the image output buffer to run the other way.

        1. Abnaxis says:

          I thought I remembered CRTs working that way, but then I doubted myself and thought I was pulling BS out of my ass, because I have no idea where I would have learned that nugget of information.

        2. Chad Miller says:

          But the origin of the upper-left coordinate origin for graphics doesn’t come from language, it comes from CRT display technology. In a CRT (which, for our younger readers, was ALL active displays for the first 25 years of display technology)

          CRTs themselves may have inherited it from the days from before computers had monitors at all. Top-down is certainly the most natural order for printing on paper.

        3. Decius says:

          CRTs weren’t the only display technology for a long time.

          Lights, Nixie tubes, and other display tech predate computers powerful enough to drive a CRT.

      3. Pylo says:

        Really Shamus? Are you telling us that in order to make your character jump up in your game, you would SUBTRACT a (positive) value from its vertical position?

        1. Shamus says:

          Actually, that might be how I did things in Good Robot, I don’t remember. That’s the only time I’ve really worked exclusively in 2D.

          In Good Robot, we’ve got 2 different systems going:

          1) I need to build the map. Since you’re always moving DOWN, it makes sense to construct levels so that 0 is the top.

          2) The player needs to move around the world. And like you said, if positive is down then you need negative velocity to jump.

          These two things share a coord system, so we have to tolerate some strangeness either way.

          1. Pylo says:

            Well you got me there, GR is specific in that you are progressing by going down, but does it actually have “jump” since it’s flying?
            Anyway I was thinking in terms of a classic side scroller, would you actually put negative velocity on a jump? It boggles my mind how anyone can find that natural.
            Also I don’t understand why you think there is some fundamental difference between 2D and 3D graphics. Modern graphics hardware can only really do 3D anyway. 2D is just “set one coordinate to always be 0” or whatever.

            1. Shamus says:

              You seem to be under the impression that I’m arguing that positive-up is fundamentally wrong somehow. I’m not saying that. I’m just saying it FEELS wrong because of a lifetime of previous experience with other 2D mediums. When I’m doing 2D, I get stuck on the idea that 0 is the top of the screen and positive is down, because all the OTHER 2D stuff I’ve ever done (Windows, webpages, text, etc) worked that way.

              I don’t have this issue in 3D. In 3D, zero is “ground level” and positive is up. That feels natural to me.

              Fun fact: Good Robot did sort of have a jump feature for a time. We toyed with the idea of having different robots with different capabilities / stats, and one was a robot that used jetpack-style movement rather than free-flight. Obviously this was all cut somewhere along the line.

              1. Pylo says:

                Fair enough.
                Love the programming posts, btw!

    2. Echo Tango says:

      Alternately, you could say that early mathematicians made a mistake, by not having graph-paper’s positive values point the same direction as written languages. (French, English, and Russian all being top-down, left-to-right.)

      1. Abnaxis says:

        Things I learned today: Arabic script is RTL.

        Since we use Arabic numerals, we should REALLY set our coordinate systems so (0,0) is on the bottom right, and increases as you go up and left.

        (Gods, it makes my brain hurt just thinking about it)

        1. Geebs says:

          Arabic script is RTL, but numbers in Arabic are actually referred to as “Indian” numerals and are written LTR.

          How’s your brain now?

          1. Abnaxis says:

            Wait, but is it still bottom to top?

            AHA!

          2. Thomas says:

            In my head English numerals are sort of backwards. You don’t know what the “1” means in “100000” until you’ve counted how many numbers are to the right of it. If we did our numbers the other way round (000001), you’d have enough information to understand what each number means as you came to it.

            (But if we did this in spoken English it would be a roller coaster ride. So the amount of money left in our budget is one-seventy-eight hundred… where will it stop?)

            1. Decius says:

              We could also go middle-endian:
              “Eight hundred, ten thousand, forty” for 10,840

              1. Thomas says:

                We used to do that verbally. “I have one hundred, one-and-twenty years”

    3. Gndwyn says:

      Amazing. When you pointed that out I literally went from baffled why anyone would start numbering at the bottom to disappointed that everyone doesn’t.

      1. Pylo says:

        But if you live in a building with multiple floors, you probably won’t be disappointed :D

  6. Geebs says:

    Texture atlases are all well and good until somebody tries something really incredibly unbelievably exotic, like bilinear filtering or mipmapping.

    1. Naota says:

      Or both, which Unity does… and then requires you to waste pixels padding each element of your texture atlas.

    2. Pylo says:

      I wanted to comment on that, but the way you said it made my day :D
      Also if you are actually doing texture repeat (wrapping) you are basically FUBAR (and no amount of padding is going to save you).

  7. Paul Spooner says:

    I love everything about this. The minimalist texture transforms! The peek into the GPU operation! The atlas textures! It’s really fun to look at an atlas texture and imagine all the things you could make from those parts. Like a well organized set of Legos.

  8. Gndwyn says:

    OMG, have you ever believed something ridiculous for decades because of an assumption you never noticed you’d made?

    I’ve been messing around with basic game modding for 20+ years, often just kitbashing textures for stuff in Unreal Tournament, Freedom Force, Minecraft, Oblivion, etc. I always assumed the UV in UV texture stood for “ultra-violet” and it was some arcane thing having to do with lighting along with other terms I didn’t really understand like “normal map” and “specular map.” But obviously you’re not going to have anything in graphics for humans dealing with ultra-violet light!

    It’s just the u and v co-ordinates! AAAAAAHH! I’m so dumb!

    Thanks, Shamus for giving me a great laugh at my own expense this morning.

  9. Gndwyn says:

    Also, sadly for anyone like me with nostalgia for playing with the old format but happily I’m sure for pretty much anyone who has to actually use the darn things, Minecraft switched to individual texture files for each block (unless they’re animated) about…[peeks through his fingers not really wanting to see the answer]…seven years ago.

    I miss having my own bespoke Minecraft Atlas Texture (which I didn’t know to call it that until now) that I’d cut and pasted individual squares of hundreds of times. (Digs up old terrain.png file and stares at it nostalgically for a while…)

  10. Joshua says:

    Slightly off topic for this post (but not the series in general), but I wondered if you had come across Houdini at all. Some intriguing techniques with the Unity integration shown here, for example https://youtu.be/KDtZVf5KDUE

  11. RFS-81 says:

    Interesting! Do Real Games (TM) use this technique or are their textures too large? And couldn’t graphic cards have some feature where you say: “These are the textures I’m going to use a lot, please keep them buffered”?

    1. Decius says:

      “Keep them buffered” is complicated, because of the various types of buffers and the latency of various types of memory and the hard limits on how much fast cache you have.

      Suffice it to say that keeping a texture in the fastest cache while not using it would slow down everything else way too much to be worth it.

    2. Addie says:

      Adrian Courreges has a very nice look at how the DOOM (2016) renderer works, including a picture of the ‘texture’ atlas (it’s the bump map, shadow map, mipmap etc atlas too – they call it a ‘megatexture’). So yes, Real Games™ use this technique.

      http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/

      Graphics cards allow you to specify whether a memory buffer you’ve allocated is intended to be used-once, written-once-but-read-many, or read-and-written-repeatedly, to allow it to best store your textures, vertexes and shaders. Anything more sophisticated is up to you as a programmer – the iD engines since Rage have updated tiles in their texture atlas depending on what is needed, which has the bonus of stopping memory from getting fragmented, but any memory management technique you care to use is at your disposal. Anything you tell the card to do has to be transferred across the bus, so it’s important to be efficient and to reuse as much as possible – shader programs can do a lot of work right on the card nowadays, and a little ingenuity can go a long way.

  12. tmtvl says:

    Yeah coordinates in OpenGL are right-handed: x goes right, y goes up, and z goes towards the camera. It’s funny, I was never particularly interested in it but when you started this series I finally decided to pick it up.

  13. Decius says:

    With that texture atlas index method, your options for initial location of a texture are limited, in addition to them being forced to be squares.

    You’d get even more performance improvement with one number representing the top-left pixel and another representing the bottom-right pixel; you don’t need to divide, and you can have any rectangle, but your texture atlas is now limited to maxint pixels, which isn’t a problem because you have only gigabytes of memory for it anyway.

  14. I think you’re totally overthinking the post topics!

    This was a perfectly fine entry and tangential explanations of things like this are just as interesting as the “main topic” of generating the levels. The only thing you really need to worry about is making sure the order in which things are explained makes sense, but otherwise we’re gonna love whatever you want to talk about!

    Heck even the order probably ain’t that important! Assuming the people reading this are into programming, then we’re all used to having disparate pieces of information getting connected together over a long period of time. It’s just what the ravages of dealing with code for a living does to a person…

    This IS a programming blog series for God’s sake! If I didn’t enjoy tangents I wouldn’t be ON this site!
    Isn’t twenty-sided supposed to be a podcast about D&D? Have you even mentioned dungeons and/or dragons in the last 5 years? The whole thing is probably a tangent inside of another tangent! That’s what we’re here for, dammit!

    Sorry, I get a little too excited when talking about tangents. But I’m sure everybody here can relate.
    Totally normal thing to get excited about.

    1. Philadelphus says:

      Totally normal thing to get excited about.

      But tangents are orthogonal to normal. ;)

  15. Nick says:

    I’m genuinely curious what kind of entry would not hold up. I love everything you post.

  16. John says:

    When I first started dabbling with graphics in Java, I was deeply annoyed by the way that it put the origin in the upper left corner rather than the lower left. I eventually got used to it though. Numbers go up, image goes down. So far, so good. The one thing that still bothers me is rotation. If I tell Java to rotate an image, say, 30 degrees, it’ll rotate the image 30 degrees clockwise. For some reason–possibly all those years of math classes–I still expect it to go counter-clockwise. I suppose I haven’t done enough with rotation to get used to it the way I have with verticality. It’s made it very difficult for me to trust code that I didn’t write myself that purports to find the angle from one point to another. I can never be sure that it’s calculating the angle I think it should be calculating.

  17. D-z says:

    Note: it is pronounced “Blaze Pascal” indeed ;)

    1. evileeyore says:

      But if we pronounce it “Blasé Pascal” it gains a certain hipster appeal…

      1. Rohan says:

        Also a good way of describing his accursed wager.
