Project Bug Hunt #5: More About the Atlas

By Shamus Posted Tuesday Oct 6, 2020

Filed under: Programming 56 comments

Last time I talked about what an atlas texture is, why we need them, and how I use them. The important thing is that I’m not hand-crafting the atlas every time I add a new texture. I have some code that does all of this for me. It reads all the images in a particular folder, sorts them large-to-small for efficient packing, and then generates my atlas texture. It also stores the location of each texture within the atlas. So later when I’m busy generating walls and level geometry, I can say:

cell = LibraryAtlas.Lookup("textureName");

That gives me the location of the sub-texture within the atlas. Now I can plot uv coords as if I was using a standalone texture and not worry about the atlas at all. That’s immensely important when you’re building polygons with code and you want to keep things as simple as possible.
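The remap itself is just a scale and an offset. Here’s a rough sketch of the idea — the `AtlasCell` struct and names are illustrative, not the project’s actual types:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical record for where a sub-texture lives inside the atlas,
// expressed in normalized [0,1] atlas coordinates.
struct AtlasCell {
  float u, v;   // top-left corner of the sub-texture in the atlas
  float w, h;   // size of the sub-texture as a fraction of the atlas
};

// Remap a "standalone" uv coordinate (0..1 across the sub-texture)
// into the atlas-wide uv space the shader actually samples.
inline void AtlasUV(const AtlasCell& c, float localU, float localV,
                    float* outU, float* outV) {
  *outU = c.u + localU * c.w;
  *outV = c.v + localV * c.h;
}
```

The rest of the geometry code can then pretend it’s working with ordinary 0-to-1 texture coordinates and let this one function worry about the atlas.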

Last time I shared my atlas texture. Actually, that was just one fourth of the atlas. Sort of? The point is, I don’t just have one atlas texture. I have four.

The Atlas Collection

So how it works is I hand the shader four different textures. These are all atlas textures with the same exact layout of sub-images, but each atlas contains different data. The shader looks at all the different atlases and does some calculations to figure out what the final color will be for this particular pixel.

It’s important to note that a texture is a collection of four channels of data: Red, green, blue, and alpha[1]. Sure, RGB values are normally used to store color, but since I’m writing this shader myself I can interpret the data however I want. I can use a blue channel in one of the textures to determine how much to tint the base texture pink. To the shader, it’s all just numbers.

So let’s talk about these four different textures.

The Diffuse Map

This is the simplest and most obvious of our four textures. It contains the traditional texture data that most people are familiar with. If this was 1997, then this would be the only texture[2]. But even though I’m planning on using retro-ish assets, I’m not sticking to pure retro technologies.

So we need a few more textures.

The Normal Map

In the really old days, objects looked very primitive. A supposedly round support pillar would have just six sides. Character models would have primitive mitten hands with the fingers painted on the surface. Everything was blocky and chunky in appearance.

But then as technology progressed, we gained the freedom to use more polygons. That flagon of ale didn’t need to be based on a five-sided cylinder. Characters didn’t need to have flat faces with a pyramid for a nose. Instead you could model the eyes, lips, cheekbones, and other features.

But eventually you get diminishing returns and expanding costs if you continue to add detail with polygons. It would be a nightmare to model a wall where every brick, every crack, and every surface imperfection was fully realized in 3D. All of those millions of polygons would multiply the work needed to be done by the rendering system. Drawing the wall would be slower. Lighting the wall would be slower. Casting shadows would be slower. Heck, just loading the level would take forever.

You end up with this horrible system where making a room look 10% more detailed would make it 100x slower to draw. Moore’s Law has done a lot for us, but even Gordon Moore can’t save us from a growth curve this steep.


We don’t need to model walls down to the individual brick. The problem we’re trying to solve is that walls look a little too flat when you shine a light on them. Shine a light on the right side of a brick wall, and you expect to see the right edges of the bricks catch the light, and the left edges go dark. If it doesn’t, then the wall doesn’t feel like a brick wall, it feels like a wall covered in brick-patterned wallpaper.

The solution here is a normal map. Texture maps are intended to hold visual data – pictures of bricks and such. But there’s nothing saying you can’t store positional / directional data in them. So a normal map uses the RGB color channels to store X, Y, Z vectors. Essentially, the normal map says, “When you render this bit of the wall, pretend the wall faces this direction before you do the lighting calculations.” The bricks will be lit as if they’re really protruding from the wall, without us needing to spend hours of modeling work and millions of polygons to make it happen.
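The encoding trick is that each X, Y, Z component of the direction vector runs from -1 to 1, while a color byte runs from 0 to 255, so the vector gets squeezed into that range. (This is the common convention; I’m sketching it here with made-up names, and specific tools can differ on details like which way Y points.)

```cpp
#include <cassert>
#include <cmath>

// A normal map stores a unit direction vector in the color channels:
// each component is squeezed from [-1, 1] into the [0, 255] byte range.
// This is why "flat" normal maps look uniformly light blue:
// the straight-out direction (0, 0, 1) encodes to about (128, 128, 255).
struct Byte3 { unsigned char r, g, b; };

inline Byte3 EncodeNormal(float x, float y, float z) {
  auto pack = [](float n) {
    return (unsigned char)std::lround((n * 0.5f + 0.5f) * 255.0f);
  };
  return { pack(x), pack(y), pack(z) };
}

inline void DecodeNormal(Byte3 c, float* x, float* y, float* z) {
  auto unpack = [](unsigned char b) { return (b / 255.0f) * 2.0f - 1.0f; };
  *x = unpack(c.r);
  *y = unpack(c.g);
  *z = unpack(c.b);
}
```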

When I make a texture map, I also make a 3D relief map to go with it. The relief map tells the atlas generator to pretend that bright pixels protrude from the surface and dark pixels are depressions in it. From there it just takes a little math to create a normal map.
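That “little math” is typically a finite-difference slope calculation: compare each pixel’s height to its neighbors, treat the differences as how much the surface tilts, and normalize. A minimal sketch, assuming a greyscale height buffer and a made-up `strength` knob — not the project’s actual generator:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Derive a surface normal for pixel (px, py) from a greyscale height map.
// 'strength' is an assumed tuning knob that exaggerates or flattens bumps.
Vec3 NormalFromHeight(const std::vector<float>& h, int w, int ht,
                      int px, int py, float strength) {
  auto at = [&](int x, int y) {
    // Clamp to edges so the border pixels still work.
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= ht) y = ht - 1;
    return h[y * w + x];
  };
  // Height differences across neighbors give the slope in x and y.
  float dx = (at(px + 1, py) - at(px - 1, py)) * strength;
  float dy = (at(px, py + 1) - at(px, py - 1)) * strength;
  Vec3 n{ -dx, -dy, 1.0f };
  float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
  n.x /= len; n.y /= len; n.z /= len;
  return n;
}
```

A perfectly flat region comes out as the straight-out vector (0, 0, 1), and a slope facing away from +x comes out tilted in -x, which is exactly what the lighting math wants.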

The Specular Map

(In the image above, you see some images are greyscale and some are red. The grey ones are leftover from the early stages of the project before I decided how all of this was going to work. Just ignore them. We’re interested in the pink / red sections for now.)

Specular lighting is when you get metallic or glossy highlights from a surface. Let’s consider a few different kinds of surface:

  • Chalk is a fully matte surface. It’s not glossy at all. You can clean that chalk (or even the board) all day, but you’re never going to see your reflection in it.
  • Brass is incredibly reflective. If you’ve got a strong light shining on polished brass or silver, then you’ll get all of these bright glints on the surface. In real life, if you get in close you’ll see those bright spots are actually mirror-like reflections of the light sources around you.
  • Stainless steel is fairly shiny, but unlike brass it doesn’t really create a mirror-like effect. The surface is a little rough, which scatters the light in a disorganized way. You still get those shiny highlights, but they’re no longer precise reflections of the local light sources and are instead smeared out.

So when we’re dealing with specular lighting, we’re essentially dealing with two variables. One is the strength of the specular light, which dictates how bright those shiny spots are. The other is the specular roughness, which controls how much the shiny spots are scattered around.
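Here’s a minimal sketch of how those two variables might plug into a classic Blinn-Phong-style highlight. The exponent mapping is an assumption for illustration, not the project’s actual shader:

```cpp
#include <cassert>
#include <cmath>

// nDotH is the usual dot product between the surface normal and the
// half-vector (how closely this pixel faces the "mirror" direction).
// 'strength' scales how bright the highlight is; 'roughness' widens it.
float SpecularTerm(float nDotH, float strength, float roughness) {
  // roughness 0 => tight, mirror-like highlight (large exponent);
  // roughness 1 => broad, smeared highlight (small exponent).
  float exponent = 1.0f + (1.0f - roughness) * 127.0f;
  return strength * std::pow(std::fmax(nDotH, 0.0f), exponent);
}
```

With a high exponent, the term falls off to nearly nothing as soon as you move off the exact reflection angle — that’s the polished-brass glint. With a low exponent it stays bright over a wide angle — that’s the smeared stainless-steel look.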

Essentially, specular lighting is just a cheap way to make fake reflections. We’re basically reflecting just the light sources in the room. At a distance, that looks passably like a reflection. You need to get very close (or have a very large reflective surface) to see that the reflection is just spots of light and doesn’t include any of the details of the space you’re in.

Specular lighting is one of those technologies where the first 10% of your effort gets you 90% of the way to realism, but then you need a huge additional investment of programming and computing power to move the needle further. You can do a bunch of extra calculations to make sure the (say) gold surface reflects the appropriate amount of light, and correctly tints the reflection. You can make even better fake reflections with a static cube map. You can get a little closer still if you’re willing to burn a bunch of processing power maintaining a dynamic cube map. Or you can go one step further and implement screen space reflections. If that’s still not real enough for you, then you can go all the way and shoot for a ray tracing solution. Basic specular lighting is cheap to set up, but there’s no upper limit on how much additional power you can expend if you’re trying to simulate reality.

You can see how the specular map changes the reflective qualities of the floor. Most tiles have rough reflections like stainless steel. A few have sharp, polished reflections. And the grooves between tiles are completely dull with no reflection at all.

So great. Basic specular lighting is a cheap way to make fake reflections. The problem is that you don’t want everything in a scene to exhibit the same specular properties. Consider Quake 4, where all of the level geometry had the same exact specular properties. That was understandable in 2005 when this stuff was fairly new, but you can’t get away with that now.

As an aside: This creates a bit of a trap for artists. Specular surfaces are inherently more interesting than perfectly diffuse surfaces. Which encourages the artist (or more likely: art director) to really lean into them, even when it’s not appropriate. This probably explains how you end up with weird design choices like Dead Island, where specular abuse makes it look like people are made of dull plastic.

Related: The abuse of color filters in Mass Effect 3, or the many games that abuse / overuse bloom, motion blur, and depth of field. When you’re the creative director and you look at game assets for too long, it’s easy to become bored with them. This is often true even if the art is really good. At this point, heavy-handed filters feel really fun and make the scenery seem interesting again. That’s nice for the developer who’s been staring at this space station corridor for the last 6 months and who would rather gouge their eyes out than look at it again, but it tends to ruin things for the players who experience the place for the first time. For them, the filters confuse and clutter the scene by taking away the intended point of focus and drawing the eye to random corners where the effect is the strongest.

We want metal and glass to be very shiny, we want plastic to be slightly shiny, and we want bricks and dirt to have no shine at all. We want this to be true, even if those different surfaces are all part of the same 3D model with the same texture.

So what we need is a way to control our specular lighting on a per-pixel basis. In my project, this means an extra texture. The blue channel controls how strong the specular lighting is, and the green channel controls how rough it is. Also, I used the red channel to store a copy of the relief map I discussed in the previous section. That’s not currently being used, but I could use that to add support for parallax mapping later on.
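Given that channel layout, reading the parameters back out of a specular-map texel is just a divide per channel. A sketch with made-up type names, following the red = relief, green = roughness, blue = strength assignment described above:

```cpp
#include <cassert>
#include <cmath>

// One texel of the specular map, as stored in the atlas image.
struct SpecTexel { unsigned char r, g, b; };

// The same texel interpreted as shading parameters in the 0..1 range.
struct SpecParams { float relief, roughness, strength; };

SpecParams DecodeSpecMap(SpecTexel t) {
  return { t.r / 255.0f,   // red:   copy of the relief map (unused for now)
           t.g / 255.0f,   // green: specular roughness
           t.b / 255.0f }; // blue:  specular strength
}
```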

The Data Texture

One last texture. For now, I’m only using two channels:

Red: Self-lighting. If the red value is high, then the pixel will self-illuminate. This can be used to make things that glow. This might be a display screen, or some generic sci-fi control panel lights, or whatever.

Green: Paint color. This is a bit weird and unconventional. Right now I color the polygons of each room so that no two rooms are the same color. This color is normally covered up by the texture map, which means you never see it. However, I can use the green channel to tell the shader to multiply the texture color and the polygon color together. This is enormously useful for debugging during development. Later on, I might keep this feature so I can color-code different sections the way the System Shock games colored the walls in the different areas. Something like: Medical, Engineering, Research, Crew, etc.

Blue: Not currently used, although I have plans for it. I’ll put this channel to use later in the project.
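Here’s a sketch of how the red and green channels might combine into the final pixel. The blend formulas and names are illustrative assumptions, not the actual shader:

```cpp
#include <cassert>

struct Color { float r, g, b; };

// tex:       the diffuse texture color for this pixel
// paint:     the per-room polygon color
// light:     the scene lighting at this pixel, 0 (dark) to 1 (fully lit)
// selfLight: the data texture's red channel (self-illumination)
// paintMask: the data texture's green channel (apply the paint color?)
Color ShadePixel(Color tex, Color paint, float light,
                 float selfLight, float paintMask) {
  // Where paintMask is high, multiply the texture by the room color.
  Color c{ tex.r * (1.0f + paintMask * (paint.r - 1.0f)),
           tex.g * (1.0f + paintMask * (paint.g - 1.0f)),
           tex.b * (1.0f + paintMask * (paint.b - 1.0f)) };
  // Where selfLight is high, the pixel glows at full brightness
  // regardless of how dark the scene lighting is.
  float l = light + selfLight * (1.0f - light);
  return { c.r * l, c.g * l, c.b * l };
}
```

A screen texel with selfLight at 1.0 stays bright in a pitch-black room, and a wall texel with paintMask at 1.0 picks up whatever color the room’s polygons were assigned.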

Building the Atlas

I have a big pile of loose textures like wall1, computer3, desk5, or whatever. They’re all stored in a specific folder where the game can find them. The atlas code reads in these images and packs them into the diffuse atlas image. Then it calculates the normal map and sticks it into another image. Then it does the same for the specular and data images.
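One simple way to do that kind of packing is “shelf” packing: sort tallest-first, then fill rows left to right, starting a new row when the current one is full. This sketch is a simplification of the idea, not the project’s actual packer (which also records each placement so the later Lookup call can find it):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Image { int w, h; };
struct Placement { int x, y; };

// Place images into an atlas of width atlasW using shelf packing.
// Returns one placement per image, in sorted (tallest-first) order.
std::vector<Placement> PackShelves(std::vector<Image> imgs, int atlasW) {
  // Tallest first, so each shelf's height is set by its first image.
  std::sort(imgs.begin(), imgs.end(),
            [](const Image& a, const Image& b) { return a.h > b.h; });
  std::vector<Placement> out;
  int x = 0, y = 0, shelfH = 0;
  for (const Image& img : imgs) {
    if (x + img.w > atlasW) {   // shelf full: start a new row
      x = 0;
      y += shelfH;
      shelfH = 0;
    }
    out.push_back({ x, y });
    x += img.w;
    if (img.h > shelfH) shelfH = img.h;
  }
  return out;
}
```

Shelf packing wastes some space compared to fancier schemes, but when the inputs are sorted large-to-small and most textures are power-of-two sizes, the rows pack together nicely.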

So that’s how my atlas works. Next I think it’s time to build some furniture.



[1] Alpha just means transparency.

[2] Aside from baked shadow maps, which aren’t worth exploring right now.


56 thoughts on “Project Bug Hunt #5: More About the Atlas”

  1. Philadelphus says:

    I think the most impressive part of this article for me is the code that figures out how to automatically pack textures together. That might be because all of my coding experience is with data and non-graphics, but it sounds really cool.

    Bump mapping is something I’ve played around with a tiny bit in Blender, and I still feel it’s basically magic. Somehow you make the surface be 3D, without actually being 3D? Amazing!

    Specular vs. diffuse is also a thing in acrylic painting, interestingly; on my desk before me, I have two jars of acrylic medium (the base into which you add dyes, basically colorless paint), one “gloss” (specular) and the other “matte” (diffuse). By adding them to colored paint you can influence whether it has a specular or diffuse look when dry, which can really affect how the finished painting looks. Painting something that should be matte glossy can look pretty weird, but judicious use of gloss in the right place can make things look more sparkly and reflective when moving your head around relative to the light source(s). One of my favorite painting tricks I’ve discovered is to add tiny glass spheres to (transparent) paint, which both reflect and refract light in surprising ways and really add a certain je ne sais quoi to the work when seen in person.

    1. Echo Tango says:

      Bump mapping is just doing the calculations for the lighting as if the surface was all bumpy, but it’s actually flat. So, cheating and adding numbers part-way into the normal set of calculations.

      1. Geebs says:

        the normal set of calculations

        Also, the set of Normal calculations.

        1. ContribuTor says:

          Does the set of normal calculations include calculating itself?

          1. Decius says:

            The Russel set of calculations includes those, and only those, sets of calculations that do not calculate themselves.

    2. Will says:

      You can kill the illusion of bump mapping by looking almost parallel to a bump-mapped surface. It will obviously be perfectly flat from that angle; move away and you can see that it’s just painted, trompe l’oeil-like, as though all the surface features had depth and were lit accordingly.

      The major difference between bump mapping and just baking that lighting into the texture (which you can do) is that the former is dynamic—you don’t need a new texture for every slightly different angle of light, and you can even have your lighting change dynamically.

    3. Smosh says:

      > I think the most impressive part of this article for me is the code that figures out how to automatically pack textures together.

      1. Write a routine to subdivide any space in four equal squares.
      2. Sort your textures by size, descending.
      3. Sort the list of spaces that you have by size, ascending.
      4. Put the biggest texture into the smallest space that fits it. If it ends up utilizing less than a quarter of the space, call the subdivide routine for the total space so you don’t waste all your space on nothing. Add the other three generated spaces into your list of spaces for the future. Subdivide more than once if needed.
      5. goto 3.

      Even a completely unoptimized solution that just brute forces everything on arrays by comparing everything with everything won’t even take a second for thousands of textures, and I’m sure we can do better with two heaps instead of re-sorting the same arrays over and over, but that’s just optimization and won’t change the strategy.

      1. Decius says:

        You can do better than sorting your list every time, by adding the new spaces with a merge sort.

        1. Richard says:

          And better still by using a binary tree, e.g. std::map.
          Especially as someone else already wrote everything except the “smaller than” function.

        2. Smosh says:

          I know, and I said as such, but sometimes I find it convenient to write my algorithms “the dumb way”. For example in a recent hobby project I kept all data structures as arrays, and used linear search for everything. Was that efficient? Hell no. Was it quickly done, easy to remember, easy to use, bug-free and without complex side effects? Hell yes.

          It allowed me to continue working on the parts of the project that actually needed my attention, and since the thing barely ever had more than 50 elements on screen and was basically turn-based, performance did not matter. My 4 Ghz machine just solved the issue for me.

          It would also have been easy to improve if that ever became necessary. Writing complicated algorithms early means you burn valuable hours on doing something that won’t matter in the end.

      2. Philadelphus says:

        Thanks, that makes sense!

    4. Paul Spooner says:

      You can get a similar effect (to the glass beads) with a high specularity, high roughness, non self-shadowing material with noise in the normal map. Kinda like glitter?

      1. Philadelphus says:

        Interesting! I suppose I should provide an example of what I was talking about, though unfortunately the true power of the glass bead—their dynamic light-reflecting nature—is really hard to get across in a still photo.

  2. Groboclown says:

    Maybe other band nerds can identify better than me. That shiny brass instrument looks like a 4 rotary valve tuba. It doesn’t have the shape of a french horn, to me.

    1. Abnaxis says:

      So I haven’t played french horn since college, I am by no means an expert.

      However, every “standard” F/B 4-valve french horn I’ve seen has the fourth valve on the thumb, not the pinky. Your left pinky goes into a j-hook next to the valves that would make it very hard to press the instrument against your lips if it wasn’t there.

      Of course, there are other horns that exist that it could maybe be? I’ve never played a tenor horn before, though I remember having to transpose the music for it a lot

      EDIT: Definitely has to be a big instrument like a tuba. The rotors and keys are on the same side, you don’t have room for that on something smaller like a horn

      1. The Puzzler says:

        Looks like you were right: the name of the source image file is rotary-valves-tuba-valves-stimmzug (Google translates stimmzug as tuning slide).

  3. zackoid says:

    How deep are the spaces between the tiles on that floor? Either they are ankle-wrecking traps or they’re filled with grout made from vantablack.

    1. Paul Spooner says:

      Can’t it be both? “Vantablack Ankle Trap” just rolls off the tongue!

    2. Philadelphus says:

      Step on a crack, break your own ba—er, ankle.

  4. Paul Spooner says:

    Good to see you’ve got doors in as well as the door frames! Are they parametrically programmed to open when you walk up to them?

    1. eaglewingz says:

      But you have to dub in the “swoosh” sound yourself.

  5. bobbert says:

    Chalk is a fully matte surface. It’s not glossy at all. You can clean that chalk board all day, but you’re never going to see your reflection in it.

    Chalk is also matte, but the chalkboards you are thinking about are made of slate.

    1. Echo Tango says:

      I think a fully polished chalk-board might be a little specular. Definitely not as matte as the chalk itself. :)

      1. Paul Spooner says:

        And it’s almost the reverse for a whiteboard, where the background is glossy and the marker is somewhat matte.

  6. Piflik says:

    I don’t know how welcome this kind of advice is, especially considering the lead time for these posts, but I personally would put the relief into the alpha channel of the normal map. This way I’d have more space in the specular map, because I’d want to either use RGB for specular color, instead of using only one channel for strength, or use one of the channels for “metallic” (in Unity, if you use a surface shader instead of Vertex/Fragment combo, you’d have a property for that, otherwise you’d have to do it yourself), because for metallic surfaces you’d usually have much, if not all, of the color information in the specular, instead of the diffuse.

    1. Pink says:

      Only if he isn’t using Unity packed normal maps (which repack the channels, discarding the original blue and alpha from the file).

    2. Echo Tango says:

      I feel like this is covered by

      there’s no upper limit on how much additional power you can expend if you’re trying to simulate reality


      the first 10% of your effort gets you 90% of the way to realism

      Having diffuse and specular lighting at all makes games look a lot better than not having them, but trying to get very high res and/or accurate lighting can eat up a lot of budget. :)

  7. Liam says:

    Another option for texture packing in Unity is the Texture2dArray function. I find it useful for higher quality textures without having a very massive texture atlas.

  8. ShivanHunter says:

    Just FYI: The green channel (it’s always the bloody green channel) of your normal map for the floor tiles is reversed. In that last screenshot with the lighting, you can see the side towards the camera – away from the light – is being lit.

    Yet another thing in 3D graphics that no one can agree on ;)

    1. Niriel says:

      Does that relate to Unity being frustratingly left-handed in its coordinate system?

      1. ShivanHunter says:

        Probably, since flipping an axis reverses the handedness. Unreal has the same issue – it uses left-handed coords and I always have to flip normalmaps for UE4.

  9. pun pundit says:

    Bottom-up coordinate systems make sense from a mathematician’s perspective where positive X is to the right and positive Y is up in practically all of classroom and textbook graphs.

    Also, Depth of Field and Motion Blur are extremely annoying to me and I disable them by any means in any game I play. These are artifacts of physical cameras, why do we want to recreate them in our games?

    1. I’m commander shepard and this is my favorite post on the internet.

    2. Echo Tango says:

      Human eyeballs also suffer from depth of field – not everything can be in focus at once.[1] Those can also be used to evoke specific feelings or moods.

      [1] Also motion-blur, but this is usually hidden by our sneaky brains in normal situations.

      1. Yeah, so then you don’t need to simulate it on the screen because our eyes do it anyhoo.

        1. ShivanHunter says:

          I can see the logic for simulating DOF: Our eyes don’t do that on a screen, since the whole screen is roughly the same distance away. And it can occasionally produce not-horrible visuals. The big problem is that, to shift the focus, you have to change the perspective (which can be clunky, and is sometimes impossible) instead of just moving your eyes. So the DOF doesn’t really act like DOF, it just feels like parts of the screen are randomly blurry.

          But I do think it might be theoretically possible to design a good DOF effect in a game, I just haven’t seen it done. Might need eye-tracking hardware to really do it right.

          (I make no such excuses for motion blur. I’ve never seen a motion blur effect that didn’t give me a gorram headache)

    3. PeteTimesSix says:

      Can I add color correction to that list? Not the subtle kind that I’ve reluctantly gotten used to, the “we want this place to look cold, therefore the entire screen is now only allowed to be blue, including the third person protagonist with the everburning hellfire shoulderpads” kind.

      1. ShivanHunter says:

        Color filters were my first introduction to some game artists’ tendency, when they get a shiny new effect in their toolbox, to overuse it to the point of absurdity. Case in point: Unreal Tournament 3, which had the audacity to take Unreal’s vibrant fantasy scenes and turn them pure brown. (Or, the only other art style in UT3, sci-fi dark blue with yellow pinpricks of light everywhere) The other example, from around the same time, was when the FreeSpace 2 modding forums got their hands on normal mapping – I don’t know if any screenshots from the time survive, but it was not pretty.

        1. Rohan says:

          The greying of UT3 caused a problem I’m pretty sure the devs never considered.

          Nvidia’s drivers include an anaglyph 3D mode, which can give you stereoscopic play for pretty-much anything if you don’t mind wearing red/cyan filter glasses. It does mess with the colour of things, but it looks pretty impressive.

          Now, here’s the problem: A key part of UT’s gameplay is based on colour. Distinguishing between red and blue teams while using 3D glasses is actually possible in the original UT, but it’s almost impossible in UT3. Why? Saturation. The saturated colours in UT1 stand out in anaglyph glasses because they dominate one eye over the other. UT3’s desaturated colours have barely any difference between eyes, which makes it really hard to tell red from blue.

      2. Decius says:

        Misused color correction, sure.

        Every tool powerful enough to be useful is powerful enough to be dangerous. The converse is not true.

    4. Decius says:

      Also: lens flare. And graininess.

  10. Jamie Pate says:

    Everyone is (or all the cool kids are) using BSDF now for shading, the specular model doesn’t calculate the differences in light scattering between dielectric and metallic materials.

    In unity it’s called the ‘metallic workflow’ and you specify metallic (0 or 1) and smoothness 0..1 (possibly a whole texture channel for each)

    1. Niriel says:

      Yeah. This does energy conservation properly. And also, it takes into account that every smooth surface becomes 100% reflective when seen from a very shallow angle. The Lambert model used in naive diffuse doesn’t do that at all, and it’s super important.

  11. Gndwyn says:

    So is the specular lighting what gives the original Bioshock that look that it has? Everything looks…not plastic, but unreal in the same way, like it’s all the same stuff.

    And is it just me, or does the metal armor and weapons in Dark Souls look qualitatively better (shiny in the right metallic way) than metal in other games like Skyrim? Any idea what Dark Souls does differently to make the metal look so good?

    1. tmtvl says:

      Which one, the OG or the remaster?

      1. Gndwyn says:

        I meant the original Bioshock (and the first PC version of Dark Souls, if that’s what you were asking about).

    2. Addie says:

      Conjecture, but the Souls games have always had an unreasonable amount of attention put into their character models. You can run Demon’s Souls at 4K using RPCS3 and the models all still look great – there’s detail on them that just is never visible on the PS3’s native 720p. There’s detailing and stitching in the cloth and leather, engravings in the metal work, all sorts. Skyrim hasn’t had the same level of love put into it; it’s all a bit more ‘generic fantasy armour’.

      They’re also a bit more minimalist in the use of reflections; everything is a bit grimy and worn in the Souls games, so the bits that are actually shiny stand out better. Because of the detail, things like the rings in mail armour actually shine out, but without having to do the ‘proper reflections’ you’d expect from a flat sheet of metal.

  12. Joshua says:


    I think you need to pick either creating or crafting.

  13. Duoae says:

    Maybe this is a stupid question: Is it possible to do a sort of negative or inverse specular reflection in order to better approximate diffuse shadows without having to go the normal route applied in current gen ray traced games (i.e. ambient occlusion)?

    For example, in the header image, I can see the “positive” reflection in the dark floor from the lightly coloured column but couldn’t an opposite effect be applied to the column surface that’s facing away from the light source?

    1. Geebs says:

      The header image is either using screen space reflections or straight up raytracing. The image lower down (under the brass instrument) is a naive specular implementation as per an early 2000s lighting model; you can see the reflections are conspicuously absent. That version of specular lighting doesn’t “know” anything about occluders in the scene and only cares about the angle between the particular fragment being drawn and the point light sources near it.

      So: not really.

      1. Shamus says:

        Oops. The header image should NOT have been captured with SSR enabled. I’ve been trying to make sure the images match the stage of the project I’m talking about, and I didn’t start playing around with post-processing effects until much later.

  14. Dennis says:

    Wait, so do you have to create the bump map and specular map by hand? That seems difficult to wrap your head around, and like an astronomical amount of effort.

    1. Alex says:

      You can automatically generate the Normal map from a height map – if you draw a greyscale texture with recessed areas black and raised areas white, an image manipulation program like GIMP can automatically convert that into the coloured texture that stores the direction of the edges.

      The specular map is just three greyscale textures – one for how smooth the surface is, one for how much light the surface reflects and one a copy of the height map – stored as the three colour channels of a single image for convenience.

    2. Liam says:

      I find Materialize handy for creating all the different maps
