Pixel City Redux #3: Shader Rant

By Shamus Posted Tuesday Apr 24, 2018

Filed under: Programming 122 comments

Last time I talked about making a special shader in Unity. It turns out that writing Unity shaders is a mixture of awesome and awful. But before we can get into that, we need to fix these buildings:

Bah. Close enough. Ship it.

See, I’m going to be writing the lighting shader. My hope is that I’ll be able to use Unity to save me from the arduous task of writing my own shadowing system. I want buildings to be able to cast shadows on each other, the ground, and even themselves. But a cube doesn’t have any overhanging bits that might cast shadows on itself. So before I go messing around with shaders, let’s make some more complex buildings.

I don’t need to make the full building generator just yet. All I need is something complex enough to self-shadow.

Kind of amazing what a huge improvement it is to just add ledges.

I’ll probably throw most of this code away later. These buildings are stupidly primitive. There’s a triangle pair for every single window, there aren’t any “gaps” with no windows, the ledges all look the same, there’s no street detail, and there isn’t any clutter on the roof. But these buildings can self-shadow, which is what counts.

So let’s work on that shader…

The Shader

Remember that a shader is a special program that runs on your graphics card. It’s not written in the same language as the rest of the program. If you’re using OpenGL to make your graphics, then you program your shaders in a language called GLSL. If you’re using DirectX, then you’ll write your shaders in a language called HLSL. In Unity, you’re using a modified version of HLSL.

In the OpenGL world, I really dislike how messy it can get when you’re trying to manage your shaders. Rendering is usually a multi-stage process. Somewhere at the core of your program you’ll have a bit of logic to the effect of:

  • Grab all the FooBar objects in the scene and render them using Shader A.
  • Now take all the Widget objects and render them with Shader B.
  • Now take all the Whatsit objects and render them with Shader A, except with a few different parameters than the ones we used for the FooBars.
  • Now take all the transparent FooBars and Widgets and render them with Shader C, in order of their current distance from the camera.
  • Now finish it off by applying Shader D to the entire screen.

I’m leaving out a lot of details, but hopefully you get the idea. The program is sending various scene elements to different shaders and when you’re looking at the C++ code you can’t see what the individual shaders do. To see that, you need to switch over to this other programming language (and maybe even a different text editor) to look at all of those scattered source files. And when you’re looking at the shader code itself you can’t tell when it gets used. You can alleviate this by liberally commenting your code and documenting the structure, but that doesn’t change the fact that some of the most intensely complex logic in your program is straddling two different languages and several different source files.

In the world of Unity, they move a lot of the “what objects get drawn by which shader” logic into the shader itself, and they let you bundle all of your shaders into one file. This weirded me out at first, but once I started building my shader I realized how much clearer and more manageable this system is. I really like it.
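
To make that concrete, here’s the sort of thing I mean. This is a contrived sketch (not my actual shader), but notice that the scheduling information, the stuff an OpenGL program keeps in its C++ render loop, lives right inside the shader file as Tags:

Shader "Example/SchedulingLivesHere" {
    SubShader {
        // The render queue (when objects using this shader get drawn,
        // relative to everything else) is declared here in the shader,
        // not over in the engine code.
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Pass {
            // Even the blend mode travels with the shader.
            Blend SrcAlpha OneMinusSrcAlpha
            // ...vertex and fragment programs would go here...
        }
    }
}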

ON THE OTHER HAND…

The documentation for the Unity shader language barely exists, and the parts that do exist are apparently written for people who already understand it. There’s almost no example code whatsoever. And even when it does give you a bit of code, it does so without giving you the full context of where it’s used.

Car analogies are played out, so let’s try…

A Baking Analogy

Dangit. Now I want cake. I guess I should stick to car analogies.

I search for “How do I make chocolate cake?”. Then I wind up on the Unity documentation for Chocolate Cake, where it tells me:

Chocolate Cake is a kind of food. It can be made in a kitchen using ingredients.

Related topics: Cupcakes, Dessert, Cooking.

The docs are so aggressively useless that sometimes I wonder if it wouldn’t be better if Unity just took them down so they stop poisoning the search results. (Probably not. I’m sure they very occasionally answer people’s questions.) Most of what I learned about Unity came from forum discussions.

So I read a few forums and I see people talking about “eggs”. Apparently eggs are an ingredient in chocolate cake? So I look up eggs in the docs.

Eggs are an ingredient. They are frequently used in making various cakes. They should not be added during the cooking process.

Related topics: Cupcakes, Chickens, Chicken breeding.

Okay. I see that cupcakes keep showing up in related topics. Maybe that’s where the payload of information is.

Cupcakes are a smaller type of cake. Please refer to the documentation on Chocolate Cake for more information.

Related topics: Chocolate Cake, Food, Feudalism in the 12th Century, The Moon Landing.

Or maybe not.

Sometimes it’s even worse than this! Sometimes the only “help” I can find is one of the Unity tutorial videos. Regular video is bad enough, but these are archives of livestream sessions. It’s incredibly demoralizing to reach for something simple like, “Where are the options to disable [some feature]?” and the best “answer” is an hour and a half of unedited footage for a previous version of Unity that you can’t even be sure will answer your question. This system is so bad at providing information that it borders on encryption.

But after several frustrating hours of trying to break through the layers of obfuscation in the Unity documentation, I do manage to learn a few things.

The Few Things I Learned

The Unity shader language forms a hierarchy. You write a single shader. That shader might contain many subshaders. Each subshader might contain multiple passes. Each pass contains a vertex shader and a fragment shader. So then you finally come across a few lines of example code in the skeletal Unity documentation, but you have no idea where this code goes within that structure. I searched for ages and I never did find just a simple example shader that the user could start with. The closest thing I could find was this:

 1   Shader "MyShader" {
 2       Properties {
 3           _MyTexture ("My Texture", 2D) = "white" { }
 4           // other properties like colors or vectors go here as well
 5       }
 6       SubShader {
 7           // here goes the 'meat' of your
 8           // - surface shader or
 9           // - vertex and program shader or
10           // - fixed function shader
11       }
12       SubShader {
13           // here goes a simpler version of the SubShader
14           // above than can run on older graphics cards
15       }
16   }

This is supposedly an “Example” program, except it has only a single line of code (Line 3) that actually does something. The rest is just comments and curly braces. Lines 7-10 are a description of what sort of code might go here, without telling me what that code is or what it does. (Or where to look that up.) It’s the equivalent of, “After the ingredients are combined, put them inside of a specific container and place the container in the appropriate appliance for a length of time or until the cake is a particular color.” I swear it’s taunting me by explicitly listing the things I need to know that it refuses to tell me.

Elsewhere I’d find something that was supposedly an “example program” to fill in lines 7-10, but when I tried pasting it in it didn’t work, and I had no idea why. There’s no starting point for learning any of it. I couldn’t just find the basic framework and build up from there.
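
For what it’s worth, here’s roughly the bare-bones starting point I was hunting for, reconstructed after the fact once I finally understood the structure. Treat it as my sketch rather than official example code: one subshader, one pass, and vertex / fragment programs that just draw the object in solid white.

Shader "Custom/MinimalExample" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Move each vertex from object space to screen (clip) space.
            float4 vert (float4 position : POSITION) : SV_POSITION {
                return UnityObjectToClipPos(position);
            }

            // Color every pixel of the object solid white.
            fixed4 frag () : SV_Target {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}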

I did eventually find this tutorial on a blog called Catlike Coding, and that basically saved this project. I never would have gotten anywhere without it. I don’t know where author Jasper Flick got his knowledge, but I’m guessing it wasn’t from the relentlessly unhelpful and secretive manual pages.

The Catlike tutorials give me the crucial first step of explaining the overall structure of a shader and how it operates within Unity, and then they give a working example that will actually compile. From there I can begin modifying it to meet my needs. The Catlike project builds towards photorealism effects, and I’m headed in the opposite direction, but at least I have some sort of foothold on what I’m trying to learn.

When I’m done, I have lighting and shadows:

If you look closely you can see a red speck on the left and a blue one on the right. That's the origin of the lights. There's also a green light positioned below the camera.

You’ll notice that the scene is no longer textured. This is the lighting component of the scene, with nothing else.

How the program builds this:

Shader pass #1: Render the entire scene. I could use this pass to project sunlight onto the scene, but I’m not planning on using sunlight in this world, so for now everything is drawn solid black. Later I’ll do something else with this shader pass, but for now it’s just a pitch-black city.

Shader pass #2: Lighting pass. Unity goes through every light in the scene. You’ll notice I’ve got three: red, green, and blue. For each light, it renders all of the objects that fall within that light’s radius and might possibly be illuminated by it. My shader renders these objects with this particular light’s properties. This lighting contribution is added to the existing image. When this pass is done, we have the image above.
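
In shader terms, those two passes look something like this. This is a simplified sketch of the structure rather than my actual shader: the real thing also needs Unity’s AutoLight shadow macros (which are the whole reason I’m here), and Unity’s real light attenuation is fancier than my one-liner. The important bits are the LightMode tags, which tell Unity which pass is the base pass and which one runs once per light, and “Blend One One”, which ADDS each light’s output to the image instead of overwriting it.

Shader "Custom/CityLightingSketch" {
    SubShader {
        // Pass #1: drawn once per object. No sunlight in this world,
        // so everything starts out pitch black.
        Pass {
            Tags { "LightMode" = "ForwardBase" }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 position : POSITION) : SV_POSITION {
                return UnityObjectToClipPos(position);
            }
            fixed4 frag () : SV_Target {
                return fixed4(0, 0, 0, 1); // pitch-black city
            }
            ENDCG
        }
        // Pass #2: run again for every light that touches the object.
        Pass {
            Tags { "LightMode" = "ForwardAdd" }
            Blend One One // ADD this light's contribution to the image
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fwdadd
            #include "UnityCG.cginc"
            #include "UnityLightingCommon.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float3 normal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.normal = UnityObjectToWorldNormal(v.normal);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // Point-light direction and a crude distance falloff.
                float3 toLight = _WorldSpaceLightPos0.xyz - i.worldPos;
                float atten = 1.0 / (1.0 + dot(toLight, toLight));
                float diff = max(0, dot(normalize(i.normal), normalize(toLight)));
                return fixed4(_LightColor0.rgb * diff * atten, 1);
            }
            ENDCG
        }
    }
}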

I should add that most of step #2 is automated. Unity sorts through the objects in the scene and figures out which objects need to be illuminated by which lights. So that’s a bunch of work I don’t have to do. Like, Unity just saved me from writing hundreds of lines of code.

In OpenGL, we would do things a little differently. We would render the entire scene at full brightness, producing something like this:

I know the sidewalks look goofy. We'll take care of that mess in a later entry.

Then we’d do the lighting pass. Except, instead of drawing directly to the screen, we’d draw into a framebuffer. A framebuffer is like another screen you can draw to. It’s the same size as the viewport. (So if the user is running the game at 1920×1080, your hidden framebuffer would be the same dimensions.)

So then we have two different images: The scene at full brightness is in the main viewport, and we’ve got the lighting stored in a spare framebuffer. We could then draw the framebuffer into the main viewport, having it multiply the two images together. Here is what that looks like if I take the two previous screenshots and combine them in an image editor using a multiplication filter:

This is actually an image edit and not a render.

Yeah. That’s exactly what we’re going for. Except Unity doesn’t give me the ability to create a framebuffer. When I search for how to do this, I get endless forum posts where people answer with “Use a render texture”. A render texture is a texture you draw into and then slap on an object in the scene. It’s used for things like mirrors or in-game security screens that show you what’s going on elsewhere in the level. In the world of OpenGL, this would be the wrong way to do things.[1]

What I’m trying to do here is called “deferred rendering”. I see that Unity offers a totally different rendering path that supports deferred rendering, but that whole system is built around assumptions that you’ll be using stuff like normal maps and other modern-day FX. The documentation on that stuff is in even worse shape than the rest of the docs. I think to use this I’d have to start over, except I’d be making something even more complicated with less information.

Shamus, why don’t you just look at the…

Let me stop you right there, because I’ve seen a LOT of these sorts of answers over the last couple of days as I’ve waded through forums looking for answers. I get the sense that the Unity documentation was in better shape in the past than it is now. This is bad because the various forum questions are now poisoned with flippant RTFM answers that have become useless. This page in the Unity docs tells me to go and look at the built-in shaders so I can see a working example of the thing I’m interested in. But when I follow the link it redirects me to a generic download page where I can download Unity itself.

The page is confusingly designed and it looks like you’ve been redirected somewhere useless, but if you poke around you’ll find a download for the promised shaders. But then I download the file and it’s just hundreds of shader source files. Okay, thanks… but which one of these – or which several of them – answers my question of “How does rendering system X work?” I wanted to know which shaders were used in which order, and now I’m looking at a huge pile of random shaders with no idea where to go from here.

I’ve spent half a day chasing links that promise knowledge but only take me to unfinished manual pages, dead links, and unrelated topics. Maybe there’s some magical combination of search terms that will take me to the payload of information, but the longer this goes on the more I suspect it just isn’t there.

I’ll come back to deferred rendering later in the project, but at this point I’d really like to get back to programming.

Let’s Get Back to Programming

So now maybe you’re asking:

Shamus, why can’t you just draw everything into the main viewport? Why not just draw the full-bright city and then layer the lights on top of it?

Remember that each light adds itself to what’s already there, and then the result of those passes is multiplied with the full-bright scene. If I drew the full-bright scene first and then ADDED the lights to it, I’d be trying to brighten an already maximum-bright scene. If I multiplied each pass instead, I still wouldn’t get the right result: during the pass for the red light, everywhere the red light isn’t touching would be turned black. Then when the blue light came along, it wouldn’t be lighting the brick face of the building, it would be lighting a pure black void.

Or to put it another way, what I’m trying to do is this:

Framebuffer = (RedLight + BlueLight + GreenLight)
Viewport = Buildings
FinalImage = Viewport * Framebuffer

I want to add the lights together, and then multiply in the buildings after the adding is done. That’s not the same as:

FinalImage = Buildings * RedLight * BlueLight * GreenLight

And it’s also not the same as:

FinalImage = Buildings + RedLight + BlueLight + GreenLight

Only the first will produce correct results.

Hm. Maybe I can hack my way around this.

Whatever Works, Right?

Rather than build up the lighting contributions in a framebuffer and then multiply them with the viewport, what if I did it the other way around? What if I add all the lights together FIRST, inside the viewport? (Which is what I’m doing now.) Then I’ll draw the textured buildings into the scene using a multiply filter. Will Unity let me do that? If I stick an extra shader pass at the end, will Unity just blindly throw all the objects into it?

Okay. That worked. Huh.

Apparently so!

Yes, this looks almost identical to the previous image. But this one really was built with shaders and the previous one was created with an image editor. So basically I’m doing:

RedLight + BlueLight + GreenLight * Buildings

(Yes, in mathematics you normally do multiplication before addition, but since we’re drawing color values onto a canvas the order of operations is strictly left-to-right.)
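
In shader terms, the hack is just one more pass at the end of the shader with a multiplicative blend mode. Here’s a sketch (the names are mine, and _MainTex stands in for my actual building texture). “Blend DstColor Zero” means “multiply what this pass outputs with whatever is already on the screen”, which at this point is the sum of the lights:

Pass {
    Blend DstColor Zero // multiply my output with the accumulated lights
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    sampler2D _MainTex; // the building texture (stand-in name)

    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
    };

    v2f vert (appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.uv = v.texcoord.xy;
        return o;
    }

    // Output the raw texture color; the blend mode does the multiply.
    fixed4 frag (v2f i) : SV_Target {
        return tex2D(_MainTex, i.uv);
    }
    ENDCG
}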

Now, let’s get back to that first shader pass. (The one that’s all black.) What I’m thinking of using that for is emissive stuff. (Stuff that glows.)

Right now I’ve got my window texture. It’s your standard 32-bit texture, with a byte each for the red, green, blue, and alpha channels. The alpha channel is used for transparency. So a single window looks like this:

(Yeah I didn’t put a lot of time into this texture.)

You’ll notice the windows don’t completely obscure the wall behind them. The window itself is opaque, but then around the window is some empty space where you can see through to the bricks / concrete / whatever behind it.

The thing is, we’re using an entire byte for that alpha value, and we don’t need to. I don’t plan on having windows with soft faded edges that blend in with the wall. In my textures, the window is either fully opaque or fully transparent. We only need one bit (a single zero / one value) for that. That means we’ve got another seven bits we can use for other things.

So I make the highest bit control the alpha, and I use the next bit for emissive glow. If the bit is set, then during the first render pass this pixel will be drawn at full brightness rather than pure black.
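
In the fragment shader, that bit test boils down to a couple of comparisons, because the alpha byte shows up as a 0.0 to 1.0 float. Roughly like this (a sketch, with my own stand-in names):

fixed4 frag (v2f i) : SV_Target {
    fixed4 texel = tex2D(_MainTex, i.uv);
    // Bit 7 (the 128s place) is the real on/off alpha, so opaque
    // pixels are the ones with alpha >= 128/255, i.e. roughly 0.5.
    clip(texel.a - 0.5); // discard fully transparent pixels
    // Bit 6 (the 64s place) flags emissive pixels. Since bit 7 is
    // known to be set by now, bit 6 being set means alpha >= 192/255,
    // so a 0.75 threshold separates glowing pixels from dark ones.
    float glow = step(0.75, texel.a);
    // Emissive pixels draw at full brightness; everything else stays
    // pitch black, just like the rest of the first pass.
    return fixed4(texel.rgb * glow, 1);
}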

Wow. That made a big difference.

Yeah. Now the windows glow. Above I said that every single window was a triangle pair. (A rectangle.) This is stupidly wasteful and I’ll make it more efficient later. But since we’re already making so many polygons, it should be trivial to assign each building its own color, and then have every polygon in the building be a random offset from that:

You know, this flat-poly stuff looks really unexpectedly cool. I might come back to this later.

Now, instead of making the windows pure white, I’ll have them use the polygon color to determine the glow coming from the window:

Apparently there's a city-wide rave going on?

Not bad, considering how much of this I’m half-assing. Okay, the magenta windows look pretty silly, but I think we’re on the right track here.

Eventually I’ll come up with a less ridiculous way of lighting these windows, but for now this works.

 

Footnotes:

[1] Render textures MUST be a power of 2. So to make one large enough for a 1920×1080 display I’d need to make a gargantuan 2048×2048 texture, even though half of that texture memory would go completely to waste.




122 thoughts on “Pixel City Redux #3: Shader Rant”

  1. Decius says:

    Other than the problem exemplified by the magenta building, those windows look good enough to ship.

    If you can find a way to make the windows not just bright, but illuminating, without the problem caused by having that many light sources that aren’t even points, that would be epic. But I’ve never seen any realtime rendering that allowed for light sources that aren’t points.

    1. Cerapa says:

      Adding bloom could be a possibility?

      1. Ilseroth says:

        Very easily actually, there is a standard unity postprocessing package that Unity has for download from their store (free) that was made by the Unity devs. I don’t know why it isn’t just straight bundled with Unity itself, it used to be.

    2. Halceon says:

      Well, if he had kept more bits for the alpha, he could have that empty surrounding space be filled with a glow effect. So the windows would seem to illuminate the wall around them a bit.

      1. Decius says:

        There’s a HUGE difference between bloom/obscuring with brightness, and illuminating the windowsills.

  2. Piflik says:

    Small correction: Unity Shaders are primarily written in ShaderLab, but within the shader itself you can write CG, HLSL or GLSL, you just have to tell ShaderLab which of these you want.

  3. eunoia says:

    This is pretty wonderful – i’m glad the unity docs are impenetrable to professionals and not just hobbyists like me! Keep on keeping on – I can’t wait to read the rest!

    PS Catlike coding saved me a lot of time in the past. Great resource!

    1. Olivier FAURE says:

      I wouldn’t call Shamus a professional; more like a really experienced amateur.

      … although he did release a few games on Steam fairly recently, so I guess he still counts as a professional.

      1. DanMan says:

        Shamus spent several years building graphics on a Second Life style 3D online World. I would call him a professional at graphics rendering if not necessarily a professional game maker.

        1. Olivier FAURE says:

          I know, that’s what the words “still” and “experienced” implied. (I guess I could have been more explicit)

          Also, I don’t want to delve too deep into this because it feels needlessly aggressive, but I think “professional” implies a certain approach that Shamus doesn’t really have?

          Like, when I think “professional”, I visualize more “the man with the suit and tie, who knows his weapon perfectly, prepares for every situation, and can snipe a man in a moving crowd from kilometers away with wind” than “the guy who has two or three guns that he roughly knows how to clean and disassemble, and can shoot most targets with them from a block or two away, which is good enough for most jobs”. Shamus is definitely more of the second type :P

          1. Shamus says:

            Speaking from personal experience:

            I think the difference between pro and amateur is more about getting paid than how much you know.

            I’m TOTALLY fine with you calling me an amateur. I’m just warning you that a lot of those “professionals” fall pretty far short of “know their weapon perfectly”. There are some brilliant hobbyists (I hope to be one someday) and there are an alarming number of people getting paid who don’t know what they’re doing. :)

            1. Droid says:

              This is true for all fields, though, isn’t it?

              Except for mathematics, of course, where everyone knows everything about everything… right?
              *starry-eyed look*

              1. Olivier FAURE says:

                Eh, computer science is still a young field compared to, say, engineering or cinematography.

                There are guidelines and best practices, but they’re not all widely respected yet. For context, distributed source control and unit testing are (arguably) less than 20 years old.

                (although I’m no cinematographer, so I don’t know how good *their* guidelines are)

                1. Decius says:

                  Interesting that you would compare the age of computer programming to that of cinematography, when the Babbage Engine and early motion picture cameras were contemporary developments.

                  1. Echo Tango says:

                    The Babbage engine is as old as the earliest cameras, but nobody made use of it, and it was largely forgotten until after we already had some computer industries built. So I think the comparison of computers (young) to film (much older) is appropriate.

                    1. Decius says:

                      I’d say that until talkies the skills developed around motion cameras compare to the ‘programming’ skills needed to operate the Jacquard loom.

              2. Mephane says:

                This is true for all fields, though, isn’t it?

                Of course it is. In any trade and profession you will find people getting by very well despite being absolutely incompetent.

              3. Daemian Lucifer says:

                Of course.In maths there are only geniuses.Its a proven fact.

            2. Olivier FAURE says:

              I agree, and I’m glad you’re not offended. :)

              I guess the word I’m looking for is closer to “expert”, which is independent of being employed, and I think is also what eunoia meant. So in the metaphor the sniper is the expert, and you’re more the jack-of-all-trades or something.

            3. Kylroy says:

              I spent almost two years in college delivering pizza. I will insist that this does make me a “former professional driver”, in that I was being paid nontrivial money to drive a car. And constantly driving around city streets *did* make me better at piloting a car, though it in no way prepared me for a possible NASCAR career. I figure the same logic applies here, with the same limitations.

            4. MadTinkerer says:

              Amateurs are people who prioritize their spare time to learn new things. Professionals are people who are usually too busy turning what they already know into money to take the time to learn at the same pace as amateurs.

              As such, any particular amateur can know more than an average professional about a given subject, but usually lack the training and practice that the best professionals make sure they get from the start. Plus, there’s the whole on-paper qualification thing that people still seem to think is important even though anyone can go and get an actual MIT education (minus the piece of paper that confirms you did it) for the cost of time it takes to sit and listen.

              So, in a nutshell: amateurs are those who may actually be the most brilliant minds in their fields but have little to no recognition regardless of ability, and professionals are those lucky enough to afford training and evaluation of their abilities (according to specific predetermined syllabi) such as to be declared “certainly competent”.

              Now that I think of it, I wonder if this specifically is one of the major reasons why Indie keeps outperforming AAA in terms of innovation. I usually blame the ones at the top for their lack of vision, but maybe some of the blame can be pointed at those doing the implementing? Maybe the simple fact that AAA publishers are constantly deliberately building large teams of competent workers instead of small teams of brilliant workers affects the quality of the product even more than the direct contribution of those at the top?

            5. Erik says:

              This is absolutely true. Professionals are those who get paid to do a thing; amateurs are those that do it for free. Actual skill at the task is not directly specified.

              Skill is not irrelevant, however. Professional status means someone paid you for a task; this strongly implies a basic level of competence. If you weren’t able to do the task, you shouldn’t have gotten paid to do it. At that level, a pro is defined as having a minimum level of competence that is undefined for an amateur – the amateur may be better or worse, and the fact the pro got paid doesn’t add any information to that.

              A pro will also learn some related skills that an amateur will usually not learn. These are mostly things related to process, e.g. for programming: how to write proper requirements, how to use source code control software, how to manage releases, how to work on new features on one code branch while supporting bug fixes to an older version on another branch, etc. These are skills that aren’t directly related to your programming skill, but are critical skills for functioning in a professional environment. Other skills will have similar related skills separating a professional from an amateur, but since I’m a programmer those are the set of related skills I know best.

              As far as expertise goes, this is again related to but not determined by professional status. The professional usually has more time and motivation to learn more about the skill they’re using professionally, but there are many motivated amateurs who can and will develop extremely high levels of expertise, and there are many unmotivated professionals who will learn only the bare minimum they need to learn to get paid. This is just human nature. A pro is *usually* more of an expert than an amateur, but by no means always so. Pros are minimally skilled often enough that no one should assume expertise based on professional status, though it’s safe to assume minimal competence until proven wrong.

            6. Daemian Lucifer says:

              Relevant:professional writers for the new colossus.

              1. Olivier FAURE says:

                Whoa there, that’s uncalled for.

                You don’t know who wrote that game, what constraints they had, and under which conditions they worked.

                I think since the Mass Effect 3 retrospective, we’re supposed to pretend the writer is like this nebulous single entity behind each game (e.g. “Shoot Guy II’s writer”), and even though we don’t necessarily like this theoretical person’s work, that doesn’t extend to actually saying (or suggesting) that the actual writers who wrote the game were incompetent.

            7. Cybron says:

              Strongly agree with this. This is also compounded by the fact that a lot of technology-specific knowledge quickly becomes semi-obsolete in a way you don’t see in other fields. Some stuff carries over but some stuff doesn’t. This is why there were 40 year old men with years of industry experience in some of my classes when I was getting my degree.

  4. ElementalAlchemist says:

    I know you used it as a punchline in the previous article, but I gather now you might have a bit of sympathy for Nightdive. Even if it wasn’t a primary factor in their initial switch, I think it’s pretty telling that they decided not to return to Unity but stick with Unreal after their recent backtrack.

    1. KarmaTheAlligator says:

      But didn’t they already have a working demo in Unity? Something that was clearly showing lighting, shadows and most of what the game would use/do? It’s not like they had to restart from scratch with it.

      1. ElementalAlchemist says:

        Yes, they could have picked up right where they left off originally. But that would mean having to go back to using Unity. Which was my point.

  5. Olivier FAURE says:

    You’re making your workflow way too complicated by using Unity. Obviously what you need to do is start over using nothing but Vulkan. There’s an initial learning curve, but it’s not that hard to learn and you’ll be saving time in the long run!

    Just kidding :) But seriously, Vulkan’s API is pretty sweet. Intricate, but logical and as well-documented as they come.

    1. Richard says:

      The learning curve is an overhanging cliff half a mile high, but it’s great when you finally haul yourself over the ledge…

      Vulkan is cool, but right now the tools are too young for most people.

  6. Droid says:

    Great article! Really enjoying the project so far. I also immediately thought of doing it “the other way around” when you had to add and multiply the shaders in that specific order, and also assumed that it somehow wouldn’t work. I’m actually kind of impressed that it did.

    And if I may add a small idea:
    To improve the look of those windows, you could just look at what color range apartment lights usually fall into (based on how strange the magenta, and also the green and green-yellow ones look, I’m guessing either between blue and white, or between white, red and yellow) and then give every building one of those colours, selected “randomly”. “Randomly” in quotation marks because you’d have to somehow link the chosen color to the building so it doesn’t change every frame, e.g. by making it a function of some unlikely-to-change parameters of the building in question.

    1. Echo Tango says:

      Shamus could also have several ranges of acceptable window colors, for different types of buildings. Just like he’s got “brick” and “concrete” for the walls, he could have “boring office” and “night club” for the windows. Boring Office would be normal light-bulb colors, like white, yellow-ish white, blue-ish white, etc, like you can purchase in normal stores. Night Club would be crazier colors like bright green, magenta, or maybe even random colors for every window. :)

      1. BenD says:

        Nightclubs with windows???

        1. Echo Tango says:

          One-way mirrors/windows? I’ll admit, I hadn’t thought this through fully when I posted. Let’s pretend those buildings are hosting LAN parties in offices after hours instead, and they’ve got colored lighting… Or something. :P

    2. Richard says:

      That’s why it’s important to know which mathematical operations are commutative, and which aren’t.

      d*(a+b+c) == (a+b+c)*d

      The only advantage of the version on the left is if d = 0, because that would mean you don’t need to bother doing the addition.

      1. Droid says:

        You misunderstand. It’s not the fact that you can rearrange the formula that impressed me (that’s in the “I immediately thought of that, too” part), but that Unity, which by Paul’s and Shamus’ descriptions sounds like a poorly documented mess of specialized tools able to solve every problem that you don’t have, just accepted this formula and went ahead and rendered it correctly.

        There is no doubt that it SHOULD be able to do that. But rendering stuff in Unity sounds just like the kind of thing that doesn’t care one bit about what it should be able to do and just breaks for no good reason whenever you try to do something unorthodox.

  7. blue_painted says:

    Video is about the worst way of documenting code, apart from unedited video where what little payload there is is hidden amongst “Umm, ahh, no, yes but no, right … oh no, not that bit …”

    1. Mephane says:

      Not just for code. Pure text or text with images is almost always the better solution. Even for things where it is genuinely useful to demonstrate something in motion as opposed to a series of intermediate steps, just throw in a short gif animation for that particular bit.

    2. Olivier FAURE says:

      Yeah, video tutorials feel like they’re a product of design by committee. Like, a team member said at a meeting “You know, we’re starting to need tutorials”, someone else suggested “Well, we could probably make a bunch of videos showing the software”, and they never really questioned it from there.

      Although to be fair, the Unity tutorials aren’t so much “bad” as “disorganized”. They’re all over the place, different tutorials were made by different people, and you don’t really feel like the Unity team has a lesson plan for you when you read their tutorial list.

      1. Modran says:

        You mean they lack… Unity ?

        *Mikedrop*

        1. Droid says:

          *slow clap*

        2. Cubic says:

          That … was Unreal, man.

        3. Olivier FAURE says:

          I feel like that Unity pun will be the Source of many bad jokes.

          1. PPX14 says:

            Puns so bad I might Cry

            1. Daemian Lucifer says:

              Tek a deep breath,relax.

    3. Sabrdance (MatthewH) says:

      Not for writing code, but for many computer applications I do find a video useful – “where is that file?” and “which of the seven dials do you alter?” are easier to show in video than in stills.

      But if the video is longer than 10 minutes, it isn’t worth it.

      I develop internet university courses, and have an unofficial rule that all videos should be under 20 minutes, and only a single topic for pretty much this reason.

      1. Olivier FAURE says:

        10 minutes is already too long for me, unless the subject is really simple.

        The school I’m at does 95% of its teaching through videos, and most of the time I would just skim them, list all the relevant concepts, and look them up on the internet.

        Video just isn’t a convenient format. You can’t take it at your own speed unless you keep pausing it and restarting it. It’s harder to skim through without missing things. In a text, if you don’t understand a paragraph, you can just re-read it ten times until you get it. Doing the same on a video feels like a timing exercise, and it’s super annoying.

        If you want to show what the UI looks like, use screenshots and gifs. Videos should be a last resort.

    4. Daemian Lucifer says:

      On the other hand,video is the easiest way to make a tutorial.Which is why its so common these days.

    5. default_ex says:

      Video tends to be a horrible way to document just about everything it’s used to document. These past couple of years have been my least productive in terms of creative projects. Not due to scope or any of the usual culprits, but because there seems to be this bizarre craze to use video to document everything under the sun. So much time wasted scrubbing through video to find things that needed only one sentence or a screenshot to describe, but instead got hour-long videos with lots of meaningless banter to really waste your time.

  8. KarmaTheAlligator says:

    That mirrors my experience learning Unity exactly: a mess of video-only tutorials, bad documentation, and asking on forums (or asking my colleagues, when they have time). Loving the series so far!

  9. John says:

    I am firmly committed to 2D graphics in my own projects mostly because I never ever want to have to deal with stuff like this. The next time I get frustrated with the occasionally sparse documentation for libGDX I will think of this and count my blessings.

  10. GargamelLeNoir says:

    Against all odds, I have something constructive to comment in this post : There is a typo in the alt text of the image of the texture (for the record).
    I think I more than earned my place in the credits of Pixel City : GOTY edition.

    1. DGM says:

      And while we’re demanding a share of Shamus’s credit just for pointing out typos:

      >> “Now, instead of making the windows pure while,”

      Should be “white,” not “while.” Remember me when you’re famous, Shamus. :P

      1. Lars says:

        Frambuffer = (Redlight + BlueLight + Greenlight)
        Viewport = Buildings
        FinalImage = Viewport * Framebuffer

        I’m smelling a minimum of one Undefined Identifier Error ahead. More if you ever mix that capital L in BlueLight into the other colors.

      2. Reed says:

        Famous Shamus? Sounds like he should be selling cookies… :)

  11. Zak McKracken says:

    I kind of wonder what the quality of the corresponding Stack Exchange site is. And apparently there’s even an answers.unity.com in the same style. I’d somehow expect the lack of documentation to lead to loads of popular questions about those badly documented things — but then I’m certain Shamus would have found those already … or maybe Unity devs are just so hardcore that they learn like people used to back in the days before the internet, at an age when books were too expensive, by osmosis or something (or at least that’s how I learned C64 Basic, and TurboPascal: one tiny reference book, some programs by other people around me, and a lot of free time)

    1. Unity Answers is terrible. Like, really bad. It’s mostly people with little expertise or rationale, and because folks don’t or can’t search, the “answers” are scattered among a bunch of similar questions.

      The Game Development Stack Exchange is where I go when I need Unity answers, and it does pretty darn well. You’ve still got the problem that Unity has changed a lot over the years, but because the answers are based in an actual understanding of how stuff works, even an obsolete answer can be pretty useful.

  12. Zak McKracken says:

    I know you’ve probably solved this already, but wouldn’t the most straightforward way to include glowing windows be to use
    (light1 + light2 …) * textures + glow?
    In other words: Use the current state, and then add the glowing windows after the fact? That means you could even have textures on the windows (showing the interior of the building, or curtains, or just some random patterns), then add the glow from the interior light. And if you have a texture channel for it, that glow could even be non-uniform, both from one window to the next and within a single window.

    That would then even make for a nice ramp-up of the glow effect while the sun sets. Rather than jumping from un-illuminated to plain white, windows could slowly transition from (fading) passive illumination to becoming a bit brighter than their surroundings to glowing.

    Oh great, now I’m armchair-directing again. Sorry, can’t help it… that may be because I wish I had time to do something like this.

    1. Droid says:

      Do a lot of people actually have lights that slowly turn brighter and brighter to keep room illumination steady? That’s what I think the idea in your comment will end up like. Or did I read it completely wrong?

      1. If they have smart lights, maybe? I know I use f.lux to alter my computer’s brightness and blue light output so that it gets redder as the sun sets, so I’d guess you could use those fancy new smart lights to do something similar?

        Ooh, if you really wanted to get fancy, you could have some windows darken and lighten and change color like someone inside’s watching TV/a movie/gaming in a dark room!

        note: I have no actual experience with a smart house anything, my house was built in ’69 and is stupid enough we still have some 4 prong phone jacks.

    2. Decius says:

      Putting a texture of an office in the window is going to be horrible when moving around outside, even if it isn’t glaringly bad in screenshots.

      A texture of a curtain or blinds could work.

    3. Lee says:

      I believe Shamus mentioned that this is a nighttime only city gen, so when the sun sets = when the sun rises = never.

      Other than that, though, this sounds like an interesting idea (I say as a guy who has never used Unity).

  13. Echo Tango says:

    Shader "MyShader" {
    Properties {
    _MyTexture ("My Texture", 2D) = "white" { }
    // other properties like colors or vectors go here as well
    }
    SubShader {
    // here goes the 'meat' of your
    // - surface shader or
    // - vertex and program shader or
    // - fixed function shader
    }
    SubShader {
    // here goes a simpler version of the SubShader
    // above than can run on older graphics cards
    }
    }

    When I first read this, I didn’t think it was a real example. It seemed fake/contrived/made-up, similar to the car analogy. So…yeah, the docs have room for improvement. :)

    1. Joshua says:

      My only real experience with coding/scripting is some basic stuff in Neverwinter Nights, but I must confess I didn’t like the formatting of the curly braces from that “example”. I much prefer something like:

      *Edit* Nevermind, the comments post the code in the same way. Was trying to have the opening and closing brackets actually line up in the same place like this:
      {
      }

      But, posting the comment garbled up the spacing so they looked like the original example anyway. :(

  14. PeteTimesSix says:

    Shamus, why don’t you just look at the- *ducks*

    Yeah, the tutorials for Unity were always kind of absolutely awful.
    I do have to ask though, because I must have missed it: why do you actually need a custom shader? Lighting a (semi) static city should be well within the abilities of the Unity Standard Shader™?

    1. Shamus says:

      I’m doing a bunch of atlas texture lookups, and I need the shader to manage the mapping.

  15. Sabrdance (MatthewH) says:

    Could someone enlighten the non-programmer: is there not a manual for Unity? I know that technical writers are people who exist and write manuals for technical things. Why is there not a book somewhere, say packaged with Unity, that says “Chapter XXVII: Shader Hierarchies” or somesuch, that explains each of the parts of the code?

    1. Paul Spooner says:

      In short, because Unity is free, and you pay for the manual. It’s not that straightforward, but that’s how it basically works out in practice.

    2. Kyrillos says:

      Short answer? Coders are, generally, bad at documenting things. And tearing apart someone else’s code to document it is a terrifying nightmare.

      1. John says:

        I have some limited experience with this. I used to work for litigation consultants and part of my job was to read and understand our opponents’ Fortran code. It was awful. I never saw a single comment because one of the standard practices of the litigation consulting industry is to strip any and all comments from code before providing it to the other side as the law requires. We did the same thing for the code we wrote. But the worst thing, the really appalling thing is that most of the Fortran I saw was spaghetti code. Spaghetti code! In the 21st century! I don’t know what version of Fortran that guy was using but every Fortran compiler that I’ve ever worked with has supported functions and subroutines. There is no excuse for writing “goto 1000” in this day and age. Gah.

        I consider myself fortunate that most Fortran programs are relatively short. Fortran is a language for doing math problems, not for building applications. I don’t think I ever saw a Fortran program more than three pages long when it was printed out.

        1. Echo Tango says:

          Only three pages of spaghetti to untangle? That’s hardly anything! :P

        2. Kyrillos says:

          My sympathies. I didn’t hear about this in Dante’s Inferno, but I’m sure it must have been there.

        3. Decius says:

          Wait… the standard procedure isn’t to subpoena the “source code” file that has all the comments?

          1. Richard says:

            I assume litigators also run it through an obfuscator that carefully renames all variables to “a”, “b”, “c”… “aa” etc

    3. Xeorm says:

      In my online coding experience it’s rare for there to be a ton of free documentation out there. It’s a lot of extra work, and takes some dedication from up top to do well. Usually it’s one of those optional things that the developer will skip, hoping that the community will end up fleshing it out.

      Google (android) and Microsoft are the few I know of with really good documentation. Unity’s documentation isn’t even that bad in comparison to many I’ve seen, which should tell you just how bad it usually is.

      1. John says:

        Oracle is really good about documenting Java. There’s detailed documentation for each class and some fairly extensive online tutorials as well. It’s the reason I use Java rather than, say, C++ with GTK or Qt.

        1. Richard says:

          I’d have to take extreme exception to that statement.

          Qt is by far the best documented library I’m aware of in any language – far, far, far better than any Java library.
          MSDN is also wonderful.

          macOS… not so much. Never tried to use GTK so couldn’t say.

          1. Olivier FAURE says:

            Yeah, from what I remember, Qt is really extensively documented.

            The downside is, it’s so complicated it’s almost a language of its own.

    4. Matt Downie says:

      A physical book? That would be massive and (since Unity keeps changing) it would be out of date by the time it reached you.

      So you put it all on the website. But even if the information is out there, it’s really hard to structure it in the way the users would want.

      What I want to know is, “How do I stop this physics object from rotating when something sends it flying?”

      The documentation will tell me… as long as I ask it what “RigidBody constraints” do. But if I knew to search for that, I’d already know the answer.

      So what I do instead is Google it, and if I phrase my question correctly there’ll probably be something on Stack Overflow about it.

  16. Warclam says:

    I’ve never delved into the messy world of shaders myself, but a former colleague loved doing shaders in Unity. He has a YouTube channel with some excellent short tutorials, called Makin’ Stuff Look Good. There hasn’t been a new video in a while, but what’s there might help you out.

  17. Jennifer Snow says:

    I find the baking analogy interesting, since your “lighting only” image looks a lot like a bunch of dramatically-lit cakes.

  18. bit says:

    The technicolour city lighting gave me HARD flashbacks to the opening cityscape of Perfect Dark.

    https://youtu.be/N6VTzPU-yiE
    (30 seconds or so in)

    Notable in my mind for having super colourful interior decorating and super neon lighting, right before greybrown shooters really became a thing.

    The magenta is a bit much, maybe, but I hope you don’t lose too much colour!

    1. Jennifer Snow says:

      I like bright colors like that, so I don’t see anything bizarre about it.

      Realism is overrated.

  19. Soldierhawk says:

    I know very, very little about coding, and even less about the specific kinds of things you’re trying to accomplish here, but that cake analogy was perfect. Had me rolling on the floor with sympathetic laughter.

    Well done! And congratulations on making ANY progress at all in that quagmire, let alone the decent progress you actually seem to have made.

  20. skeeto says:

    Are those ledge shadows done with shadow mapping, or is it something else? Since Unity did most of that work you didn’t get into the details.

    1. Riley Miller says:

      Unity uses Cascaded Shadow Maps by default. I don’t think Shamus has talked about the technique in his various posts on shadows, but basically Unity keeps around a number of shadow maps at various resolutions. Unity uses the highest resolution shadow maps for up-close objects and lower-res maps for objects that are further away.

  21. kdansky says:

    I am not sure reducing the number of polygons will have any performance impact whatsoever. You’re rendering a tiny number of triangles for current hardware. The building in front is 20 windows, 5 floors, 4 sides, for a grand total of 400 windows, or 800 triangles. If you have a thousand buildings of that size, you’re not even at a single million triangles.

    Modern GPUs easily handle a couple hundred million, two or three orders of magnitude more than what you have right now, and another magnitude more with basic backface culling or similar techniques.

    So before you start optimising: benchmark and profile. Especially with shader code you really can’t trust your intuition about where the bottleneck might be. Something that was obviously slow in the past (re-using the same tiny texture a hundred times instead of using a 10×10 version of it once) doesn’t really matter for shaders: they will happily do the same thing a thousand times over, and do it at ludicrous speeds.

    To reference the “pro vs amateur” discussion above: Early optimisation is a typical amateur mistake. When you have to justify spending a month of work (“We could have shipped an extra billable feature with that person-month!”) on something that can be measured to be pointless, you learn to not blindly optimise.

    1. tmtvl says:

      Most old laptops (which is what Shamus tends to develop for) don’t have modern GPUs, though.

      I mean, I know nothing about graphics programming, but I’m guessing this probably won’t turn into much more than a neat series on doing graphics programming in Unity, and maybe a decent screensaver.

      1. It’s good practice in general to minimize the number of polygons or draw calls.
        These days computers do multiple things, so while a game may not use a GPU fully, some other program may utilize the GPU in the background. So GPU load is just as important as CPU load in a multitasking system.

    2. Olivier FAURE says:

      Early optimisation is a typical amateur mistake

      Amen.

      I’d say it’s actually part of a category of errors, something like “trade-off ignorance”, where you focus too much on entry-level benefits and you end up ignoring subtler drawbacks.

  22. Paul Spooner says:

    So, I’m curious, are you going back and getting screenshots of the process after the feature is completed? Or do you just screenshot everything (from a locked camera no less) in the hopes that it will be useful at some point?

    1. Shamus says:

      It’s a mix.

      I really wish I could get screenshots in Unity easily, but when I whitelist Unity for bandicam it treats all the little interface widgets like “game” windows. (Probably because they’re being drawn with DirectX calls or somesuch.) So if I want a screenshot, I have to take a capture of the entire desktop, switch over to an image editing program, crop it, name it, and save it. Not quite as seamless as just smacking the PrtScrn key while playing a game.

      But at important points in the project I do sometimes save images. The others are re-created by simply disabling the thing for the before / after image.

      If this series continues then all the shots will be new. This odd setup is an artifact of working for several days before I realized I was going to write about it.

      1. Shamus, try using Alt+PrtScrn and it’ll screenshot the current (active) window (not sure how it’ll handle the DirectX rendered stuff though).

      2. Olivier FAURE says:

        This is an area where most Linux environments do way better than Windows.

        My KDE Plasma “PrtScrn” screenshot maker gives me all the options I need like “hide cursor”, “print whole screen / selected window / selected rectangle”, “hide menu bar”, etc.

      3. Dan Efran says:

        “If” the series continues? “If”??? Surely you wouldn’t deny your loyal, very patient procgen fans the full story of this project?

      4. Decius says:

        Part of my current Dropbox application figures out when I hit Pr Sc and puts a .png of the clipboard into a folder that gets shared to the cloud.

        It seems like a trivial thing for a programmer to write something that does only the parts of that that you want to do.

  23. Joe says:

    Oh, that second-last picture. It makes me want liquorice allsorts now.

  24. “Redlight + BlueLight + Greenlight * Buildings”
    Should probably be “(Redlight + BlueLight + Greenlight) * Buildings” to avoid any ambiguity when reading code, depending on the language the operator order priority may actually vary.

    As to improving the dual-triangle windows, I’m assuming you’ll do dual-triangle exterior walls instead? Since an entire room is usually lit at once. Though a corner room would have two exterior walls.

    And in some cases a corridor may run along the exterior wall, this could be two long dual triangles.

    With some buildings the entire bottom floor exterior is always lit up (store fronts are lit even if the building is closed).

    And are you planning street lights? Or at least lit streets in some way?

    Also… now I want chocolate cake, damn you Shamus.

    1. Droid says:

      Shamus isn’t adding and multiplying variables here, that’s the whole reason he couldn’t just do it the other way around. So it’s highly unlikely this is actually a line in his code. It’s just a pseudocode example. More to the point, what he’s actually doing currently is probably more along the lines of:

      Clear(Viewport);
      Viewport = Viewport + Greenlight;
      Viewport = Viewport + Bluelight;
      Viewport = Viewport + Redlight;
      Viewport = Viewport * Buildings;

      If he could just store everything in a nice array of 1920×1080 values, he’d have already done so. That’s essentially a (pretty simple form of a) frame buffer, and Shamus already said he found no way to make that work in Unity. So he has to use the Viewport as the only place to save his image, so all operations have to be strictly in-place.

      At least that’s how I understood it, please correct me if my ignorance of Unity, or in general, made me jump to wrong conclusions.

  25. Zoe M says:

    The source I always used for shader development is the Unity surface shader examples/exhaustive argument listings. This: https://docs.unity3d.com/560/Documentation/Manual/SL-SurfaceShaderExamples.html and its related pages are where I got my teeth into Unity’s particular brand of shader nonsense and they cover a lot of useful bits and pieces – even so far as detailing where (tho not exactly) to insert custom buffer pass fragments.

    You may have already seen this but I thought I’d post it just in case – it sounds like you didn’t yet encounter this particular document so if I can save you a couple headaches it’s worth the risk. :)

  26. Anybody tried the latest Unreal? How’s the documentation for that? (genuinely curious as I’m pondering using it).

    1. Shamus says:

      I haven’t read the documentation, but before I began this project I looked up how to do procedural stuff in Unreal. I found this video:

      https://www.youtube.com/watch?v=mI7eYXMJ5eI

      That sent me running in the direction of Unity in a big hurry. The drag-n-drop “coding” with widgets in a flowchart style looked like the worst possible way of accomplishing this. (And the result wasn’t even particularly sophisticated.)

      This might be unfair to Unreal. Obviously I have very particular (and unusual) needs, and you can’t judge something as big as Unreal on one demo like that. But that’s my take on what I saw.

      1. MaxEd says:

        Unreal does allow writing C++ code, avoiding Blueprints altogether, from what I’ve heard. It also has a better default lighting system. I don’t usually pay much attention to graphics, but when I was participating in a GameJam and had to rate other people’s games, Unreal games really stood out among their Unity (or hand-coded OpenGL) peers just because the default picture Unreal produces, before you write any shaders or anything, is simply better than Unity’s.

        But I’m not advocating for an engine switch (it’s way too late for that, of course). Though it would be interesting to see the project done a third time in a different engine, to compare the ease of development and the end result. Just don’t ever go near CryEngine – everything I’ve ever heard or seen about it (including the documentation) is just so incredibly bad that I know people who plain refuse to work on CryEngine games.

      2. Cerapa says:

        Yeah… you really shouldn’t do stuff like that with Blueprints. It really boggles my mind why they thought that was a good example. Blueprints are really good for simple, straightforward stuff, but they really become a mess once you add loops and conditionals and complex math.

        This is something you should really do in C++ rather than blueprints, or at least write a large portion of it in C++ as a couple of blueprint nodes.

  27. baud says:

    I’ve read the shader explanation and went “Man, I can’t wait to go back to our spaghetti-code business application, it’s way simpler than this stuff.”

  28. HeroOfHyla says:

    Yeah, figuring out how to do stuff in Unity is … not nice.

    We’re just wrapping up a year-long senior project that involves making games for Android and iOS in Unity, integrating some proprietary code provided by the project sponsor. Figuring out how to make the proprietary code work was a nightmare. We searched and searched, trying out maybe half a dozen solutions (okay, that’s hyperbole) before finally finding something that worked fine on iOS, and *used* to work on Android. Unfortunately, the Android support had been removed for performance reasons.

    After another few weeks of trying to find a working Android solution, we discovered that the forum posts mentioning that the feature had been removed were themselves outdated – support had been re-added a few months later!

    1. Olivier FAURE says:

      I’m not sure I understand. Out of curiosity, what was the main difficulty?

  29. RCN says:

    Couldn’t you use the first pass shader to also render a soft moonlight or cloud glow on top of the whole city?

  30. DivFord says:

    The thing where it links you to the main download page is weird, but the stuff you want is there. You click on the ‘Downloads (win)’ menu, then select ‘Builtin Shaders’. That gives you all the source code for the default shaders. I don’t know why it’s so obtuse…

  31. Olivier FAURE says:

    Fun fact: this page shows up in the search results for “Greenlight”, not because it talks about Steam, but because it talks about colored lighting and the word appears in one line of code :P

    1. Droid says:

      Wow, Shamus must be paying insane amounts of money for that high a website profile.

      Is it generally the case or just for your personalised search results while logged in?

      1. Olivier FAURE says:

        I meant the in-site search bar.

  32. Val says:

    Sorry to come in so late, but I just found this series, and it explores a lot of ground I’m interested in.

    About halfway into the article, you tossed out this aside:

    Redlight + BlueLight + Greenlight * Buildings
    (Yes, in mathematics you normally do multiplication before addition, but since we’re drawing color values onto a canvas the order of operations is strictly left-to-right.)

    In reading through a bunch of shader tutorials and code, this has puzzled me greatly: I keep seeing math expressions that seem to assume a precedence order very different from what I’m used to, and no hint as to why they’re expressed that way. It’s already hard for me to visualize color operations in my head, so I’ve been very confused about either what the developer had in mind or how the code implements the developer’s stated intent.

    Can you point me to some info on how this works — in what circumstances I can or must evaluate left-to-right rather than by operator precedence? Can I force the order of evaluation by grouping clauses with parentheses? If so, do I take much of a performance hit for it?

    Thanks!

    1. Droid says:

      I am not a computer scientist, I’m a maths guy, so I might be wrong, but I think CS articles simply assume that you feed whatever you write there directly to the GPU as instructions, so
      Redlight + BlueLight + Greenlight * Buildings
      becomes the three distinct operations (plus a return)
      Add(Redlight, Bluelight)
      Add(IntermediateResult, Greenlight)
      Multiply(IntermediateResult, Buildings)
      Return(IntermediateResult)
      (where the return value of Add and Multiply is always stored as IntermediateResult).

      So the left-to-right rule is an artifact of the way the programmer instructs the machine to do stuff: you can tell a GPU to add numbers, multiply numbers, etc., but it only knows a few basic instructions. In particular, it doesn’t know what to do with “here you have four numbers; add the first three together, then multiply by the fourth”. It simply isn’t versatile enough to do this. So programmers use maths notation because it’s the most common way to talk about adding and multiplying, but they use the left-to-right convention in their notation because anything else would have to be restructured before it could be used by a GPU anyway.

      1. Val says:

        At the lowest level, you’re right — I don’t know GPU architecture, but on the CPU mathematical expressions like this must be broken down to simple operations. We can write them directly in assembly language, which has a 1:1 correspondence with machine instructions (pay no attention to that microcode behind the curtain!)

        But most programmers don’t write assembly. When we write the expression in a higher-level language, the compiler takes care of translating it to the proper sequence of machine instructions. To do this, it follows the operator precedence rules of the language it’s compiling.

        C, C++, Java, C# and JavaScript all follow the precedence rules we learned for arithmetic, including that multiplication and division have higher precedence than addition and subtraction.

        I hadn’t looked at shader language specs in a long time, but this says OpenGL obeys the same rules: http://learnwebgl.brown37.net/12_shader_language/glsl_mathematical_operations.html

        Same for HLSL: https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-operators#operator-precedence

        So it seems like the expression in question, R + G + B * b, should evaluate like (R + G) + (B * b).
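
        For anyone else puzzling over this, here’s a tiny standalone HLSL sketch that makes the grouping concrete (the values are arbitrary, and the file/entry-point names are mine, not from the article):

        // Compile with: fxc /T ps_5_0 /E main precedence.hlsl
        float4 main() : SV_Target
        {
            float R = 0.2, G = 0.3, B = 0.4, b = 0.5;
            float byPrecedence = R + G + B * b;     // (R + G) + (B * b) = 0.70
            float leftToRight  = ((R + G) + B) * b; // strict left-to-right = 0.45
            return float4(byPrecedence, leftToRight, 0, 1);
        }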

        If the shader language specs don’t call for a different precedence, then what does Shamus mean when he says

        (Yes, in mathematics you normally do multiplication before addition, but since we’re drawing color values onto a canvas the order of operations is strictly left-to-right.)

        ?

        1. Shamus says:

          You MUST do operations left-to-right because you’re writing directly into the framebuffer. You’re literally painting on the image that will be shown to the user. If you do:

          (R + G) + (B * b)

          Then somehow (B * b) would need to be stored someplace else temporarily. You could do this using alternate framebuffers, but that’s expensive and eats a ton of video RAM.

          1. Val says:

            Hi Shamus, thanks for chiming in!

            I wasn’t advocating to do the calculation that way; I’m asking why the compiler doesn’t generate code that executes that way. Sorry to be obtuse, but I’m struggling to understand the mechanics of this. And in trying to explain my confusion I see I don’t even know if my problem is low- or high-level.

            Low-level: I guess I don’t understand the output of the shader compiler. In the CPU world, the terms of the expression get shuffled around so that B * b happens before getting added to G. That involves temporary locations — CPU registers?

            What’s happening here? Is it that the pixel in the framebuffer is the GPU concept that corresponds to the CPU’s registers in this case, as it’s where the hardware stores intermediate computation results?

            Can you point me to some reference that specifies or explains this behavior?

            High-level: Or perhaps this is a paradigm shift I’m not getting. I get the feeling this is like the stack-based processing I did in PostScript long ago. It took a while to click… but I’m not sure that’s a good analogy; the differences in semantics were clearly spelled out. In this case I can’t find any explicit docs that describe this behavior.

            Thanks for your patience!

            1. Droid says:

              Think of it this way: Whenever you want to draw a frame, imagine a REALLY big frame. Now, since this frame is so big and therefore unwieldy, is it better to:
              – paint all your green paint onto it, then all your red paint, then all your blue paint, and then start adding layers of other material on top of that for some transparency effect or whatever, or
              – paint all your green paint and your red paint onto this canvas, then get another REALLY big, expensive, unwieldy canvas to store all the blue paint, apply the effects to that, paint the result of blue paint + effects onto the original canvas, and then discard the second big, unwieldy canvas because you do not actually need more than one?

              I previously explained to you how it (probably) happened that the notation is different from standard maths notation, but I think I misunderstood what you wanted to know. You go strictly left-to-right with GPUs because GPUs are REALLY good at doing lots and lots of independent calculations in parallel, things like additions and multiplications per-pixel. What they are decidedly less good at is managing all the memory they need at the lightning-fast speeds they require (after all, the whole frame has to be done within 16.66 ms).

              It’s not that people writing GPU code aren’t aware that a + b + c * d normally is parsed as a + b + (c * d), it’s that for them it is clear that whatever you send your GPU has to be very low-level assembly-like code, and that creating additional framebuffers (which are large matrices/tables of pixel values IIRC, basically already data that a monitor could understand) is something to be done VERY SPARINGLY and only with good reason. There’s no time to waste writing a new framebuffer, since that includes pushing the old framebuffer out of the physical buffer for currently-in-use memory on the card (basically, the only memory accessible by the actual processing units, the “highway connection” if you will), writing the new framebuffer, probably immediately followed by pushing it out of memory, and loading the old one back in!
              That’s the really slow part of GPUs: pushing memory around. You want to avoid that at all costs, or your 1080 Ti will perform like a decade-old model. So, for someone who writes GPU code, it is immediately clear that you would execute your code in the way that uses as few intermediate results as possible at any given time.
              And what is the way to never have more than one intermediate result? You just execute binary operators strictly left-to-right, taking what you already started with as the first input and one table/matrix of new input, and overwriting the first input because that leaves the output easily accessible for the next step!

            2. Shamus says:

              In the shader, you don’t really have code that looks like (A * B) + (C * D).

              What you have is an entire shader program. That shader runs once for every pixel, and it outputs a single color value. That color value is then written to the framebuffer. (It may be added to the existing color already on the canvas, or it may be multiplied, or it may simply replace it. Depends on the blend settings you have set before the shader is called.)

              Each shader is run in turn, and each one contributes to the final image.
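
              If you want to see what that looks like in practice, here’s a minimal two-pass sketch in Unity’s ShaderLab (not my actual city shader; the name, colors, and structure are made up for illustration). The Blend line in each pass is what decides whether that pass’s output gets added to or multiplied into whatever is already in the framebuffer:

              Shader "Hypothetical/AddThenMultiply" {
                  SubShader {
                      // Pass 1: additive blend. After this pass: framebuffer = framebuffer + output.
                      Pass {
                          Blend One One
                          CGPROGRAM
                          #pragma vertex vert
                          #pragma fragment frag
                          #include "UnityCG.cginc"

                          float4 vert (float4 vertex : POSITION) : SV_POSITION {
                              return UnityObjectToClipPos(vertex);
                          }
                          fixed4 frag () : SV_Target {
                              return fixed4(0.0, 0.3, 0.0, 1.0); // contribute some green light
                          }
                          ENDCG
                      }
                      // Pass 2: multiplicative blend. After this pass: framebuffer = framebuffer * output.
                      Pass {
                          Blend DstColor Zero
                          CGPROGRAM
                          #pragma vertex vert
                          #pragma fragment frag
                          #include "UnityCG.cginc"

                          float4 vert (float4 vertex : POSITION) : SV_POSITION {
                              return UnityObjectToClipPos(vertex);
                          }
                          fixed4 frag () : SV_Target {
                              return fixed4(0.5, 0.5, 0.5, 1.0); // darken whatever is underneath
                          }
                          ENDCG
                      }
                  }
              }

              In the real project those passes come from different shaders drawing different geometry, but the arithmetic is the same: a few additive passes for the light layers, then a multiplicative one for the buildings, with the framebuffer as the only storage.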

              1. Val says:

                Ah, got it — each of those components is handled in a separate shader pass. Of course, makes sense; I thought this was all happening in a single color calculation expression in one shader.

  33. Taxi says:

    Whoa man, you’re old. Older than me even!

    I’ve never learned to code, but I used to mess around with level editors and shaders in the Quake 3 era. Later, in the Doom 3 and co. era, I lost interest due to how complicated things got. Like, before you could just make a texture with some shader and it would work fine, but suddenly you needed to have 3 textures, write page-long shader programs, and goof around with light sources to get anywhere. Sure it looked gorgeous, but damn!

    Oh, and then Crysis and such just completely blew everything out of proportion. I can’t even imagine how those engines work. And now we’re like 3 generations past even that.

    My point is that yep, for the last 13 or so years it’s always been assumed that when you want to do ANYTHING, you’ll necessarily want all those crazy textures and shaders and lighting passes and whatnot. “Just a simple texture with framebuffer combinations? Ha, what are you, crazy? Go to your room and don’t come out until you learn to master the full AAA development pipeline!”

    I guess when you make your own 3D stuff, you can stay so far behind the curve that you don’t even notice how the world has changed. But if you suddenly want to use some modern engine, you’ll see…
