On the first day(ish) of the project I made a working proof-of-concept demo. Today I’m going to pull a Nightdive by throwing everything away and restarting the project in Unity.
This isn’t as stupid as it sounds. I’m only a day or so into the project, so I’m not going to be throwing away a lot of code. Also, I think writing something in C++ and then re-writing it in C# is a good learning exercise. A year ago I took a swing at learning Unity. The problem is that once you’re done with the tutorials, you need to start making something real. But this leaves you with a three-pronged problem:
- Learning a new programming language.
- Working in a new programming paradigm, with strictly enforced object-oriented design structure.
- Trying to solve this new problem. (Whatever it is that I’m currently working on.)
That’s a lot of unknowns to juggle. Things go wrong all the time when you’re programming. In a situation like this if I do something and I don’t get the result I expect, I won’t even know where to look. Yes, maybe there’s a flaw in my design. But maybe the design is sound, but I’ve somehow expressed it incorrectly in the C# language. Or maybe that stuff is fine but I’m misunderstanding Unity. Even trivial problems can take ages to sort out if you don’t know how to find them.
But re-writing something I just wrote is a pretty good exercise. If nothing else, I’ll know the logic is sound.
If you just stick to doing the Unity tutorial programs you’ll end up focused on a very narrow workflow. Unity is built with the idea that you’ll import pre-made art assets and use simple, short scripts to move them around. And it’s pretty good at that. If you take some random models from the asset store and dump them into Unity, you can make something “playable” (in the sense that the player can push buttons and make things happen) in just a few minutes. This makes it feel like you’re making big progress towards “learning” Unity, but you’re not gaining a lot in terms of understanding how to make an actual game.
Creating a procgen city is a pretty complex task that pushes me into doing a lot of things not covered in the tutorials. At the same time, this work is familiar enough that I’m not getting lost in the program logic.
This re-write takes longer than the initial job. I wrote the original demo in one(ish) day, but the re-write / translation takes almost two. That sounds bad, but this is actually really good by the standards of what I’m trying to do. I’m throwing myself into a new language, a new coding style, and a new set of tools. That’s a huge learning curve to deal with. While I’d like to claim I was able to accomplish this because I’ve got a great big programmer brain, the truth is that a lot of the credit for this easy transition should go to Unity. While relentlessly strange, these tools are very easy to use.
What Makes Unity So Strange?
In the very old days – back in the 1970s and 1980s – coding was really inconvenient. You opened up your code in a text editor. You typed in code. Then when you were done editing, you exited the text editor and typed some cryptic nonsense into the terminal window (this was before the days of mouse-driven environments) to have it compile all of that code into a program you could run. Assuming it worked, you could then type the name of your program to run it, test it out, and close it again. Then you’d run your text editor to go back to editing code.
All of this was before my time. When I arrived on the scene in 1990, we already had better tools for this, in the form of the Integrated Development Environment (IDE). To me, a “normal” programming environment looks like this:
You’ve got your IDE where you type your code. The IDE lets you browse through your source files, edit your code, and compile your program. It helps you look for errors when things go wrong. When you hit “run”, your program will start up as its own standalone program with its own window. If you’re making a game and you need (say) a level editor, then that would be another program you’d need to write yourself. Then you’d give that program to your artists and let them do their thing.
I spent my entire professional life using Microsoft’s programming tools. I started out using Borland’s Turbo C tools in 1990, but in 1994 I bought a copy of Microsoft’s Visual Studio for myself and never looked back. I dabbled with other languages and other tools over the years, but the bulk of my programming time was spent in VS.
In Unity, everything is a bit different. This is Unity:
Unity lets you browse the files in your project, it lets you test your program, and it acts as your level editor. When you run your game, it runs in a window inside the Unity environment. So I guess it’s an Integrated Integrated Development Environment? Everything is integrated now, right?
Well, no. The one thing Unity doesn’t have is a text editor, so you can’t use it to edit your code. When you click on a source file to edit the logic of your space marines, it opens in a separate program called MonoDevelop. I already wrote a bunch of complaints on the shortcomings of MonoDevelop a year ago, so I don’t need to repeat them here.
Having said that: Remember that annoying, glaringly obvious, widespread, easy-to-reproduce bug where you lose the ability to paste text? That is still present. That bug turns five years old pretty soon.
So Unity integrates everything except source editing, and for that you have to use this fiddly external editor that is apparently abandonware? Or if not abandonware, then “apathyware”. Either way, it’s not a comfortable way to work. Even ignoring the bugs, there are many problems with MonoDevelop that make it painful to use. I’ll probably gripe about them in a later entry when I need to do some debugging, but for now let’s just get back to work…
One of the first problems I have to deal with in these types of projects is texture mapping. Without a texture map, everything in the world would be a smooth polygon. It would look a bit like this:
Think of texture maps like wallpaper. Imagine you’ve got wallpaper with a strong pattern on it, and you’re trying to cover all the streets with that pattern. Except you need the textures to meet at intersections without forming obvious seams. You need roads to be seamless at two lanes and seamless at eight lanes. That would be a maddening job. Aside from being annoying and fiddly, it would make the code really complex.
Valve discussed a similar problem in the commentary for Episode 2. When designing the caves, the level designers had a hard time getting those square bits of wallpaper to flow naturally on those organically round cave surfaces. Sure, if you’ve got the time and patience you can make that kind of situation work. You make things match up as well as you can (thankfully, texture maps can be stretched or squashed, which you can’t do with real wallpaper) and shove all the nasty seams into a corner. Then you can stick a boulder in front of the seams to hide the mess.
But then two days later, gameplay testing reveals you really need a side-tunnel in this one spot. That throws off all that tedious texture-matching, and you have to start over.
The solution that Valve came up with is to use a shader program to make a “3D” texture that can wrap around any surface. The artist doesn’t need to line anything up. It “just works”. The trade-off here is that the artist can’t control where specific details go. But who cares? When it comes to caves, you generally don’t want to worry about where all those little surface details go. All you care about is that you don’t see any seams.
I don’t know exactly how Valve did it, but I had to come up with a way to accomplish the same thing during project Octant. You can read that entry to see how I did it, but the short version is that I projected the texture onto the surface along all three axes, and then used the surface normal to fade between these three projections. So a west-facing wall would have the texture mapped so that the polygon’s position on the north-south axis controls the horizontal mapping of the texture. A south-facing polygon will use the position of the polygon on the east-west axis. If the wall is a diagonal that faces southwest, then it would use both of these projections, blended together 50-50. This doesn’t work if the texture is (for example) a picture of words or something else that needs a particular orientation, but since we’re dealing with things like pavement and asphalt it’s no problem.
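The blending step above can be sketched in code. This is a minimal C++ sketch under my own naming (the original demo's actual function names aren't shown in this post): given a surface normal, compute how strongly each of the three axis-aligned projections contributes to the final texture lookup.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Returns blend weights for the X-, Y-, and Z-axis texture projections.
// A wall facing due west has normal (-1, 0, 0) and gets weight (1, 0, 0),
// so it uses only the X-axis projection. A diagonal southwest-facing wall
// gets roughly a 50/50 blend between the X and Z projections, exactly as
// described in the text.
Vec3 TriplanarWeights(Vec3 normal) {
    Vec3 w { std::fabs(normal.x), std::fabs(normal.y), std::fabs(normal.z) };
    float sum = w.x + w.y + w.z;
    return { w.x / sum, w.y / sum, w.z / sum };
}
```

In the shader, each weight would scale a texture sample whose UV coordinates come from the surface's world position on the *other* two axes (the X projection samples using the Z and Y coordinates, and so on), which is why nothing ever needs to be lined up by hand.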
Back in project Octant, the result looked like this:
A brick texture has really obvious lines in it. The spacing of those lines varies slightly across the surface and drastically between the two axes (bricks are wider than they are tall), which makes it a nightmare to get everything to line up. But above you can see I was able to wrap it fully around an irregular surface. So this works, basically. At the bottom of that… pillar thing(?) in the archway you can see the crossfade where it transitions between different mapping systems. It’s a little weird when you do this with a brick texture, but I think this is good enough for a nighttime cityscape. The player would need to be very picky and looking very closely to be bothered by it. Like all my projects, I’m looking for the ten-minute solution that solves 90% of the problem rather than the ten-hour solution that solves it 100%.
This sort of texture mapping requires making a shader. This turns out to be amazingly hard because the Unity documentation is a disaster. For the sake of getting on with things, let’s save that rant for later and just pretend that this was a straightforward task.
Once I get the shader working, I wind up more or less where I was at the end of the last entry. I’ve got a grid of streets and a “city” of cuboids:
While a layperson might mistake this graphical feast for a Grand Theft Auto V screenshot, this is actually just my city generator. Who knows where the project could go next? Someday I may even have lighting!
Anyway, this means I can just lazily make polygons and not have to calculate texture coordinates as I go. I don’t have to worry about seams or solving complex mapping problems. Now, if my only goal is to wrap the entire world in concrete, bricks, and pavement, then this would be the end of it. But based on the research I’ve conducted by looking out my window, I’ve learned that cities have more detail than that. Buildings have windows, sidewalks have patterns, and streets have lines.
So what I’m thinking is that I’ll combine two different texture samples. One will put down the base surface, and the other will add the detail.
Shamus! What are you doing, man? You just said you didn’t want to worry about texture mapping and now you’re mapping two different textures onto an object. How is this supposed to be “easier”?
The problem I was trying to avoid was making disparate surfaces line up. So I can have two roads arrive at an intersection (or two walls meet on the side of a building) and not worry that we’ll end up with a seam. A seam would look like this:
Gross, right? And painstakingly planning out all the texture positions so that I never end up with seams would be a pain in the ass. This base texture system I’ve come up with solves the problem for me. Now I’m going to stick (say) a window on top of that. But when I’m making the window I won’t have problems with seams. Windows won’t form a continuous surface. I can put one window on one section of wall and that bit of wall doesn’t need to worry about what any of its neighbors are doing.
So let’s have our shader combine these two textures and see what we get:
Yeah. That’s basically what we’re going for. As a reminder, these buildings are just simple cubes that fill the footprint of the building site. A PROPER building would have surface detail and wouldn’t always fill the entire volume of space. Basically, I need to write the next-gen version of the procedural building generator I created for the original Pixel City. But there was no point in writing that until I’d decided how texture mapping was going to work.
The other advantage of this system is that it lets me mix & match base textures and windows. So one building can have brick with window style #1, the next one can be brick with window style #2, then the next one can be concrete with window #1, and so on. I don’t have to make a unique texture entry for every possible combination of surface + window.
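The compositing itself is dead simple. Here's a hedged C++ sketch (my own naming, not code from the actual project) of the two-layer idea: the seamless base texture fills everything, and the window/detail texture is laid over it using its alpha channel, so any base can pair with any window.

```cpp
struct Color { float r, g, b, a; };

// Standard "alpha over" compositing: where the detail texture is opaque
// (the glass and frame of a window), it replaces the base texture; where
// it's transparent, the seamless base (brick, concrete) shows through.
Color CombineLayers(Color base, Color detail) {
    float a = detail.a;
    return {
        detail.r * a + base.r * (1.0f - a),
        detail.g * a + base.g * (1.0f - a),
        detail.b * a + base.b * (1.0f - a),
        1.0f
    };
}
```

Because the two layers are sampled independently, N base textures and M window styles give you N×M possible building facades while only storing N+M textures, which is exactly the mix-and-match advantage described above.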
Like I said last time: This project is a little more ambitious than the last one, so we’re going to be stuck in these early experimental stages for a little longer. I know Pixel City showed us a cityscape almost right away, but it’s going to take us some time to get there with this project.