## Project Bug Hunt #2: Let’s Build Some Walls

By Shamus Posted Tuesday Sep 15, 2020

To recap: The goal of this project is to generate something like an immersive sim shooter level. This means designing a level, building the rooms, and “furnishing” those rooms. I don’t know what furniture we’re going to use and to a certain extent it doesn’t matter. We just need to prove that we could fill the rooms with whatever the game calls for, and the furniture needs to be placed in a way that makes sense.

The Star Trek Enterprise might have a Captain’s chair, beds, a warp core, and an ice cream machine[1], but we shouldn’t see them all next to each other. A fridge doesn’t make sense in the middle of the room and the captain’s chair doesn’t make sense in the corner. My program needs to be able to make this logic work without resorting to premade room layouts.

In addition, we need lighting that works within some reasonable performance constraints. The world needs coherent collision detection so the player doesn’t fall out of the level or get stuck on invisible walls. It needs to be possible for an AI to navigate the space, even if we don’t add proper shooter combat mechanics or proper enemy AI.

### Marching Squares

So I want to create a procgen shooter level, but I want to keep code complexity to a minimum. This means I have slightly contradictory goals.

The most obvious and direct way to do this would be to make my levels out of giant cubes on a grid, like in Wolfenstein 3D. That would indeed keep the complexity way down, but that would make for some catastrophically boring levels.

My plan to sort this out is to use Marching Squares. According to Wikipedia:

“Marching squares is a computer graphics algorithm that generates contours for a two-dimensional scalar field (rectangular array of individual numerical values). A similar method can be used to contour 2D triangle meshes.”

I actually explained marching squares back in 2012:

I messed up this image when I originally made it. In the final two diagrams, there should be one more little triangle at the very top, just left of center.

Basically, you fill in a grid with on / off values. An on value would be “inside a room” and an off value would be “void space between rooms”. Marching Squares logic then fills in walls between the two, creating an enclosed space. It’s a little more complex than grid-based levels, but it gives you beveled corners rather than sharp angles and is far more interesting to look at. I don’t know if this is the correct way to go, but it seems like a good place to start in terms of code complexity vs. visual interesting-ness.
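To make that concrete, here’s a minimal sketch of the classic algorithm in Python (the project’s real code is C# in Unity; this is just an illustration): each 2×2 block of on/off samples becomes a 4-bit case index that selects which wall segments to emit, with endpoints at the midpoints of cell edges.

```python
# Midpoints of the four cell edges, in cell-local coordinates.
TOP, RIGHT, BOTTOM, LEFT = (0.5, 0.0), (1.0, 0.5), (0.5, 1.0), (0.0, 0.5)

# Segments for each of the 16 cases. Bits: 1 = top-left, 2 = top-right,
# 4 = bottom-right, 8 = bottom-left corner is "on".
CASES = {
    0: [], 15: [],
    1: [(LEFT, TOP)],     2: [(TOP, RIGHT)],     4: [(RIGHT, BOTTOM)],
    8: [(BOTTOM, LEFT)],  3: [(LEFT, RIGHT)],    6: [(TOP, BOTTOM)],
    9: [(TOP, BOTTOM)],  12: [(LEFT, RIGHT)],    7: [(LEFT, BOTTOM)],
   11: [(RIGHT, BOTTOM)], 13: [(TOP, RIGHT)],   14: [(LEFT, TOP)],
    5: [(LEFT, TOP), (RIGHT, BOTTOM)],   # ambiguous "saddle" cases
   10: [(TOP, RIGHT), (BOTTOM, LEFT)],
}

def march(grid):
    """Return wall segments for a 2D list of booleans (True = inside a room)."""
    segments = []
    for y in range(len(grid) - 1):
        for x in range(len(grid[0]) - 1):
            case = (1 * grid[y][x] + 2 * grid[y][x + 1] +
                    4 * grid[y + 1][x + 1] + 8 * grid[y + 1][x])
            for (ax, ay), (bx, by) in CASES[case]:
                segments.append(((x + ax, y + ay), (x + bx, y + by)))
    return segments
```

A single “on” point surrounded by “off” points produces four short diagonal segments, which is exactly the beveled-corner look described above.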

If the 2D layouts are still too boring, there’s always the possibility that we could stack sections on top of each other and connect them with lifts / ramps / staircases. The logic would still be 2D (and thus low in complexity) but we’d have some vertical motion for the player.

And if none of that works out, we could always dump Marching Squares and try (say) Marching Cubes. I’ve done both in the past and I still remember how they work, so implementation should be easy. I can do either, but we might as well start with the easy one first.

### Top-Down, or Bottom-Up?

In the end, we’re (hopefully) going to have code that will cut the level into sections, sections that cut themselves into corridors and rooms, and rooms that fill themselves with furniture.

It seems like you’d want to start at one end or the other. Maybe we start at the bottom and get the furniture code working, then create a room to contain the furniture, then make a section made of multiple rooms, then several such sections to make a level. Or maybe we should start at the other end and work our way down to furniture.

While it seems counter-intuitive, I kinda want to start in the middle. I want to focus on building rooms, since that’s where the Marching Squares stuff is going to happen. I want to get that system working, or discard it early if it doesn’t work out. I might need to iterate on it a bit, and that will be easier if it’s not tied to a bunch of other systems yet.

But if rooms come from levels, then how can I generate rooms without generating the level first?

Well, I’m going to cheat. I’m going to use a hand-made 64×64 png for my map. Something like this:

Okay, that’s annoyingly small and probably impossible to see on a mobile device. Here’s a larger version:

Black is void space, white is a corridor, and any other color is a room. So for now my “level editor” is a simple image editor. Someday layouts will come from code, but for now we have this. This has the added benefit of allowing me to create controlled tests. If I notice that (say) the north-facing walls in rooms are missing, then I’d be stuck wondering if it was a problem with the level generator, the room generator, or the code that turns those shapes into polygons. But here I can test the room code while knowing exactly what the level is “supposed” to look like.

There’s a lot to say about getting this project set up and laying the foundation, but I’m in a hurry to get to the bits where I can start showing you screenshots. Once I have something to show, we can backtrack and talk about the finer points. I don’t want to spend three entries on theory before we draw our first polygon.

So here is the first map, along with the image that was used to generate it.

Left: My hand-made 64×64 pixel data. Right: The resulting level layout.

I’m cheating a bit here in order to show you this map. I didn’t add the feature to draw the layout like this until much later in the project, but I’m using it here because it makes all of this easier to follow. I really regret not adding it sooner. At the time I thought “Bah. I don’t need a map. That’s a fancy luxury feature. I don’t want to get distracted twiddling with novelty features rather than doing Real Work™!” That mindset was a mistake. Getting the basics working was a fiddly job, and I did the whole thing blind. When something went wrong, I had to figure it out by looking at lists of numbers and calculating in my head what things “should” look like. If I’d had the map available, I could have glanced at it to see what the program was TRYING to build, which would give me a lot of clues about where the process was breaking down.

Anyway, let’s switch back to the present tense so we can resume the pretense that I’m telling you about this stuff as I do it.

### But How Does it Work?

For this image, the camera is positioned in the northeast corner of the space. Those soda-can shapes on the left are pillars. Those come from the vertical line of black dots in the source image.

The program passes over this little image and looks at the pixels. “Okay, this blue pixel belongs to room #3, this white belongs to the corridor, this green pixel is room #12, etc.” When I’m done, I have a grid of room assignment numbers. Since 1 pixel = 1 meter, this grid represents 64×64 meters of game space.
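In Python-sketch form (again, the real project is C# in Unity, and the sentinel names here are my own invention), that pixel pass looks something like this:

```python
VOID, CORRIDOR = 0, -1  # invented sentinel IDs: black = void, white = corridor

def assign_rooms(pixels):
    """pixels: 2D list of (r, g, b) tuples. Returns a grid of room IDs,
    handing out a fresh ID the first time each new room color is seen."""
    next_id = 1
    color_to_id = {(0, 0, 0): VOID, (255, 255, 255): CORRIDOR}
    grid = []
    for row in pixels:
        out_row = []
        for color in row:
            if color not in color_to_id:
                color_to_id[color] = next_id
                next_id += 1
            out_row.append(color_to_id[color])
        grid.append(out_row)
    return grid
```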

Earlier I said that Marching Squares uses points that are on/off. I’m doing a bit of a variation on that here. I pass over the grid for Room #1. All points that belong to #1 are “on”, and any points that don’t are “off”. I use these on/off points to generate line segments, and I give those line segments to the room-building code. I pass over the grid once for each room, creating the 1D line segments that will (eventually) become the walls.
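The per-room masking step is trivial once you have the grid of assignment numbers. As a hypothetical one-liner (Python sketch, not the actual code):

```python
def room_mask(grid, room_id):
    """On/off view of the room-assignment grid for a single room.
    This boolean mask is what the segment-generation pass consumes."""
    return [[cell == room_id for cell in row] for row in grid]
```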

The trick here is that I’m passing over the grid left-to-right, top-to-bottom, because that’s the easiest way to traverse a grid. But the RoomBuilder code doesn’t care about the grid. It wants the line segments ordered so that it can trace around the perimeter of the room. All rooms will inevitably form closed loops.

So I have the room builder code sort the line segments to put them in order, so that running through the list will take you around the perimeter of the room clockwise.
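A rough sketch of that sorting step (Python rather than the project’s C#, with an invented helper name, and assuming the segments really do form one closed loop, as room walls should):

```python
def order_loop(segments):
    """Chain unordered wall segments ((a, b) point pairs) into a closed loop
    by repeatedly finding the segment that starts where the last one ended."""
    segs = list(segments)
    ordered = [segs.pop(0)]
    while segs:
        tail = ordered[-1][1]
        for i, (a, b) in enumerate(segs):
            if a == tail:
                ordered.append(segs.pop(i))
                break
            if b == tail:          # segment stored backwards: flip it
                ordered.append((b, a))
                segs.pop(i)
                break
        else:
            raise ValueError("segments do not form a closed loop")
    return ordered
```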

When it’s done, I can look at which way the lines face and use that to find the angle looking into the room, perpendicular to the wall.
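Once the perimeter is ordered, finding that inward direction is a single rotation. A Python sketch (assuming clockwise winding in a y-up coordinate system, which puts the room interior on the traveler’s right, i.e. the direction rotated −90 degrees):

```python
import math

def inward_normal(a, b):
    """Unit normal pointing into the room for the wall segment a -> b,
    assuming the perimeter loop is wound clockwise with y up."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    return (dy / length, -dx / length)
```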

Okay. Now if this was a C++ project I’d need to stop here and spend the next two days hammering together shaders, a lighting system, and shadow casting code. But this is Unity and someone else did all that work for me. So let me slap down a huge plane so we have something to stand on, put a texture on the walls, and then turn on the lighting system and see what we get.

Maybe someone could have used a larger texture here? Don't worry, I've already sent an angry note to the art team about it.

No doubt you’re about to demand to know when you can pre-order this magnificent thing. But before you begin throwing money at your screen, I should warn you that the rooms are fully enclosed so we can’t travel between them. There’s no ceiling, the light comes from nowhere, and it would be physically impossible to make the texture mapping more boring.

### Extruding

Making the level entirely out of extruded 2D shapes would probably get way too boring. I mean, what we’re building right now is basically the original Doom, but without elevation changes. As it is now, the floor will always be perfectly flat and the walls will always be perfectly vertical. We need a little interest somewhere, but I don’t want to make anything too complicated just yet.

My plan here is to add the ability to shape walls. We can give the walls a set of 2D points that would act as a profile, controlling the depth of the walls. Essentially, we’re extruding up out of the floor to make walls, but then extruding outward from the walls to get… slightly more lumpy walls?

As things are now, all walls are infinitely thin. We don’t have doors yet, but if we did and you stood in a doorway, you could position yourself so that you were looking at the wall edge-on and see that it has zero thickness. We can fix this by pushing the polygons away from the wall. If I pull this wall outward by (say) 0.2m, and the adjacent room does the same, then we’ll have created a gap between the rooms that’s 0.4m thick. Now if we change that thickness as we go up the wall, then it will (hopefully) give the wall an interesting shape.

Like I said, we do this using coordinate pairs to define height and thickness. A number pair like 0.2, 0.0 says that “at the bottom of the wall (0.0 height), make the wall 0.2m thick.” The coord 0.5, 2.0 would say “At two meters off the ground, the wall is 0.5m thick”. And so on. So I’m going to make the wall bulge outward at 2m off the ground, and then get thin again.
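A sketch of how those pairs might be evaluated (Python rather than the project’s C#, with an invented helper name, and assuming distinct heights with linear interpolation between control points):

```python
def thickness_at(profile, height):
    """Interpolate wall thickness from (thickness, height) pairs, matching
    the post's format: (0.2, 0.0) means 0.2m thick at floor level."""
    profile = sorted(profile, key=lambda p: p[1])   # sort by height
    t0, h0 = profile[0]
    for t1, h1 in profile[1:]:
        if height <= h1:
            f = (height - h0) / (h1 - h0)
            return t0 + f * (t1 - t0)
        t0, h0 = t1, h1
    return t0   # above the last control point: hold the final thickness
```

With a profile like `[(0.2, 0.0), (0.5, 2.0), (0.2, 3.0)]`, the wall starts 0.2m thick at the floor, bulges to 0.5m at 2m, and tapers back to 0.2m at the top, which is the shape described above.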

Also, the art team has finally gotten around to adding a proper texture to the walls. (You know how those people are.) The result looks like this:

Better than a poke in the eye, I guess.

Eventually this wall shape will come from a settings file, but for now it’s all hard-coded.

I know it’s hard to get a sense of scale without recognizable objects in the scene. If it helps, the walls are 3 meters tall.

I’m not sure what you’d properly call this type of shaping. Like a lathe maybe?

In any case, I’m not sure if this is going to be interesting enough, but we’re going to experiment with it and see how it turns out.

#### Footnotes:

[1] Oh, you’ve never seen the Trek Ice Cream machine? Watch the episode where someone uses the toilet. The ice cream is right next door to that.

## 29 thoughts on “Project Bug Hunt #2: Let’s Build Some Walls”

I’m not sure what you’d properly call this type of shaping. Like a lathe maybe?

I did a reverse dictionary search for “vase shaped” and got “vasiform”, “having the form of a cylinder or tube”, which…kinda?
Actually, maybe “fusiform”, ” tapering toward each end” would be better.

This is a pretty cool project, and I’m looking forwards to seeing more of it. To go on a slight tangent, the use of an image file to generate the level reminds me of Noita, which uses collections of similar image files (where there are different colors for each material in the game, locations for traps or random props, etc.) to generate the random levels.

1. Echo Tango says:

Using an image file is not only totally fine, but actually makes debugging a bit easier, since you can see what’s where, without having to build some custom visualizer for some custom file-format. Given what Shamus has shown so far, I wouldn’t be surprised if later, a second image was used as a heightmap, to modify the floors of rooms. You could use the same two images each, for different “levels” of the ship, and use the floor of the next level up for the ceiling of the current level.

I know for sure that like, 3/4 of the spaces in System Shock 2 could be mapped like this. The only things I could see not working are the overhanging catwalks in some areas, and I think some of the levels for The Many have overlapping, twisting tunnels that don’t work in discrete “levels” stacked on top of each other. :)

1. baud says:

This reminds me of an indie dev who, during the prototype phase, stored level information in CSV format and then used Excel (or equivalent) as an impromptu level editor

2. Abnaxis says:

If you’re in a space station, why would the walls be lumpy?

I think you need greebles and/or furniture that goes in the edges of the room more than lumps.

I don’t think the room sans anything but walls needs to be visually interesting yet. Any empty room looks boring, that’s how rooms work.

I feel like you’re approaching this the same way you approach making procgen caverns and landscapes, expecting geography to be eye-catching when that’s not how manufactured spaces function.

1. Echo Tango says:

I guess it depends on what type of ship or space-station we’re inhabiting. Straight(-ish) walls work great for the Von Braun, but The Many need some lumps! :)

3. Echo Tango says:

calculating in my head what things “should” look like. If I’d had the map available, I could have glanced at it to see […] where the process was breaking down.

This is a situation I’ve been bitten by too many times to count; You always need more ways to debug than you think you do. :)

4. John says:

I would love to see the algorithm that generates the line segments for a given room. That kind of thing is fascinating to me.

5. baud says:

it would be physically impossible to make the texture mapping more boring.

Dunno, I kinda like the wall texture, especially with the shiny effect, as it reminds me of tiling with small tiles, a bit like in subway tunnels. It’d be boring if all the game was like this, but having a level in a familiar environment can be fun.

1. Geebs says:

That first texture kinda looks like hessian wall weave.

“And they said that Hard-Vacuum Basket-Weaving course I took in college would never come in handy! Well, who’s laughing now?”

2. Jeff says:

That’s exactly what I was going to say! The small shiny tiles of varying shades actually reminds me of some subway stations, especially with the fat pillars.

6. pseudonym says:

Is the code somewhere online? It would be quite interesting to read it as you talk about it.

7. Draklaw says:

I’m not convinced about generating the geometry manually as you do. I was afraid that you would do that when I read the previous post; I don’t think it is the right way to generate procedural levels in a game engine like Unity. The issue is that creating slightly complex geometry will require a lot of programming, and you will soon be limited by that. For instance, in your previous post you have a screenshot of System Shock 2 with some big tube-thing going into the wall. Building this with code would be a pain.

What I would do is assemble pieces of wall/floor/ceiling built with 3D software like Blender. Basically, a tilemap system like in 2D games, but using 3D walls instead of 2D sprites. I would start with basically the same as you: a grid where each corner gives information about whether it is empty, a wall, a door or something else, but then I would build the geometry by placing a tile in each cell, based on the type of the corners. This is more or less what you do, except that you generate your tiles with code, which requires more work and makes it hard to provide variety.

The first advantage is that it is possible to create variation for each tile. A wall with a window, tubes, whiteboard, etc. It’s also possible to make different tilesets, one for the corridor, one for control rooms, one for science lab, … That way you can have much more variety. The second advantage is that prefab tiles could contain lights, collision geometry if it needs to be different from visible geometry, waypoints or even some script for interactive things (like some computer embedded in the wall or a sign with customizable text).

Finally, if you wanted to make a real game and work with artists, it would enable them to build all these tiles in their favorite software, which would lead to much more interesting visuals than what you can generate with lines of code (in a reasonable amount of time). You would probably still need to make different procedural generation algorithms to make rooms that feel different, however, but that’s where the fun is, right?

Take a look at Wang tiles for some inspiration. You might wish to use a 2-edge tileset instead of a 2-corner as suggested above, or even maybe a “blob” tileset.

1. tmtvl says:

Shamus likes pushing the boundaries of procgen, kinda like .kkrieger.

2. Echo Tango says:

There was an RPG a couple years ago, where the devs gave a talk explaining how their map-generator worked. It was pretty similar to how you describe, but with the allowance of differently-sized “tiles”. I totally can’t find that video, or even remember the name of the game…

My wildly speculative guess is that Shamus is doing it this low-level, to avoid the levels looking like it’s all the same things, just repeated a lot. I think that’s probably why that aforementioned game used variable-size tiles – the flexibility of the smallest tile, with the time-savings and artistic-vision of larger, pre-fab areas.

1. Draklaw says:

Using big tiles is a nice extension. If you want to make big objects that are part of the tileset, like a 2x2m engine going out of the wall with a tile size of 1x1m, it sure seems easier to place one big 2×2 tile than to split the model in 4 tiles and add complex rules to the procgen engine to make sure they are placed correctly with respect to each other.

Anyway, the point of my post was that going low-level will actually make it harder to avoid repetition. If the goal really is to make parametric tiles, like curved walls where the curvature can be customized as Shamus suggests, it is possible to make several tiles with the same topology and blend the vertex positions (a bit like morphing).

To be honest, I think that the most interesting thing in this project is the engine that will create plausible corridors, rooms, doors and place furniture correctly. Generating polygons directly is time-consuming and not that interesting. And while marching squares/cubes are nice to mesh complex implicit functions like terrains, it is not great for artificial environment like a spaceship, so I have the feeling that going that way will introduce a lot of limitations down the line.

That being said, I’m looking forward to the rest of the series. Programming posts are the best :)

8. Richard says:

This is what I come here for.

Can’t wait to see how you get (got) on :)

9. Paul Spooner says:

Okay! So! While having the walls bulge out different amounts is a really good idea. And noting that I realize that you are currently and continuously beating the art team for their shortcomings. Also, this post is weeks behind cutting edge, and you’ve probably already fixed this. (For example, in the header image.)
But!
The walls bulge is precisely, exactly, and perhaps even comically wrong. Do an image search for “scifi corridor” and you’ll see that the bits that stick out are always sticking out more at the floor and the ceiling.

Great work so far though! Carry on! I’m sure the art team will get their act together eventually.
P.S. While you’re straightening them out, remind them that solid flat color images like the room maps compress way better via png with the bonus that you don’t get jpg artifacts.

1. Syal says:

The walls bulge is precisely, exactly, and perhaps even comically wrong.

Well, that just means we have to figure out what’s inside them that would cause that design. It’d have to be some thick room-encompassing thing with tiny stuff above it, probably power cables. Sci-fi room heaters, maybe? Gravity Reversers?

Actually at 2 meters it could be room-encompassing TVs (Tube Tellys!) with mandatory shipwide broadcasts. The upper slope is an optical illusion to make the news look bigger.

1. Mr. Wolf says:

Well, that just means we have to figure out what’s inside them

Aliens. They’re coming out of the goddamn walls!

10. Mr. Wolf says:

Perhaps the Enterprise’s captain’s chair should be in the corner. At least then he’d be able to see the whole bridge without twisting around awkwardly.

1. Moridin says:

The captain is there to be seen, not to see.

1. Syal says:

If he ever wants to see behind him, he can put the back of the room up on viewscreen.

11. Frank says:

This looks like a good project, and I’m interested to see where it leads. I always like to read about others who are working on procedural environments/cities/buildings. I’ve been working on my own project for years and thought about the same things as you. Of course we decided to solve these problems in completely different ways. I like it how there are so many different solutions in this area. This type of project is never ending as there are always improvements that can be made to buildings. If you’re interested in my progress, you can take a look at my blog here:
https://3dworldgen.blogspot.com/

12. ngthagg says:

I needed to implement marching cubes for a project once. I ended up going with marching tetrahedrons instead. More complicated to code, but no ambiguities! Doesn’t roll off the tongue the same way though.

13. Cyranor says:

Another way to make the walls more interesting could be to add “greebles” like the kit-bashed models of old-school Star Wars or Star Trek. Things like pipes, condensers and other various bits of machinery. You could even make a bunch of different greebles that it picks randomly and then might scale randomly to add some character.

1. Rohan says:

Now, THAT would be a great place to use procedural content!

Even better, your procedural kit-basher could generate the geometry at different complexities for different levels of detail. There are some new shader types added to the pipeline that can serve as an alternative to the old vertex/geometry/tessellation trio (well, quartet, given tessellation is divided up into control and evaluation stages) that would be an ideal fit for this kind of work. I’ve only read in any detail about Nvidia’s version (called Task and Mesh Shaders), but AMD has their own equivalent. Probably best to wait until it’s standardised in Vulkan before use or you’ll just end up having to convert it later.

Simulating something akin to the plastic kit part-sheets used in pre-CGI movies would be quite optimisation-friendly. It’s basically flat, so one level could just be a black and white image used to accept or reject a layer of metal texture. The next could be simple extrusion of the image to blocks of geometry. The next could do marching squares, similar to Shamus’s levels in the article with flat caps, and the last could add bevelled caps for when you’re really close. Not to mention you could use multiple resolutions of the underlying image to generate even more inbetween levels of detail. Hell, you could even apply it recursively if you need to do it on something huge.

14. Rane2k says:

I’m looking forward to the next entry!

Something I think will come up in Shamus’ plans:

*Themes* for the wall shapes, textures etc.
So maybe the Borg ship has different pillars and walls than the space elf ship etc.

These different themes would then go into an XML, CSV, INI, YAML or whatever config file. (No idea what Unity uses).

At least, if I understand Shamus’ plan correctly. :)

15. Rohan says:

Lathe isn’t a bad choice of words, Shamus. POVRay has a “lathe” object type that basically takes a 2D curve as a bunch of x,y coordinates and a choice of interpolation curve and makes a surface by rotating it around the y-axis. It’s a great starting point for making anything which would have involved a lathe or potter’s wheel in its real-life construction. I don’t know how it works under the hood, but given POVRay’s underlying nature of being able to directly render mathematical objects without turning them into polygon meshes, it’s bound to be heavy on the maths, so studying the POVRay source (it’s a GPL program) might be of limited helpfulness for game development.
