Procedural City, Part 10: More Performance

  By Shamus   Apr 30, 2009   61 comments

More Performance Enhancements

  • I put more blank space between the buildings, gave the existing buildings a slightly larger footprint, and made the streets wider. Surprisingly, this ended up cutting about 25% of the buildings in the scene, which actually looked better.
  • I merged the individual street lights into strips, like Christmas lights, so all the lights on the same side of the street are actually one big polygon. Streetlights look goofy if you get too low, but this eliminated thousands and thousands of polygons from the scene. I doubt I’ll ever be really happy with how the streets look, but I think I at least have it to the point where I can live with it.
  • I added a widescreen / letterbox mode where the drawn area will only take up part of the screen space. This saved a not-at-all-impressive 2fps on my machine, but I’m hoping it will make the thing run better on low-end graphics cards. Users with fill-rate problems should see improvement when they use it.
  • I limit the cars to ten updates per second. The car movement code is fairly cheap, but there will potentially be a lot of cars in the scene, and after a few hundred iterations it starts to add up. Limiting the cars to 10fps means their movement looks a bit stuttery up close. But since they creep along far away from the camera, there is no reason to waste time calculating each and every tiny little movement, most of which will be too minute to be visible to the user.
  • Cars are limited in how far away you can see them. After a car gets a certain distance from the camera, it’s taken out of play and dropped somewhere into the visible area in front of the camera. This keeps the apparent car density high.
  • Related to the above, the “distance” of the car is calculated in manhattan units. (I can’t find a definition of manhattan units anywhere online. I know I’m not the only person to use this term. I wonder what other people call it?) MU is a distance check that calculates the distance between A and B by simply adding the horizontal and vertical offsets. If you’re a kilometer east and a kilometer south, then you’d walk about 1.4 kilometers on the diagonal, but you’re 2 kilometers away in manhattan units. A real distance check calculates how far it is from A to B if you were flying. A manhattan check (do you still capitalize it when it’s the name of a unit and not a place?) figures out how far you would travel if you went from A to B by traveling only along one axis at a time. A real distance check requires two multiply operations, an add operation, and then a square root. Manhattan units require a simple addition operation. Using MU for visibility means that you can see farther looking down a grid-aligned street than you can looking diagonally over the city. But this is ideal, since buildings are usually blocking the diagonal view. The cases where the user might see through a gap in the buildings several diagonal blocks away are rare and fleeting, and the user is not likely to notice a lack of cars. The performance boost from this is probably too small to measure, but I loved finding a case where I could shamelessly cut corners with no perceptible loss in quality.
  • I fixed a bug where I was rendering all of the black flat-color stuff twice. I was rendering it once with the rest of a building, and with a texture on it. (Although since I wasn’t setting texture coordinates, it was just taking a single pixel and smearing it all over the object.) So it was being drawn once with a pure black section of a texture covering it, and then drawn again later as black polygons with no texture. I’m sure I don’t have to explain why this mistake wasn’t visible while the program was running.
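
To make the manhattan-unit check concrete, here’s a sketch in C++ (the function and variable names are mine, not from the actual program):

```cpp
#include <cmath>

// Manhattan ("taxicab") distance: just add the absolute axis offsets.
// One addition instead of two multiplies and a square root.
float manhattan_distance(float ax, float az, float bx, float bz) {
  return std::fabs(bx - ax) + std::fabs(bz - az);
}

// The "real" (Euclidean) distance, for comparison.
float euclidean_distance(float ax, float az, float bx, float bz) {
  float dx = bx - ax, dz = bz - az;
  return std::sqrt(dx * dx + dz * dz);
}

// Hypothetical culling check: retire a car once it wanders too far from
// the camera, measured in manhattan units.
bool car_out_of_range(float cam_x, float cam_z, float car_x, float car_z,
                      float max_dist) {
  return manhattan_distance(cam_x, cam_z, car_x, car_z) > max_dist;
}
```

For the kilometer example above, manhattan_distance reports 2.0 where euclidean_distance reports about 1.414.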

The final tally: The program is taking up my entire 1920×1200 desktop and runs at over 200fps. It goes down to ~90 when bloom lighting is on and becomes very fill-rate sensitive (finally! it’s nice to know the GPU has some sort of limits) but I’m happy enough with performance now.

The biggest drain on performance now is bloom lighting, which is optional. I could make bloom run twice as fast, but it would look a little less appealing. In fact, it’s only now that I’m questioning if I’m doing bloom “right”.

A Bit About Bloom

I’ve never looked up how other programs generate bloom lighting. I saw bloom in action for the first time when I played Deus Ex 2. I looked at the blobby artifacts it created and made some assumptions about how it was done. Here is how I thought it worked:

(Note that the example bloom I’m using here is a simulated effect, generated in Paint Shop Pro. The artifact I want to show you is really visible in motion, but actually kind of subtle in a still frame, so I’m exaggerating the effect for demonstration purposes.)

This is the full view of the scene at a given instant during the run of the program:

The city, without bloom.

But before I render that, I render a lower-resolution version of the scene into a texture:

The bloom buffer, if you will. It’s actually not THIS pixelated, but you get the idea.

After I have a low-res version of the scene saved in a little texture, I render the real, full screen deal for the viewer. (The first image.) Once I have that rendered, I take the pixelated bloom buffer and blend it with the scene using a “brighten filter”. People using Photoshop call it the “Screen” effect, I believe. OpenGL programmers call it glBlendFunc (GL_ONE, GL_ONE);.

Split image.  On the left side, the bloom is pixelated.  On the right, it’s smooth.

What you end up with is the left half of the above image: big glowy rectangles over bright areas of the scene, instead of blurry circular blobs of light. Again, it looks fine in a still frame, but when the camera is flying around the scene the rectangles tend to stick out.

But you can soften the rectangles out by rendering the bloom layer more than once, and displacing it slightly each time. This fuzzes out the edges of the rectangle and gives it the dreamy glow I want. It also means blending a huge, screen-covering polygon multiple times. As I mentioned in my previous post, blending is (or can be) slow.
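
A CPU-flavored sketch of that multi-pass softening, on a single-channel buffer with 0..1 values (the real thing additively blends a screen-covering quad on the GPU; the pass count and offsets here are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Additively blend the bloom layer onto the scene several times, nudging
// it by one pixel each pass.  Each pass contributes a quarter of the
// bloom, so the total brightness stays the same while the jitter fuzzes
// out the hard rectangle edges.
void apply_bloom(std::vector<float>& scene, const std::vector<float>& bloom,
                 int w, int h) {
  const int offsets[4][2] = {{0, 0}, {1, 0}, {0, 1}, {1, 1}};
  for (auto& o : offsets) {
    for (int y = 0; y < h; ++y) {
      for (int x = 0; x < w; ++x) {
        int sx = std::min(x + o[0], w - 1);  // clamp sample to the edge
        int sy = std::min(y + o[1], h - 1);
        float contribution = bloom[sy * w + sx] / 4.0f;
        scene[y * w + x] = std::min(1.0f, scene[y * w + x] + contribution);
      }
    }
  }
}
```

Four quarter-strength passes add up to the same total brightness as one full-strength pass, which is why the result still glows rather than washing out.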

Is this how games do bloom? I don’t know. (Actually, modern games will invariably be using pixel shaders to achieve this effect, which will be faster and probably let them soften those edges in a single draw. But I’m bull-headed and insist on building this thing using stone knives and bearskins.) At any rate, there is an annoying but unsurprising trade-off between performance and quality going on here, and the “sweet spot” between the two is guaranteed to be very different on different hardware. Bloom may well have been unusably slow just two GPU generations ago. All I can do is give users a way to turn it off and see how it goes.

Feedback & Performance Suggestions

Some people made great suggestions on how the program could be sped up further. The use of vertex buffers was suggested. A vertex buffer lets the program gather up all of the geometry in the world and stick it right on the graphics card. (Assuming the graphics card has the memory for it, which isn’t a problem in my case.) This would eliminate the most expensive thing my program has to do, which is shoveling geometry to the GPU as fast as it can. However, I’m pretty sure integrated graphics cards don’t support them. (Actually, data on what hardware supports them and what doesn’t is as scarce as copies of Duke Nukem Forever.) Since I can’t count on all users having VB support, I’d have to use two rendering paths: the one I have now for people with low-end hardware, and the super-duper fast one for people with newer hardware. This means people with great hardware would go even faster still, and people with lower-end hardware would see no benefit. Adding a bunch of code to push my framerate up to 500fps while doing nothing to help the guy chugging along at 20fps is a brilliant way to expend effort without benefiting anyone. So let’s skip that. (This is assuming I’m right that there are computers less than five years old that don’t support VBs.)

Someone else suggested that a quadtree would be an ideal way to handle the culling I was doing in my last post. It’s tempting, but that would be a lot of work and I’ve already hit my vaguely defined goal.

I know I promised a wrap-up to this series this week, but the tiresome impositions of my mortal form have frustrated my progress. Also I played a bunch of Left 4 Dead. But I am actually done working on the thing. I just need to write about the final steps and release it. My plan is to release it two ways:

1) The source and project files, so others may tinker or point and laugh.
2) As a Windows screensaver.

Next time: Finishing touches.

161 comments? This post wasn't even all that interesting.


  1. Xed says:

    The procedural city posts are easily some of the most fascinating on this site. I feel like I learned quite a lot just by reading those. I’m really looking forward to getting my hands on the source code (and the screensaver).

  2. Simplex says:

    Hello,

    I am a lurker and rarely a poster here, but I have been following your Procedural City Project with interest.

    As far as releasing the city goes:

    “1) The source and project files, so others may tinker or point and laugh.
    2) As a Windows screensaver.”

    Could you release it as standalone exe which can be simply run at any time, not only as a screensaver?

  3. Awetugiw says:

    The use of Manhattan units is often referred to as taxicab geometry, city-block metric, L1-norm or (sorry for the not quite clear notation) ||.||_1. The “real” distance would be the L2-norm, or ||.||_2.

  4. Ingvar says:

    Note that you can elide the square root for a distance metric IFF you square the distance you’re comparing to (so instead of computing s = sqrt ( (Bx-Ax)^2 + (By-Ay)^2 ) you compute s^2 and compare that to your saved d^2 instead of d). Probably falls in the category of “small micro-optimisation” and it all depends and what-have-you, but a sqrt is (usually) more expensive than both additions and multiplications.
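
    In code, the trick looks something like:

```cpp
// Compare against a distance without taking a square root: square the
// threshold once and compare squared distances instead.
bool within_distance(float ax, float ay, float bx, float by, float d) {
  float dx = bx - ax;
  float dy = by - ay;
  return (dx * dx + dy * dy) <= (d * d);  // compare s^2 against d^2
}
```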

  5. Deoxy says:

    Can we both tinker AND point and laugh? :-)

    Seriously, this thing is awesome.

  6. Primogenitor says:

    “Manhattan distance” is how I was taught it. Also, I would say it’s a metric, not a unit. So it would be 2km manhattan distance or 1.41km as the crow flies (euclidean distance).

    I’m tempted to try to re-implement your work in Python with PyGame & OpenGL, since that’s my language of choice.

  7. Michael Mchenry says:

    Nice work, Shamus! I look forward to running your screen saver. You should put a Twenty Sided logo on it somewhere.

    I wasn’t aware of Manhattan units. When I’m looking for cheap distance comparison, I do the two multiplies and the add (which are relatively cheap) and skip the square root. You don’t know the actual distance without the square root, but if all you’re doing is comparing against other squared distances, you have a perfectly accurate comparison.

    Also, I appreciate your stone knife approach. I’m working on a fixed-function project myself. But no vertex buffers? Terrifying! (I’m rendering a fractal-generated planet with LOD from space to ground level. NO WAY that is happening without vertex buffers.)

  8. Michael Mchenry says:

    Re: Skipping the square root –
    Sorry, Ingvar beat me to it.

  9. Gary says:

    Yeah, I was about to say… I’ve never heard of Manhattan distance referred to as a unit. The unit is just meters/kilometers/centimeters/cubits/etc.

    That said…Love the project :)

  10. darthfrodo says:

    wolfire games has a video here about various post-processing stuff they’re playing around with. They give a brief description of how they do bloom.

    btw, the city looks great!

  11. Vincent says:

    Hi,

    I’ve been following your blog for half a year now (got here via Stolen Pixels) and I must say I very much enjoy reading it. Loved this series, especially your smoke-and-mirrors-approach. Can’t wait for the screensaver. ;-)

    Cheers

  12. elias says:

    From what I understand, this is the way bloom lighting works in most games (using pixel shaders):
    1. Render the scene using a filter so that it only draws sections of the scene which have a “light value” (average of RGB, I think) above a certain threshold (like 0.7). Pixels which don’t meet the threshold should be transparent. Increasing the threshold causes the bloom to only affect brighter parts of the scene.
    2. Draw the texture that is the result of the above render, with a horizontal blur (for each pixel, use color and alpha values which average it with a few pixels to its right and left, generally skipping a pixel or two in between). Changing the number of pixels sampled for the blur affects performance a little (and the more pixels sampled, the more blended the blur will look, but I imagine it would look pretty good with 5 samples per pixel: that pixel + two from each side).
    3. Draw the texture that is the result of step 2, with a vertical blur (same as horizontal blur, but up and down, of course).
    4. Draw the scene normally, but alpha blended with the texture which results from step 3. The amount of alpha given to the bloom texture affects how much brighter it makes the scene (and how much that light obscures details in the bright areas; you get a neat effect by animating this value, for instance when you have a character moving from a dark area like a tunnel into bright sunlight you would start with the bloom high and then when the character comes out of the tunnel you animate the value down lower to simulate the character’s eyes adjusting to the light).
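
    A rough single-channel sketch of steps 1 and 2 (thresholds and sample counts as in the examples above):

```cpp
#include <cstddef>
#include <vector>

// Step 1: keep only pixels at or above the brightness threshold.
std::vector<float> threshold_pass(const std::vector<float>& scene,
                                  float threshold) {
  std::vector<float> out(scene.size(), 0.0f);
  for (std::size_t i = 0; i < scene.size(); ++i)
    if (scene[i] >= threshold) out[i] = scene[i];
  return out;
}

// Step 2: horizontal blur, averaging each pixel with up to two
// neighbors on each side (5 samples per pixel).
std::vector<float> horizontal_blur(const std::vector<float>& img,
                                   int w, int h) {
  std::vector<float> out(img.size(), 0.0f);
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
      float sum = 0.0f;
      int count = 0;
      for (int dx = -2; dx <= 2; ++dx) {
        int sx = x + dx;
        if (sx >= 0 && sx < w) { sum += img[y * w + sx]; ++count; }
      }
      out[y * w + x] = sum / count;
    }
  }
  return out;
}
```

    (Step 3 is the same loop with y instead of x, and step 4 blends the result over the normally rendered scene.)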

    • Volfram says:

      It’s been 3 years, so probably nobody’s eyes will ever see this post, but… Yeah, this.

      “But you can soften the rectangles out by rendering the bloom layer more than once, and displacing it slightly each time.”

      you’re what…?

      Again, it’s been 3 years, so Shamus has probably realized other options, but yes, if you wanted to do a bloom, you would render the scene to texture, apply some Gaussian blur, blend the result with the texture using an additive filter, and then display the result to the screen. You don’t even need to draw any transparent polygons. This technique should add near-zero processing time (way faster than rendering a low-res scene several times and then layering them over the final result).

      For an even quicker approach, render the low-resolution scene(as you already are), then interpolate it up to screen resolution and overlay that.

  13. Krellen says:

    Awesome series, Shamus.

    Correct me if I’m wrong, but doesn’t the human eye only process at around 60fps (thus rendering speeds higher than that fairly pointless)?

  14. Kdansky says:

    DirectX 7 had something called Vertex Buffers, though I am not sure that is the same animal. But if it is, it would even work on my 7 year-old notebook (I have written DirectX-code on that one at that time).

  15. Manhattan Distance for me, too, but Wikipedia calls it Taxicab Geometry.

    Ben

  16. Lukasa says:

    If you were going to consider Manhattan Units to be a unit (as opposed to some distance determining scale, as those above me are used to using it), you would definitely capitalise Manhattan, but you probably wouldn’t call it a Manhattan Unit. May I suggest that 1 Manhattan Unit be called a Block, or something along those lines?

    [This being a convention borrowed from the physical sciences, where the unit is rarely the same as the name of the quantity: see Distance (Metres), Capacitance (Farads), etc.]

  17. Rats says:

    Krellen,

    I am guessing the 60fps value you have pulled is from 60i – the framerate used in television broadcasts.

    If my understanding of the matter is correct – 60i is actually only 30fps. The frames are not progressive (like on a TFT screen); instead the odd lines of pixels are changed first, and the even lines second. So 30fps should be smooth enough for most purposes. IIRC, stop-motion animation is typically done at 24fps.

  18. Graham O' Malley says:

    I’ve been having great fun reading this series of articles, nice work!

    Have you looked into using display lists as opposed to Vertex Buffers? I may be off the mark, but as I remember it Vertex Buffers are useful for caching models with lots of vertices and edges that move around (like a model of a person), the idea being that you can update the vertices when you need to and it preserves the relationship between the vertices and edges – a display list, on the other hand, is useful for caching something that remains static, such as a building.

    AFAIK, all the structures that can be used to cache data on your graphics card (like Vertex Buffers, display lists et al) have been part of the OpenGL API for donkey’s years, and most graphics cards can use them – I had to do a graphics project for college about 2 years ago that involved caching a lot of data to the card, which I developed on a laptop with a fairly crappy nvidia go6200 turbocache card as well as a high-end lab machine. I ended up using display lists to render large amounts of static data, and I saw a huuuuuuuuuge framerate increase regardless of the hardware I was using.

    Anyway thanks for your very interesting blog, I look forward to reading it more in the future!

  19. Joe says:

    Yay screensaver!

  20. mockware says:

    I came up with an approximation I liked better for distance, where I take the largest delta and add half of the smaller delta. By using integers for the distance, the algorithm can be broken down into very small inline assembly code.

    This was of course for 2-dimensional space.
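
    In code, the approximation looks something like:

```cpp
#include <algorithm>
#include <cstdlib>

// Approximate 2D distance: the largest axis delta plus half the smaller
// one.  Integer-only, so it compiles down to very cheap code.
int approx_distance(int ax, int ay, int bx, int by) {
  int dx = std::abs(bx - ax);
  int dy = std::abs(by - ay);
  return std::max(dx, dy) + std::min(dx, dy) / 2;
}
```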

  21. Drew says:

    For people with lame hardware, or those of us who might read your site from work, where the 3D hardware doesn’t exist, can you also record a short clip of your city in action, and then post it to youtube or something similar? That way we can get an idea of how it looks on solid hardware, without actually having said hardware available to us.

  22. moswald says:

    Krellen:

    a) That is a common misconception. The “framerate” of the human eye varies, but is almost certainly well above 60fps for the average person. And for gamers who are used to watching computer-generated video, the framerate is often detectable well over 100fps. See this tool here.

    b) If this were to be used in a game with a lot more going on (AI, physics, what-have-you), then the less processing power each piece uses, the better. It may run at 200fps now, but add in the “game”, and suddenly the 2fps Shamus saved may actually matter.


    Looking forward to the screensaver version. Probably the source, too. :)

  23. krellen says:

    moswald:

    That tool only verified my claim; I saw no difference between 60 fps and 90 fps.

  24. Conlaen says:

    Is it just me, or is there suddenly a white to grey gradient on the background? And only when reading this post, not on the main page.

    EDIT: And sure enough as soon as I had hit Submit Comment, it disappeared.

  25. I have done bloom in images in a similar way to the way you are doing it here. Make a copy of the image and use a Gaussian blur kernel it. Use this copy as the mask for the “screen” operation, defined as (values between 0 and 1 here),

    E = 1 – (1 – M) * (1 – I)

    Note that this operation is commutative.

    Instead of Gaussian blur you are doing the low-res thing. I am sure your method is much faster, but rougher, and definitely worth the trade-off.
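
    Per color channel, with values between 0 and 1, the screen operation is just:

```cpp
// "Screen" blend: brightens without ever exceeding 1.0, and is
// commutative in its two inputs (mask and image can be swapped).
float screen_blend(float m, float i) {
  return 1.0f - (1.0f - m) * (1.0f - i);
}
```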

  26. Picador says:

    Presumably, you would capitalize “Manhattan”, as I assume it refers to the island of Manhattan (where, as in any grid street layout, travelling from 2nd Ave and 33rd Street to 9th Ave and 14th Street involves a distance of 7+19 blocks).

  27. Greg says:

    What I was taught is exactly in line with what Primogenitor said so I shan’t repeat it :P

    As for capitalisation, I’m not sure. It’s named after the place, but it is its own construct. I briefly wondered if you capitalise “lincolnshire” in “lincolnshire sausages” but am now too busy wondering if I have any sausages left.

    Laters *waves bye*

  28. Zel says:

    Bloom has already been explained by elias better than I could, but the main idea you should consider is only rendering the bright elements of the scene. This would eliminate the black halo around the buildings in the sky. It’s easy in pixel shaders: just check the RGB value at the end and if it’s lower than 0.7, make it black (0.0). Without them, a low-value filter of your rendered texture should do the trick (cut off anything below 0.7). You could write a fast simple blur (mean of the four adjacent samples) and apply it to the low-res texture at the same time. It might actually be faster than rendering a full-screen quad multiple times.

  29. WWWebb says:

    The optimizing of the rendering is nice and all, but how long does it take to generate the city in the first place? How big is it (RAM or apparent visual distance)? Are there optimizations to be had in creating the buildings or textures? After all that’s the “procedural” in “Procedural City.”

    A screen saver that takes five minutes to load is worthless no matter how smoothly it runs.

  30. Abnaxis says:

    I always heard it referred to as city-block distance vs. crow’s fly distance.

    I did work with vertex buffers in OpenGL about four years ago, but that was for my school’s 3D-simulation lab, so it was a top-end computer for its time

  31. ngthagg says:

    I was actually introduced to this as the postman distance, although having seen the other options, this seems to be an inferior name for the metric. After all, postmen could cut across lawns and such.

  32. elias says:

    @Zel: That’s what I was thinking… it may be slower to render the low res one enough times to make it look blurry than it would to do the bloom the traditional way.

  33. Carra says:

    Seems I’ve been beaten to suggesting that you can just square both parts of the equation to compare distances without calculating a square root.

    Thanks for releasing the code, I’ll have some fun browsing through it :)

  34. If I understood correctly, it seems that in order to render the bloom effect you are rendering the whole scene twice. Once for the small “bloom” texture and the second time for the full size backbuffer.
    Since you’re not using vertex buffers, each render pass is very expensive because it must copy the whole geometry data down the bus, to the graphics card.
    I think you should be able to render the scene only once and then use the backbuffer itself as a texture (but again, I only worked with DirectX and have no idea of how this stuff works in OpenGL). It should be called something like “resolving the backbuffer” to texture.

    And secondly, I suppose that every VGA compatible with Dx9 must support vertex buffers. That should account for every card produced in the last 5 years at least. :)

  35. Joseph says:

    Someone will surely compile this for Mac OS X. Please, when they do, make an announcement and host the mac binaries here so those of us who have sworn off compiling and don’t have windows can see the beauty of procedural city.

    Also… Isn’t Procedural City the name of the new Gritty Crime Drama MMORPG?

  36. Matt` says:

    To the “frame rate of the human eye” question, the human eye doesn’t have a frame rate per se, you just see… as for what’s detectable, that depends a lot on what you’re looking at.

    Generally speaking, anything over the 24-30fps used for TV is enough to convince us that we’re watching an object in motion rather than a series of still pictures, although that may need to be higher with computer generated images due to the lack of motion blur (if something’s blurry on each frame then we perceive it as being in motion at a lower framerate threshold).

    The level at which we’re able to detect anything (i.e. notice something blinking on and off) is rather high. It takes a very quick change for something to slip between the ‘frames’ and have us not see it… I don’t know how quick though, and even that would likely depend on the content (a white screen flipping to black then back to white would be easier to spot than a single pixel switching between similar colours quickly).

    Confounding all of that, there’s psychological effects like change blindness, where something can happen without attracting the attention of the eye’s focus (only a small part of our visual field is really in focus at once, the rest is painted in by our brain’s expectation of what ought to be there) and so we fail to notice a change being made if there’s a short blank period in between, or if the change happens very gradually. So even if we get an epic framerate, the processing software sometimes flakes out on us.

  37. Leonardo Herrera says:

    Beautiful!

    I’m assuming you will put it on Google Code and set a couple of admins so other people can keep tinkering and fixing it for a while. Right?

    Pretty please?

  38. KarmaDoor says:

    The writing about bloom reminded me of a programming article on Gamasutra about depth of field and blurring :
    http://www.gamasutra.com/features/20010209/evans_pfv.htm
    You have already implemented some of the optimizations listed, but there are also details on more complex versions of blurring.

  39. moswald says:

    Krellen:

    Interesting. I don’t see much of a difference between 90 and 120, unless they’re side-by-side. In which case I think 90 looks choppy. Everyone is different, I guess.

  40. Debaser says:

    Yes! A screensaver! Victory!

  41. Mario says:

    Conlaen (#24): “Is it just me, or is there suddenly a white to grey gradient on the background?”

    That’s Shamus’ “True Neutral” color scheme. I think if you don’t have one selected, it is set to choose one at random (at least it’s has been random for me lately).

  42. Fergle F Fergleson says:

    http://www.100fps.com/how_many_frames_can_humans_see.htm

    I found this link regarding how many FPS an eye can handle. The short version is “it depends”. A lot has to do with what you’re showing, and how you’re showing it.

    I’m looking forward to seeing the source code to this project a lot.

  43. Chris says:

    Vertex Buffers are old enough that you should feel confident using them given your 5 year criteria. You can almost make that same assumption about basic vertex and pixel shader capabilities. But vertex buffering is older than all that – you’re plenty safe.

  44. Elethiomel says:

    I like how you’re cutting corners by not cutting corners.

  45. LintMan says:

    I love the screensaver idea.

    Krellen – what is the refresh rate on your monitor? If your monitor refresh rate is limited to 60 Hz (as mine is, and many LCD monitors are), no matter how many fps a game or whatever puts out, your monitor is still limited to 60 – it just can’t redraw the screen any faster than that. Now, that said, I think a game capable of, say, 120 fps on your system on a 60Hz monitor would still likely feel “smoother” to play than one actually running at 60 fps, due to potential fps dips during gameplay, but unless your monitor actually supports a 90Hz refresh, you weren’t really seeing 90 fps.

  46. Ben N. says:

    Chiming in to say I’d love to have a screensaver of this. It looks really awesome so far.

  47. Ell Jay says:

    I love the idea of a “stone knives and bearskins” approach. Spock would be proud.

  48. Nick says:

    Manhattan unit should probably be capitalised, I’m basing this on the Kelvin scale:

    http://en.wikipedia.org/wiki/Kelvin

    Its name is based on a person’s name and is written with a capital but SI units are always lower-case:

    http://www.bipm.org/en/si/si_brochure/chapter5/5-2.html

    As for the framerate, I couldn’t detect a difference between 60fps and 30fps

  49. Nick says:

    Hmmm, I can’t edit posts:

    Not Acceptable

    An appropriate representation of the requested resource /twentysidedtale/wp-content/plugins/wp-ajax-edit-comments/php/comment-editor.php could not be found on this server.

    Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

  50. Jazmeister says:

    I did a double-take on proc city 9 and now I’m reading it all. There was a great comment on procedural content authoring from Eskil Steenberg where he said:

    “Moore law states that we can draw twice as much content every 1.5 years. So I’m suggesting a “Eskils law of game development” if you will, stating: if you cant double the amount of content per man hour you produce every 1.5 years the way you are working is unsustainable. Today’s we have teams of up to 200 people, if we imagine that the next console generation will be 8 times as powerful, does that mean that we need teams of 1600 people? Even a doubling of the team size is quite unsustainable.

    This is not a question of just money. This is about making good games. Good games comes from being able to iterate, make changes and having time to do so.”

    I have a great interest in procedural games; I really think it’s using our greatest advantage over movies and books.

  51. Ian says:

    Yay for the screensaver. Though unless it creates a view on each monitor I’ll have to stick with my fishtank.

  52. DaveMc says:

    I was reminded of an ancient post of Shamus’s, about a ridiculously tiny first-person shooter game. I wanted to remind myself just how small that game had been, and I found the post here. (Ancient in Internet-age; it’s from 2006.) The answer: the game’s called .kkrieger and weighs in at 97k (not a typo: that’s kilobytes), through the magic of procedural content generation. For those of you just joining us, Shamus has been talking about procedural content for a long while; just search for “procedural” and you’ll see. (Once you get past the first two pages of “Procedural City”, that is.)

  53. onitake says:

    this has mostly been mentioned by other readers, so i’m more or less recapping here.
    the common approach to do blooming is via a post-processing pass. if you don’t want to use shaders, you could even do the post-processing on the cpu, provided that your gpu supports copying textures from video memory back to ram and you don’t mind the performance hit. frame buffer objects might come in handy here.
    1. render the scene in full resolution, but into a texture, not onto the screen.
    2. allocate a second full-screen texture and copy the rendered frame into it
    3. apply a threshold filter to that texture by either masking out everything below a certain value, or by multiplying each fragment by itself. this will enhance large values, but make small values even smaller.
    4. gaussian-blur the texture.
    5. combine the two textures. you can play with the algorithm to get varying results here.
    6. render the resulting texture onto a quad covering the entire screen.
    the biggest advantage of this approach is that you only have to do vertex processing once for each frame. but this comes at the cost of doing a lot of fragment processing, and it increases with resolution.

  54. Canonically, the unit for Manhattan distance (the D is not capitalized, thanks) is “the block”, as Manhattan distance is by analogy to grid-as-city. The term “taxicab geometry” was made up by some wikipedia user, and has no citation because someone has confused creativity with documentation.

  55. Volatile says:

    When you’re talking about manhattan units (and yes I think it is acceptable to talk about “named” units like that in the lowercase), you have a sentence that includes “1.4 kilometers on the digonal but you’re 2 kilometers away in manhattan units”. I think you meant diagonal. Sorry to be the grammar/spelling police.

  56. tmp says:

    Actually, data on what hardware supports them and what doesn’t is as scarce as copies of Duke Nukem Forever.)

    You might want to get yourself OpenGL Extension Viewer for that. It’s freeware, and comes with a large database of video cards and their OpenGL abilities; it makes it really easy to quickly check what you can throw at your intended audience.

  57. Brad Colbert says:

    Shamus,

    You should use display lists. They push all the geometry and texture coords down onto the card and return a single ID for you to call. You can make one for every building, or a block of buildings, or the whole thing. You should apply a single transform to the modelview matrix before rendering the display lists. The transform will then happen on the card. Your card shouldn’t even breathe hard rendering 40K textured polys.

    Cheers!

    Have a read:

    http://www.opengl.org/resources/faq/technical/displaylist.htm

  58. Neil Harding says:

    I was reading about your use of Manhattan units for distance culling. I used a more accurate distance check that is just as fast: rather than sqrt((a-b)*(a-b)) you can use a | b. (That’s assuming a & b are both positive; otherwise you need to do abs(a) | abs(b).) It’s within 40% of the sqrt version, and most of the time it’s more accurate than that.

  59. fguille says:

    As a (late) reference for Manhattan distance, which I also knew of, check French Wikipedia article for math distance at:
    http://fr.wikipedia.org/wiki/Distance_%28math%C3%A9matiques%29
    I’m enjoying reading this series, slowing down to keep some for tomorrow…
