Project Octant Part 8: The Time-Hole

By Shamus Posted Tuesday May 15, 2012

Filed under: Programming 138 comments

So let’s talk about data structures. I mentioned way back at the start of the project that there are certain costs to using an octree. An octree will make interfacing with ALL blocks a hundred times faster but make dealing with a specific block several times slower. I’m curious how…

Hang on. My program has been behaving oddly recently. It’s like it will suddenly stop building new blocks and I’ll end up stuck on this island floating in empty space. I’ve got the program set to aim for 60fps, and if one thing starts eating too much CPU then the block-building gets choked off. Let’s see where these CPU cycles are going.

I add a little benchmarking loop. Right now there are just five systems running:

  1. Avatar: This moves the camera around and does a little super-cheap collision detection.
  2. World: During block generation, world allocates these tables of handy values to speed up building. This part just looks for out-of-use tables and unloads them.
  3. Scene: This is the heavy hitter. It does those crazy heavy-duty noise calculations, places the blocks, maintains the octree, and turns the blocks into textured polygons.
  4. Window: This bit really does almost nothing. It checks for keypresses.
  5. Qt: Ah. This is where Qt gets a chance to process stuff. Qt is the platform I’m using to write this. Go back to part 3 if you need a refresher.
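
Out of curiosity, here is roughly the shape of that timing loop. This is just an illustrative sketch, not my actual code; SystemTime(), systems[] and SYSTEM_COUNT are made-up names standing in for whatever millisecond clock and update calls the program really uses:

// Accumulate milliseconds per system over one second, then report.
long accumulated[SYSTEM_COUNT] = {0};
long report_time = SystemTime() + 1000;

while (running) {
  for (int i = 0; i < SYSTEM_COUNT; i++) {
    long start = SystemTime();
    systems[i]->Update();                  // Avatar, World, Scene, Window, Qt...
    accumulated[i] += SystemTime() - start;
  }
  if (SystemTime() >= report_time) {       // once a second, dump the totals
    for (int i = 0; i < SYSTEM_COUNT; i++) {
      printf("%s: %ld\n", systems[i]->Name(), accumulated[i]);
      accumulated[i] = 0;
    }
    report_time += 1000;
  }
}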

Let’s see where the time is going:

octant8_1.jpg

I don’t have access to a microsecond timer here, so we have to make do with milliseconds. This is like wanting to measure how long a man’s stride is when he walks, but your ruler is only marked off in kilometers. You have to measure a lot of individual steps to figure it out. These measurements were taken over the course of a second. There are a thousand milliseconds in a second, so we’re looking to account for those thousand milliseconds. Note that there’s a tiny bit of overhead to taking these measurements and we’re likely to miss a tick-tock here or there on really short tasks, but this is good enough to give us a broad understanding of where the time is going.

Note that the item “Qt & Rendering” is there because I do my rendering during the bit where Qt gets a timeslice. The two items in parentheses break that number down. (Rendering) is how much time I spend shoving polygons at the graphics card, and (Qt) is how much time Qt eats doing… whatever it is that Qt does.

Hm.

This is not good. In fact, this is exactly what I was afraid of back when I started using Qt. Over half of my CPU cycles are being eaten by Qt, doing… what? I have no idea. Qt is in charge of I/O, so it’s doing some keyboard and mouse processing. But that stuff is so fast that I shouldn’t be able to measure it with this clock. It’s also in charge of drawing that black box with the checkboxes and text in it.

Could that be it? Is Qt burning six tenths of a second on a rectangle of text with a couple of checkboxes? Hm. If I get rid of the box then I won’t be able to see these results. Let’s get rid of the checkboxes and see if that changes anything.

octant8_2.jpg

That made a big difference. Qt is now “only” using half of the CPU to draw this stupid black rectangle and text. For contrast, during the other half of the second I’m rendering millions and millions of polygons. Picture two guys in a library. In the same hour, one of them reads the complete three-volume set of The Baroque Cycle and the other guy reads a single Family Circus strip.

Let’s do another test. Let’s give Qt a bunch more controls and see how it reacts.

octant8_3.jpg

I’ve added a little text label, another text box, and a couple of progress bars. (I dunno. They look kind of like health bars or something. Seems like a reasonable thing to expect if this was a game environment.) And now Qt is eating 71% of our CPU. This would be funny if it wasn’t so sad. This makes it pretty clear why nobody’s ever tried to use this thing for a game. A real game interface would be even more complex than this.

Keep in mind that these controls aren’t changing. I’m not altering the position of the faux-health bars or anything else that would create a need for them to be re-drawn every frame. The text boxes only update once a second.

With performance like this, I might as well go use Visual Basic. (No, not really.) Note that this doesn’t mean that Qt is bad. It’s just built with different goals. Getting a cross-platform windowing system to play nice with 3D rendering like this requires a lot of levels of abstraction. A normal Qt application is just some kind of interface with buttons and sliders and whatnot, without any 3D rendering going on. In those circumstances, the user isn’t going to notice a few missing milliseconds. My program is sensitive to single-digit millisecond usage, but a human being generally isn’t going to notice until it’s in the hundreds, and they probably won’t care until it’s near a thousand. The performance needs of Qt are at least two orders of magnitude away from the performance needs of a 3D game.

I find this page, which seems to be from one of the developers behind Qt. It confirms my worst fears: This CPU cost is an inescapable reality of using Qt. Even if all of those optimization techniques worked for me, and even if they applied to every little interface item, and even if I made maximum gains from all of them, it still wouldn’t do more than cut the CPU usage in half, which would still be ten times more than it should be.

If I disable the Qt drawing entirely (and have it print out the timing info to the console window instead) then we get:

FPS: 179

Avatar: 0
World: 1
Scene: 25
Window: 0
Qt&Render: 962
(Qt): 251
(Render): 711
Qt ms per Frame: 1

So the real overhead of using Qt is only ~1ms per frame. That’s reasonable. It’s just that the Qt drawing tools are too slow to be useful. A shame, really. “A platform-independent interface” was the main selling point of Qt for me. I’ve found a lot of other things to like about it since then, but losing the GUI is pretty much a deal-breaker.

When my block-building started getting choked off, I wanted to come in here and look for ways to optimize the octree or something. But it looks like the first order of business is “stop using Qt” if I care about speed. I had a bunch of ideas for how I might tackle the crazy challenges that Goodfellow faced in part 32 of his series. Seems sort of pointless now. There’s no reason to agonize over the aerodynamics of your car while you’ve got a half-ton of cinderblocks in the back seat.

Oh sure, I could work on my optimizations like this, but the CPU drain of Qt is noisy, so measuring performance would be like trying to play Jenga on horseback. Also, experience has taught me that trying to monitor performance by reading a continuous spew of text in a console window is really aggravating.

I suppose I could take my code and go back to Visual Studio and SDL, which is what Project Frontier used. But that means shopping for a suite of image loading, interface-drawing, font-reading tools. Yuck. I don’t want to unravel some idiotic chain of dependencies. I don’t want to download a dozen different SDKs, spew their files all over my hard drive, and then try to get them to compile and link with my project. I don’t want to have to choose between the package that only solves half the problem, the package that sucks to use, or the package that ties me to Windows. I don’t want to have to learn a new programming language.

I just – this external packages stuff is such a dang killjoy for me. I really, really, really hate it. It takes all the fun out of programming. When I was younger I tolerated it, but now I seem to have lost my patience. I know why this problem exists and I understand why there aren’t easy solutions. I just don’t have the desire to put up with it these days.

Maybe I’ll work a bit more on this project before I shelve it. Maybe I’ll migrate back over to Visual Studio and just muddle along with no interface. Maybe I’ll just stop here. I don’t know. I’m going to walk away from it for a bit and see how I feel about it then.

Onward?

EDIT: I’ve been meaning to ask: For those of you who played around with Project Frontier, what was the biggest hassle in porting it? I know the capitalized filenames were a problem. “Main.cpp” instead of “main.cpp”. I was obliged to use the former professionally for years, and eventually it became a habit that I didn’t question. I’ve since been making sure everything is lowercase, but what other headaches did you encounter? (Aside from, you know, bugs and stuff.)

 



138 thoughts on “Project Octant Part 8: The Time-Hole”

  1. Shabavh says:

    Shamus, it’s obvious you’ve been itching to work on this for a long time. Don’t let it go. All we need to do is take a step back and find you the right tools.

    1. MilkToast says:

      Yes. We’re all intelligent people. We can figure this out.

    2. Volfram says:

      http://www.dlang.org

      Just saying. I think it’ll only set you back about 2 weeks at this stage, and pretty much all of the tools are cross-platform, and the Derelict library includes a set of bindings that make SDL and OpenGL integration about as painless as can be.

      Step 1: include Derelict
      Step 2: initialize Derelict libraries
      Step 3: interface with SDL exactly like before.

      1. paercebal says:

        Am I wrong, or are you missing the point? Isn’t the problem the GUI library (QT), instead of the language (C++)?

        Did I miss something? Does D offer some D-specific, easy-to-link-with, cross-platform GUI library able to gracefully merge with OpenGL?

        Because linking with SDL is already easy enough in C/C++ (Step 3).

        What is Derelict bringing, in addition to the obligation to learn D (Step 0) and include/Link (Step 1) and initialize (Step 2) Derelict’s libraries?

        1. Volfram says:

          Thanks to the clarity provided by a month’s retrospective, I can definitively say: yes, I was completely missing the point.

          D has a lot of nice features that would likely solve a lot of headaches. GUI support is not one of them.

  2. Carriage says:

    What is it that makes interfaces hard to do? I’m sure you’ve covered this before but I’m not sure where. If text is the main issue, could something more abstract work?

    1. Paul Spooner says:

      Basically, it comes down to the matter of the GUI. Making an interface based entirely on typing text commands is fairly straightforward. Making an interface with boxes and buttons and sliders and scroll-bars that only works on one operating system for one platform is fairly difficult. Making all that work on any platform is really hard.

      Think about writing a technical manual for something simple, say a ruler. But imagine the user doesn’t know anything about numbers or lengths. How do you communicate that? Got it? Okay, but it also has to be a joke book. Every paragraph has to be witty, funny, and make the user smile. Yikes, this is becoming a challenge! Oh, and translate it from English into Chinese and Sanskrit. It’s hard enough to be funny in just one language, but to get those same jokes to work in several widely different languages is a real nightmare. Plus no one speaks Sanskrit anymore.

      That’s kind of what multiplatform GUI programming is like. “user manual for a ruler” sounds simple right? What makes that hard to do?

  3. X2Eliah says:

    Hm. Can you remove the GUI from operating system / user setup interaction and have just a single internal font-package that gives the images as plain graphical texture to in-world objects positioned as GUI? It seems the biggest hurdle in all the gui stuff is creating the link between what the program wants and what the computer’s OS provides/offers…

    1. Volfram says:

      That’s exactly why he’s using QT. It’s supposed to handle that stuff so he can focus on the fun part.

  4. Jabor says:

    Usually the idea is to get some basic concurrency into the mix. Let Qt handle the UI guff, message processing, etc. on one thread, while your 3D rendering is handled on another one.

    This also has the advantage of keeping the UI responsive even if your rendering hits a particularly nasty scene and starts slideshowing.
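
    The rough shape of that idea, sketched with C++11’s std::thread. This is a hypothetical skeleton; the commented-out Qt and drawing bits stand in for the real work, and the tricky part in practice is that the OpenGL context has to be made current on whichever thread does the drawing:

    #include <atomic>
    #include <thread>

    std::atomic<bool> keep_rendering(true);

    void RenderLoop() {
      // The GL context must be made current on this thread before drawing.
      while (keep_rendering) {
        // ... build blocks, update the octree, draw a frame ...
      }
    }

    int main(int argc, char* argv[]) {
      // (Imagine the usual Qt setup here: QApplication app(argc, argv); etc.)
      std::thread renderer(RenderLoop);   // 3D work runs on its own thread
      // ... run the Qt event loop on this thread, e.g. app.exec() ...
      keep_rendering = false;             // tell the render thread to stop
      renderer.join();
      return 0;
    }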

  5. ENC says:

    Tell me again why Shamus is up at 5:30 AM on a Tuesday morning?

    At least it’s 9:16PM where I am.

    It’s a damn shame you’re getting frustrated at others for having no sympathy or empathy for the fact that, you know, someone would ACTUALLY BE USING those tools someday.

    But documentation is notoriously bad throughout human history. In the music world, there was a composer alive during Bach’s time who was better than he was. The reason we know nothing about him (apart from a name) is that all his good pieces were destroyed and all his bad ones were kept in a separate vault (a vault designed to hold bad music).

    Also of note is that the ordering of music is documented atrociously, with everyone using different systems, and new music from the likes of Dvorak is still being discovered today, as the music is literally just manuscript with notes on the page, which has to be attributed to its actual composer from multiple accounts.

    1. Dasick says:

      If no one has heard his music, how can we tell that he is better than Bach?

      But yeah, humans are notoriously short-sighted. I guess that anyone of real historical value is just too modest. “I’ll use this scrap of paper and not sign it, because who would ever want to read this besides me?”

      1. ENC says:

        He (Bach) was employed by a nobleman at the time, but there was someone under that nobleman in a position higher than his, and Bach was already revered during his time as one of the greatest composers who ever lived.

        Who in the hell would you appoint above that? Except the vault with all the good music was destroyed during a war, and more than 90% of music has been lost forever; so it’s always a rare treasure to find anything dating back before the Renaissance especially (when music became ‘music’ as we know it today).

        I’m also noticing a slight irony in the Mr. Smith picture for the posts; it seems an apt coincidence, what with QT ‘multiplying’, so to speak, and running out of control.

        Edit: There’s also very poor documentation in the documentation itself for music: people would have books written by the scholars of the time describing ‘the art of music’ yet leaving out very critical details of how music was actually PLAYED (they just assumed new generations would be the same) during that time, so we can only roughly guess at it.

        1. Dasick says:

          Again, how can we tell he is better than anyone if we have no idea what his music is like? Hansi Kuerch is pretty badass, but is Justin Beiber a better musician because he makes more money and is employed by a bigger “noble”?

          (they just assumed new generations would be the same)

          Or maybe they just didn’t care how music is played, so long as it is played properly. Heh, historians getting screwed over by open-mindedness.

          But to be fair, maybe the scholars were forward thinking. Maybe they were vague on purpose so that people quoting them wouldn’t disregard certain musical sounds because it’s not in the books. Y’know, letting the music evolve.

          1. ENC says:

            My phrasing was poor.

            They were employed by the SAME noble, who only employed 2 composers, and this one held appointment above Bach. They weren’t employed by different noblemen (in which case it would be understandable that Bach was likely better).

            So either the guy has crap taste in music and hired his brother’s friend’s cousin, or the guy was just that good. Considering the vault was designed to hold the absolute best of things for the ages, we allow for the possibility he may’ve been better than Bach; unfortunately, we only have his crap music, held in the vault designed to hold his crap music, because war ravaged the good one. IIRC the nobleman was a musician himself as well.

            Ehh, disregard musical sounds? Ooo boy, the church would not like your thinking there; any music they didn’t approve of was evil (non-organ and non-vocal, essentially; organ was only allowed within the church, and no other instruments for a time at least). Except we have no idea how they played or even sang (they didn’t even have RHYTHM dictated then, so we pretty much came along and guessed what the rhythm was like). Try listening to a Gregorian chant and think how we have no idea how it would sound to them; maybe we’re making certain parts too loud and never considered it, etc.

            Nor do we know how the instruments were specified to be played then, so we have no idea how they would hear things (it is a fact they would find modern music ‘disgusting’, just as modern musicians find their music ‘odd’ due to its modality). Certain elements of composing were also things they took for granted in writing, so we had to extrapolate from what we had.

            1. Dasick says:

              I see your point about Bach. It’s still possible that the other composer (what is his name, by the way?) merely held the office due to his experience or some other quality, but in general, I do see your point.

              When you were talking about music scholars I thought of the Greek ones… though I now wonder whether they’d be as open-minded as I am to intentionally leave blank spaces for future generations to fill in.

              I think the real reason we’re lacking documentation from that era, though, is the expense of making literature. I imagine recording instructions on instruments, hand/mouth positions, basic notes, etc. would take a lot of space and still not be as effective as having someone else explain it to you. Music is, after all, about sound, not symbols on paper. So the scholars only bothered to write down the ideas that go into the art, but the practical aspects were left for the word-of-mouth teachers.

            2. Ateius says:

              It’s important to remember that Bach wasn’t always the Bach we know, immensely famous and renowned. Bach was once a total nobody with a talent who had to work his way up to recognition. I’m assuming this nameless noble was the Duke of Weimar; Bach worked for him twice, first as the lowly Court Musician, later as Concertmaster. He was later Director of Music for the Prince of Anhalt and Royal Court Composer for the King of Poland, as well as working for various churches.

              I have no doubt that, at several points during his long climb to success, Bach toiled away with other men ranked over him, regardless of their respective talents, and the fact that such a person held that rank does not mean they were necessarily of greater skill than Bach. There are a lot of reasons why there may have been a composer of “higher rank” than Bach, especially given the way the courts of nobles and lords worked. “Merit” is only one of many ways to gain a position, and often not even the most important.

              1. Loonyyy says:

                Indeed. Also, to attempt to rank talent with regard to, well, rank, is hardly going to work, especially for a subjective subject (The position is given by a person who makes their personal choice, not necessarily reflecting the overall popular opinion), and one relating to the usual details of employment: ie, that your boss might be less capable than you, but is above you simply by virtue of being hired first (I’m sure many have experienced this *twitch*).

    2. Mephane says:

      Oh, this is nothing. Just think about what treasures of ancient knowledge and history were lost in the destruction of the library of Alexandria. For example, we know from the remaining fragments that even back at the time they knew that the Earth was a sphere (and had even calculated its diameter with remarkable precision considering the relatively simple tools available at the time). That event alone might have thrown back our technological and scientific development by many centuries. Imagine where we might be now had that not happened…

      1. ENC says:

        I still find it remarkable that the Gravitational Constant was first calculated through OBSERVATION!

        1. Dasick says:

          People back in the day were capable of some pretty amazing things. Have you ever seen the pyramids? You can’t even stick a blade between the stones, that’s how smooth they are. Erm, not that I’ve ever tried that. No. That would have been terrible.

          It always pains me a little bit when people portray ancients as barely intelligent cavemen. :(

          1. silver Harloe says:

            “You can't even stick a blade between the stones, that's how smooth they are”

            I’ve heard that before and always wondered:

            do we know the stones were crafted to that tolerance originally? or is it vaguely even a possibility that centuries of being smooshed by heavy stones while expanding and contracting as the temperature changed widened the stones out as far as they could go (to the point where they are now so close you can’t even stick a razor blade between them)?

            1. Mr Guy says:

              If anything, it should be the opposite.

              Consider a set of metal blocks placed very close together on a concrete surface just before dawn.

              Metal expands more than concrete (for this example, assume the concrete’s thermal expansion is negligible). The friction of the blocks with the concrete opposes movement.

              What happens when the sun comes out? The metal blocks expand. Let’s assume the blocks were close enough together that they touch while they’re expanding (wouldn’t be interesting otherwise). The pressure of the blocks expanding will force the blocks further apart than they were initially, overcoming the friction. At their maximum expansion, the line of blocks will be longer than it was before dawn.

              Now night falls. The blocks shrink. But they don’t return to their original positions! The blocks will stay in the positions they were pushed to, because there’s no force acting to counter the friction of the concrete. Expanding blocks push other blocks, but contracting blocks do NOT pull other blocks.

              At the end of the day, the blocks are further apart than when they started, and the gaps between them are wider. That’s what you’d see in any case with thermal expansion/contraction over time.

              If that’s so, why aren’t the pyramid blocks further apart than they were initially? I’d wager it’s because they’re large, have a LOT of thermal mass (it takes a lot of energy to heat them up), and they’re made of a material with very little tendency to expand thermally.

            2. Mephane says:

              Another hypothesis I have heard is that the stones might not have been cut from natural stone, but could have been made from something similar to concrete, i.e. they were cast and not cut. I am not sure how viable that idea is, however, but I would not at all be surprised if the ancient Egyptians had found some mixture that could be shaped very much like modern concrete.

          2. El Quia says:

            And this is why I freaking hate so much the fact that people simply can’t believe that ancient people was able of doing that stuff, so surely aliens must have been involved…

            Imagine if that in 1000 years (or 2000, or 4000), human will look back in history, see our accomplishments and say “no way the primitive humans, with their poor grasp on [FUTURE-SCIENCE] were able of doing that! Clearly, it was aliens” and you will understand why that is both stupid and offensive to those people’s accomplishments…

            1. El Quia says:

              Ok, this is what I get for not re-reading what I wrote before posting.

              “believe that ancient people was able of doing that stuff” should be “believe that ancient people were able of doing that stuff”

              “human will look back in history” should be “humans will look back in history”

              And “no way the primitive humans” should be “no way that primitive humans”

              Well, sorry for the mess! (And I hope I am not introducing more mistakes in this post :P)

      2. Rick says:

        A lot of great ideas died with Nikola Tesla and, no doubt, with many others too.

        Tesla comic on The Oatmeal

        1. Mephane says:

          Of course, Tesla was absolutely awesome. I was just stating one of many noteworthy examples in human history (and one of the few that I know enough about to not blurt out some utter nonsense). ;)

        2. Zukhramm says:

          Yes, Tesla invented space, the internet and light itself, but the Space Lizard, the FBI and Thomas Edison destroyed his work and murdered him. And to this day, no one knows of what he did, or that he even existed. Except all the people on the internet who somehow know that no one knows.

          1. Drexer says:

            This.

            I have a great respect for Tesla’s achievements and works, and the comic goes well for around 1/3rd of the way. But then it degenerates into the same conspiracy-theory-Assassins-Creed-Dan-Brown generalizations that are exaggerations made across the ages, most of which have already been disproven pretty thoroughly, even by the Mythbusters.

            I swear, I’m starting to hate Tesla just because of how the conspiracy theorists use him to support their crazy minds. Which is a shame really.

            1. krellen says:

              It’s still true that Edison was a douchebag, however.

              1. MadTinkerer says:

                Yes, but as I have to always point out to people: Edison didn’t have the idea but he was the guy who made it work.

                Edison absolutely does deserve credit for engineering a commercially viable lightbulb that lasted. All the shitty prototype lightbulbs that burned out quickly (among other faults) were never going to do anyone any good. Edison’s lightbulb is what everyone was actually able to buy and use for a decent amount of time and so Edison is the guy who deserves the credit.

                It’s like Pong vs. that oscilloscope game that is technically the first videogame ever. Yes, the gameplay is the same (Pong’s big innovations were the score system and making it into a coin-op) but do you play your games on an oscilloscope, jackass?

                EDIT: To clarify, I am not calling anyone here a jackass, but rather a theoretical person who would compare a lab project that the creator abandoned in favor of making his own console (because people have televisions in their homes, not oscilloscopes) to a completely separate machine that happened to have similar gameplay.

                1. Sumanai says:

                  So commercial viability dictates the value of an invention and the creator?

                  1. WJS says:

                    Yes. Inventing something that’s no use to anyone is of far less value than inventing something that everyone finds useful.

          2. Dasick says:

            Nice going there. I understand you’re exaggerating for effect, but you just called a lot of people stupid on the basis of things that most people haven’t said or don’t believe.

            1. Zukhramm says:

              You wouldn’t believe the things I’ve seen people believe about Tesla.

              1. Sumanai says:

                You wouldn’t believe the stuff I’ve heard people say about a lot of stuff, but when someone brings one of them up I don’t accuse them of being part of the Nutjob Party.

                1. Zukhramm says:

                  Well, I do.

      3. Soylent Dave says:

        And the Romans had steam power and used antiseptics – both things we forgot about for a few hundred years after the Empire fell and only really rediscovered in the 18th century!

        Sometimes I wonder how many ‘dark ages’* we’ve had that we don’t know about; how many civilisations have risen and fallen without leaving us anything to remember them by. How much have we actually lost?

        How much further along could we be if we didn’t keep destroying everything every so often?

        (but then I think “nah, archaeologists are pretty thorough”)

        *And yes, I know why they’re actually called the Dark Ages, but they do also mark a point during which we forgot a lot of things, even if we weren’t rolling around in mud

        1. Ateius says:

          The Romans were amazing in a lot of things, and had very impressive heavy industry, but steam power was one thing they didn’t achieve.

          There were primitive steam engines, of course – Hero of Alexandria built one in the 1st century CE – but they were only small, proof-of-concept models. Nobody could figure out how to successfully scale it up or put it to use in a practical application, as it was quite weak and anemic compared to the massive waterwheels that powered their mines and industry.

          1. Soylent Dave says:

            Steam ‘power’ might have been bigging it up a bit, but Vitruvius was writing about them a century before Hero; the design itself owes a lot to the Ancient Greek mathematician Ctesibus (who was writing in the 3rd century BC), although the Romans probably wouldn’t have admitted the Greek influence (barbarians, don’cha know)

            The device in question (which I can never remember the name of) has all the parts of a full-scale steam engine; it just never seems to have been attached to anything, so what they used it for is a bit vague, though (perhaps just “ooh, look at this – cool, isn’t it?”)

            I think a big reason the Romans never developed such technologies much further is that they didn’t need to – they had more power than they needed in the form of slave (and citizen) manpower; labour-saving devices weren’t exactly high on their agenda, probably.

            1. WJS says:

              I don’t think that’s a valid argument.
              “The Romans had steam power! Sure, they didn’t use it for anything, but they totally had it! Trust me!”

  6. Mad says:

    You should have a look at integrating QtQuick 2.0 for your UI needs. Not only is it much more elegant and versatile, it’s also written with OpenGL as the backend, so it should be much better for your use case.

    You could also, of course, move the widgets out of your OpenGL window. Though then there’d be no HUD / in-game UI.

    Another thing you could try is to start your app with “-graphicsystem opengl”, maybe that helps.

  7. Totally off-the-wall, but have you considered Unity? I would assume Unity has a boatload more overhead than something like Qt if it weren’t for the fact that it was engineered to make games.

    1. X2Eliah says:

      Isn’t unity a full-on game engine, though? Shamus is writing pretty-much his own engine here, dealing with rendering and world-storage and so on…

    2. Dasick says:

      As far as I reckon, Unity is a complete engine in itself. Shamus is mostly messing around with the engine, and I think you need the source code to alter the engine.

      Edit: Ninjas are everywhere.

    3. peter says:

      There’s been quite a few decent games made with unity, as well. Hell, the new Wasteland will be made in it.
      It’s not ‘just’ for browsergames.
      Not sure if the free version has all Shamus needs though, and the Pro version is rather expensive, I think.

      1. Piflik says:

        Pro is 1500$. But that doesn’t include licenses for Android/iOS (not to speak of XBox/Wii/PS3 which need additional licenses…and rather costly ones)

        But yes, Unity is a really good option for creating games.

    4. nmichaels says:

      Also with Unity: no Linux support :o(

    5. Piflik says:

      He already answered this. Unity is more for people with lots of art and little programming, while Shamus is the opposite.

  8. Jash says:

    Go crazy! C#! XNA! Of course, you’ll need to change language, rewrite everything, and lose the cross-platform thing. But XNA handles everything you mentioned in Part 3 except for the interface bits and pieces. But from what I’ve heard, there’s no such thing as a one-size-fits-all game interface library.

    1. Lord Honk says:

      Hell, might as well use Java in that case :P To bring up my favorite quote: “Java is high performance. By high performance we mean adequate. By adequate we mean slow.” It’s entirely possible to get nice 3D games running in Java (hell, with opengl and the easy window management, it’s simpler than SDL), but the overhead of Java just stops everything dead in its tracks.

      But in all seriousness, it’s hard enough to monitor resources on an average project when you’ve written everything yourself in plain C; when you’re using black boxes like Qt there’s no telling how or where the problem is. Is it dependent on the OS? Is it dependent on the architecture? What about the graphics card?

      As Shamus said, it’s a shame that Qt’s big selling point is also one of its big faults.

      1. Dasick says:

        The difference is that C#’s XNA is a GAME creation library, whereas Java is closer to Qt in speed and compatibility. So he’d be trading one slow but multi-platform environment for another one.

        1. Darkness says:

          C#/XNA is designed to circumvent the gross limitations of the XBox. No memory, limited GPU, normally little storage and some version of read-only game object. It does that job well. It even hides most of the junk that MS has thrown in the way of gaming.

          That said, it is of little use outside of MS environments. Using the standard MS rule: Embrace, extend, excrement.

    2. Robert Maguire says:

      It really is a shame that they don’t meet his requirements. The C#/XNA combo made creating games fun again for me. And XNA is the only framework I’ve ever worked with that didn’t make me want to tear out my hair in frustration.

      Though I’m obligated to point out that C#/.NET has a cross-platform equivalent, Mono, and XNA is mostly ported via MonoGame, which seems to have matured quite a bit since I last checked (enough that it was used for the Mac port of Bastion).

      1. BvG says:

        Just for completeness sake, Unity _IS_ Mono, kind of, on top of it, riding it over the 3d target line, or something like that.

  9. Dasick says:

    What version of SDL are you using? I’m using 1.2.13 that was compiled for us by our teacher. As far as I know, it handles drawing stuff, handling input, using fonts, playing sound and video, and some other things as well.

    To be fair, I haven’t messed around with it much so maybe it’s missing functionality, but from what he showed us it handles all these things rather well.

  10. Simon Buchan says:

    If I were doing this? Strip out anything you don’t need: and I don’t think you need a full-blown UI widget library.
    Find a quick way to get text on screen – a quick google gives the excellent looking http://code.google.com/p/freetype-gl/.
    Pick a single canonical texture format, and use whatever standard implementation works for that, instead of a dependency on some big library with its own crazy dependencies. I believe .dds works near directly with glCompressedTexImage2D(), so that’s probably a good place to start looking.
    Other than those, I think you should be able to handle any UI you need yourself without terribly much trouble, since immediate mode widgets are way easier to implement than old-school retained mode (they pretty much just have draw(), click() and key() as an interface).
    Though proper multiline, mouse and selection supporting text input can be a pain, you should be able to get away with just ASCII, left, right, backspace, delete as your input in a game context, which isn’t too hard to implement (keeping track of character widths to put the caret in the right place would probably be the biggest deal) – glConsole does it after all.
    The real question, of course, is if it would be time well spent, when you have things you actually want to accomplish other than once more implementing widgets.
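
    For a sense of scale, an immediate-mode checkbox is more or less this much code. A made-up sketch; DrawBox() and DrawLabel() are hypothetical stand-ins for whatever quad-and-text drawing routines already exist:

    struct UIInput {
      int mouse_x, mouse_y;
      bool clicked;            // true only on the frame the button went down
    };

    // Assumed to exist elsewhere: draw a (filled) box and a text label.
    void DrawBox(int x, int y, int size, bool filled);
    void DrawLabel(int x, int y, const char* text);

    // Draw the widget, handle the click, all in one call per frame.
    bool DoCheckbox(const UIInput& in, int x, int y, const char* label, bool* value)
    {
      const int size = 16;
      bool hot = in.mouse_x >= x && in.mouse_x < x + size &&
                 in.mouse_y >= y && in.mouse_y < y + size;
      if (hot && in.clicked)
        *value = !*value;          // the click() part
      DrawBox(x, y, size, *value); // the draw() part
      DrawLabel(x + size + 4, y, label);
      return *value;
    }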

    re: Porting, was there a reason you picked Cg over HLSL for your shaders? It’s hardly a heavy burden or anything (if anything, it’s a nice simple angle for me to get started on attacking the code), but I was curious why you picked it when it is a little harder to set up and is nVidia specific.

    1. Shamus says:

      The only reason I used Cg was that I’d already used the library myself. I’m thinking GLSL next time I need some shaders.

    2. Jake Albano says:

      HLSL is DirectX. Shamus uses OpenGL.

      1. Simon Buchan says:

        Blarg. That’s what I get for posting right before bed. I meant GLSL, of course.

    3. DrMcCoy says:

      “I believe .dds works near directly with glCompressedTexImage2D()”

      It doesn’t (except maybe with some crazy extensions that are only available in a fraction of hardware).

      You still need to manually parse the DDS header, look at what format the image data is in, and grab the mipmaps.

      Also, if you’re using compressed textures, there’s still OpenGL implementations out there (software, mostly) where DXTn is not available. You might need to decompress them manually (which might or might not be a software patent landmine. IANAL.).

      1. Simon Buchan says:

        Which is why I said “near” – I should have been clearer in saying “.dds image data directly”. My point stands: that ~300 line implementation file is far simpler than the code to unpack .jpgs or .pngs. I would probably even pick only one DXTn format (maybe another for normal maps), which would simplify that code even further.
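
        The core of it is roughly this much; a compressed-only sketch that trusts the file and skips error handling (the S3TC constants are written out here because they come from the EXT_texture_compression_s3tc extension, and on Windows glCompressedTexImage2D itself has to be fetched through the extension mechanism):

        #include <cstdio>
        #include <GL/gl.h>

        #define GL_COMPRESSED_RGBA_S3TC_DXT1_EXT 0x83F1
        #define GL_COMPRESSED_RGBA_S3TC_DXT3_EXT 0x83F2
        #define GL_COMPRESSED_RGBA_S3TC_DXT5_EXT 0x83F3

        // Load a DXT1/3/5 .dds into the currently bound GL_TEXTURE_2D.
        bool LoadDDS(const char* filename)
        {
          FILE* f = fopen(filename, "rb");
          if (!f) return false;
          unsigned char header[128];              // "DDS " magic + 124-byte header
          fread(header, 1, 128, f);
          unsigned int height = *(unsigned int*)&header[12];
          unsigned int width  = *(unsigned int*)&header[16];
          unsigned int mips   = *(unsigned int*)&header[28];
          unsigned int fourcc = *(unsigned int*)&header[84];
          unsigned int block;                     // bytes per 4x4 block
          GLenum format;
          switch (fourcc) {
            case 0x31545844: format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT; block = 8;  break; // "DXT1"
            case 0x33545844: format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT; block = 16; break; // "DXT3"
            case 0x35545844: format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT; block = 16; break; // "DXT5"
            default: fclose(f); return false;     // uncompressed formats not handled here
          }
          if (mips == 0) mips = 1;
          for (unsigned int level = 0; level < mips; ++level) {
            unsigned int size = ((width + 3) / 4) * ((height + 3) / 4) * block;
            unsigned char* data = new unsigned char[size];
            fread(data, 1, size, f);
            glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width, height, 0, size, data);
            delete[] data;
            if (width  > 1) width  /= 2;
            if (height > 1) height /= 2;
          }
          fclose(f);
          return true;
        }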

  11. Abnaxis says:

    Also, experience has taught me that trying to monitor performance by reading a continuous spew of text in a console window is really aggravating.

    Why not export the data to a text file? Or keep a running average of what you want to know about? Let the thing run for thirty or so seconds and spit out the results to take the noise out.

    1. Shamus says:

      It’s really useful to be able to see the changes interactively. Point the camera at something (or hold down a button) and see how the numbers change.

      1. Rick C says:

        Do you have a dual-monitor setup? You could write a separate program that would sit on the other monitor and act as your debug info window, and have the two programs talk via sockets.

        1. Darkness says:

          One of the old-style Unix (now Linux) methods was to simply have an open xterm or a separate terminal connected. Indeed, that was how I debugged getting PixelCity running on Linux. I have no idea why it is so hard on Windows to do basic multi-tasking-based debugging.

      2. Abnaxis says:

        I know! You can put a progress bar that goes up and turns red as the Qt time goes up!

        Wait…

        1. WJS says:

          You mock, but this was pretty much what I was thinking (except obviously not in the viewport!). Every n frames, print however many characters to StdOut followed by a newline. Boom, progress bar across your terminal, changing in realtime (with history visible too). You could insert control characters to colour it if you feel fancy, although that’s probably a waste of time.
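
          Something like this, say (qt_ms here is a hypothetical name for whatever number is being watched):

          #include <cstdio>

          // One row per report; the longer the row of '#', the worse the number.
          void PrintBar(int qt_ms)
          {
            int chars = qt_ms / 10;        // one character per 10 ms
            for (int i = 0; i < chars; ++i)
              putchar('#');
            putchar('\n');
          }

          Call it once a second and you get a live bar chart scrolling up the console, history included.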

  12. If you’re willing to not embed the gui widgets into the GL world, you can probably put them off to the side in a non-GL area of the screen for much cheaper. For a game you may still need a GL-based UI, but this gives you cheap dev UI, if it works. I think the core of the performance problem is trying to mix the QT and the GL; I’d imagine it renders a frame, grabs it as a frame, feeds it to itself and renders the widgets on top via bit bashing, then feeds the frame back out to the graphics card, and that’s why it’s so slow. Don’t put any widgets on the GL scene and you’re going down a more direct codepath.

    1. Nick says:

      I suppose you could run both the Qt app and the rendering window taking up a given % of the screen rather than fullscreening them

    2. Kevin Reid says:

      I agree with this suggestion. Move the widgets and 3D scene to non-overlapping areas so that QT does not need to redraw its widgets whenever you draw a frame. Sure, having a rectangular sidebar rather than overlays will be ugly, but at least you can get back to hacking octrees and consider better UI incrementally.

    3. WJS says:

      I find this questionable. If the overhead was pulling the rendered frame to the CPU and sending it back each frame, 1: I’d expect that to be even slower, and 2: the cost for doing that would be pretty much constant, whereas adding more controls slowed it down even more.

  13. Kronopath says:

    I’m just going to point out SFML since no one else has yet. It’s like, a more modern, C++-oriented version of SDL.

    1. Arvind says:

      Seconded. It’s worth a look.

      1. Jake Albano says:

        Hey, I have one of your games on Desura! Do you use SFML in your stuff?

        I’m thirding the recommendation for SFML also, although Shamus probably doesn’t want to switch frameworks all the sudden.

  14. Zak McKracken says:

    Hmm… crazy idea from a not-used-to-making-UIs guy:

    Would it be possible to have the interface running at a lower framerate than the rest of the graphics? Factor 10 would maybe be a bit much, but a factor of 5 or so? This means the 3D part would have to render 5 times in one loop before it lets QT update the UI. Or… multithreading?
    Almost any current CPU has several cores these days. Wouldn’t it be possible to have the UI run on a different thread, and thus unable to slow the rest down (and vice versa — if the 3D chokes for a second because some huge chunk of new geometry arrived for some reason, the UI wouldn’t mind). Of course that’d mean you’d have to wrap your head around multithreaded shared-memory parallel programming (I won’t claim I know a lot about that), but as I see it that’s the way to go if you want fast execution of stuff on modern CPUs anyway.

    On the upside: I’d be really keen to have you explain parallel programming in some detail here, because I’ve been meaning to wrap my head around that for a while.

    1. X2Eliah says:

      Hmm. Usually you want your interface to be as responsive as possible even if the game itself is lagging. Quite simply, there’s nothing more aggravating than laggy interface/GUI to the end-user – doubly so if it is mouse-based!

    2. Robyrt says:

      Running the interface at slow speeds (say, the laggy speed of your main graphics engine) is fine for most purposes – nobody is shocked when Microsoft Word or Angry Birds drops a couple frames – but for the kind of game that really wants 60fps, you want the user to feel like their input is recognized instantaneously. Rock Band is the poster child here: because your brain is already playing back the song from memory, even a single skipped frame is noticeable.

    3. Shamus says:

      Sadly, this isn’t possible. I don’t control when those interface bits get drawn. Qt tells me to draw the scene. When I release control back to Qt, it’s got interface stuff drawn over it. Even if I found some way to prevent Qt from drawing the stuff, it would just result in the interface not showing up. :(

      1. Zak McKracken says:

        So I understand Qt draws over the 3D stuff, after having refreshed that. So if you re-wrote your 3D routines to draw 3 frames consecutively every time they’re called, that would result in them overwriting the interface, which would then only be visible 1/3 of the time? Annoying.

        There should really be a way (for every piece of software, actually) to have a multithreaded UI. I don’t want the 3D drawing or data organizing, AI, compute-heavy stuff to freeze the mouse pointer, and I don’t want the game to stutter every time I move a window around. Oh wait, Qt does multithreading!
        http://doc.qt.nokia.com/4.7-snapshot/thread-basics.html
        (why is this a nokia page? It’s what you find for “qt multithreaded GUI” on Google)

        So … I don’t claim I understand all this, but it would seem that it should be possible to have the 3D rendering run in parallel to the UI. That’s not helping if the UI alone will limit your frame rate. Except if there was a way to have UI and openGL run asynchronous and then overlay the current state of the UI to the current state of the openGL render sixty times per second. Or something.

        Which begs the question: How do intense graphics programs handle this? I somewhere caught the term “non-blocking UI”, and I’ve seen those in action. Including opulent Qt UIs. I’m guessing there must be a solution. But I’ll shut up now and let the grown-ups talk.

        1. Bryan says:

          > why is this a nokia page?

          Because Qt was originally a Nokia thing; maybe it still is, I’m not entirely sure.

          > Except if there was a way to have UI and openGL run asynchronous and then overlay the current state of the UI to the current state of the openGL render sixty times per second. Or something.

          Something that’s sort of the inverse of ARB_render_to_texture? Hand the Qt code an instance of whatever the Qt equivalent of a pixmap is (a bitmap on windows, but in X a bitmap is two-color; hence “bit” :-) ), have it render there in its own thread, and grab a copy every once in a while?

          Not sure if that’s possible, but it’d be a nice solution if it was. It’s *almost* possible with gdk-x11 (haven’t tried it on gdk-windows), because a GdkPixmap “is” a GdkDrawable, and *most* of the Gtk controls have draw “methods” (though it’s C, not C++) that take a GdkDrawable argument. What I don’t know is whether it’s possible (in Gtk) to force the library to always draw to something other than the screen. It’s definitely possible to dump all the drawing (well, the Gtk main loop) into another thread though.

          1. Eroen says:

            >> why is this a nokia page?

            >Because Qt was originally a Nokia thing; maybe it still is, I'm not entirely sure.

            Qt was originally a TrollTech thing, bought by Nokia a couple of years back when they wanted to use it for their New and Shiny smartphone platform “Maemo”. Afaik they released two models of their Ipad predecessor (w/o touchscreen) before they changed their minds and determined Windows to be the go-to platform for mobile devices.

            1. Bryan says:

              Oh, right, duh, I forgot about Trolltech from back when I looked at Qt the first time. Thanks. :-)

          2. Bryan says:

            Just had another idea — under X11, it’d be possible to duplicate whatever the composite managers are doing. Reading some docs, it appears they’re using the Composite extension (…er, duh, right, I forgot that existed).

            Reading docs of that extension, it lets you redirect an entire (X11) window’s rendering to an off-screen buffer. Then you get the pixmap ID of that buffer, and from there you can get at the colors being drawn. (Or use a GLX call to turn the X pixmap into a GLX pixmap, and then use the EXT_texture_from_pixmap extension to turn that into a GL texture, thus keeping the pixels on the server instead of pulling them back down to the client.)

            Not sure if the performance would be as high as what’s needed here, but this is how composite managers put arbitrary (X11) windows’ contents onto GL surfaces and wobble them around, so it should be.

            The problem, obviously, is that it doesn’t work on windows. So it’s a complete non-starter in this case. :-/ But if someone is wondering how to get it working on a new-enough X11, it’s certainly possible…

  15. Rick says:

    Is it possible/easy to drop in that super easy to use console that you used in Project Frontier? Would that let you move forward with a basic interface?

    At least then you’d be able to focus on the fun and creative parts, which is why you’re doing this in the first place. To get this out of your system so you can get back to writing.

    Then when you get the itch again you could look at QtQuick 2.0, Freetype-GL, multi-threading or any other suggestions that’ve come up.

    All this could be wrong… I’m a coder but in web development so while I thoroughly enjoy reading about it, your development world is very different to mine.

  16. Mad says:

    Another possible source of the giant performance problem might be that you kind-of paint two frames for one.
    In the Qt event loop there is probably something like OpenGL::beginPaint/endPaint(). Since you work outside the normal event loop, maybe that is a problem for you? If you tried moving your code into onPaintEvent of your widget, maybe that would improve things?

  17. Benabik says:

    You might want to try the Qt 5 alpha. Qt 5 has redone the entire drawing stack on OpenGL. It may provide better performance for this situation. The original plan was for a beta around now and the final in a couple months.

    1. Mr Guy says:

      Shamus: “I just want something that works and that’s easy. I don’t want to spend huge amounts of time working out the kinks in poorly documented or hard to use stuff.”

      You: “Try something that’s still in Alpha. I hear it’ll go Beta soon.”

      I don’t doubt you that the newer version has fixed a bunch of stuff. But there’s nothing more antithetical to the idea of “I want it to just work, seamlessly” than using alpha code…

      1. Benabik says:

        I didn’t say it was a perfect answer, just a possible one. A few downloads and compiles would give a hint whether it would be easier to work with Qt5 or build things from scratch. Given the option of working with a beta version of the library I’m already working with or re-implementing from scratch, I would personally see how painful the beta actually is before just wandering off elsewhere. Of course, it would probably take me far longer to do all the font, image, GUI, etc. handling than it would take to compile a version of Qt. The time tradeoff is different for someone who’s done these things already.

        As far as “easy to use”, it’s supposed to be mostly source compatible with Qt4 so there’s little effort in moving there. And it already has pretty complete documentation. Of course, all that said, it appears that the release has been pushed back to mid-August instead of mid-June. There is a difference between a beta that’s supposed to go stable next month and one that has a while to go.

  18. Glen says:

    Wolfire use Awesomium which seems to work for them and would probably give you more flexibility than even QT. Might be worth looking into?

  19. nmichaels says:

    “But that means shopping for a suite of image loading, interface-drawing, font-reading tools.”

    SDL_ttf and SDL_image cover two of those. I don’t know if you’re planning on making a game with interface elements out of this, but those two plus the wonder-console you used on project frontier might suffice.

    SDL_Image and _ttf are extra DLLs (on Windows; on most Linux distros they’re in the package system) but that’s true for whatever you do and you’ve said before that you liked SDL.

    1. Kagetsu says:

      Yes, the official add-on libraries (mixer, image, ttf, net) all slot in nicely with SDL and provide very nice basic tools which you can use and build on to accomplish things, which is why I love working with SDL.

      The problem is that they utterly fail for part two, interface drawing. I rolled my own checkboxes and dropdowns when I needed them, but I do recall Shamus stating he’s looking for an already implemented system, and most of the all encompassing ‘UI’ libraries I’ve tried out with SDL haven’t been up to snuff. It was less hassle to write my own because at least then I knew how they worked and could fix them when things went wrong.

      1. Bryan says:

        There are a couple of SDL addon libraries for UI, but none of them (last I looked, maybe a year ago) looked terribly ready for prime time. Unfortunately. :-(

        Looking again…

        Not sure if this gets in your way (in terms of trying to take over the main loop), and I’m not sure about FreeType and Boost, but it looks a little better baked: http://gigi.sourceforge.net/

        This one might work as well; it at least *looks* simple: http://www.antisphere.com/Wiki/tools:anttweakbar

        This one might be too simple: http://www.libsdl.org/projects/GUIlib/

        This one also looks reasonable, perhaps, though the built-in font stuff might not work terribly well; might be worth a shot: http://www.zuzuf.net/SDL_sgui/

        All found via the “SDL libraries” page:

        http://www.libsdl.org/libraries.php?order=name&category=-1&completed=0&os=-1&match_name=&perpage=-1

        and looking for stuff that mentioned “ui”, and which was at 100% on at least both Linux and Windows.

  20. Mark says:

    It might be prudent to step back and review your requirements. Based on this series, I’m going to take a guess at what you need:

    Hardware-accelerated 3D rendering so the project can exist at all
    Standard UI widgets so you can control parameters without having to implement them from scratch

    And what you don’t strictly need but would really like:

    A short dependency chain
    Stable API bindings in a language you already know
    Uses OpenGL rather than DirectX

    It seems like GLUI would be the go-to library for doing what you want: just a very simple and lightweight set of GUI widgets designed to interface well with GLUT.

    Upthread, XNA was mentioned. XNA is a very well-reputed library, a relatively abstracted layer around DirectX that can be called from any CLI language (though personally I was none too impressed by C++.NET when I used it), and it’d be well-supported on Windows systems. However, it does things a bit differently than in the OpenGL world, and I don’t know whether it has any facilities for mixing standard UI widgets the way you want.

    In a comment on an earlier article, I brought up the possibility (and another commenter agreed) that WebGL might be a suitable candidate, if you’re willing to abandon the stable world of C++ and OpenGL in favor of the unique challenges of a Javascript library whose implementations are still in development. The browser, after all, has always been all about standard GUI widgets; overlaying a WebGL Canvas with some plain HTML using the CSS opacity property should work. Performance might be a concern; even with the optimizing compiler and hardware-accelerated rendering that Chrome brings to the table, it’ll still be slower than native, though I don’t expect it’ll be too slow on a modern device. One advantage of this approach would be that it’d be easier to reach a wider audience.

    You certainly don’t lack for options. I think GLUI will be your best bet if I’m understanding its capabilities correctly.

    1. X2Eliah says:

      The whole point of this article series was, more or less, related to Shamus not liking to deal with unstable, partial, ill-documented, slightly random, libraries. So…
      “in favor of the unique challenges of a Javascript library whose implementations are still in development.” I strongly doubt that will sound appealing to him.

    2. lethal_guitar says:

      Not to mention that WebGL is based on OpenGL ES, not on the normal GL spec. Which very likely means additional work. In GL ES, there is no support for the so-called “fixed-function pipeline” – which Shamus is making use of, as far as I can tell. For example, you have to write and integrate shaders in order to render even the simplest scene; you cannot just dump your geometry etc. to the graphics card and expect to see anything. You also have to re-implement the OpenGL matrix stack if you wish to use it, and so on.

      I really think WebGL is an awesome and promising technology. I’ve seen some cool things done with it and the additional work is certainly manageable. But it would also be another “distraction” getting in his way – apart from learning a new programming language ;)

      1. Jamie Pate says:

        basic shaders are pretty damn simple, you could copy/paste the example code and have it running ‘like a fixed pipeline’

        fixed pipeline NEEDS TO DIE :)

        1. X2Eliah says:

          Why does it need to die?

        2. lethal_guitar says:

          Well sure, very basic shaders are almost one-liners. But if you just want very basic lighting (e.g. Gouraud shading, as provided by the fixed-function pipeline), it already gets a bit more complicated. Sure, you can find code for that all over the internet. And if you’re in for a little challenge, you could certainly take the formulas from your favorite 3D graphics book and implement it on your own. Still, it is more work to do.

          Of course, if you're going for shiny, modern, state-of-the-art graphics, everything will be fully shader-based anyway. But as always, it depends on your goals. Maybe I'm missing something, but I don't think Shamus is trying to create CryEngine 4 here ;) And basic graphics programming tends to be easier and more straightforward when you can make use of the “old school” pipeline for certain features, at least in my experience.
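          For the curious, here is a rough sketch (not Shamus's code) of the kind of shader pair being described: per-vertex diffuse lighting, roughly what the fixed-function pipeline gives you with a single directional light. The uniform and attribute names (u_mvp, u_lightDir, a_position, …) are made up for the example, and since GL ES has no matrix stack you supply the matrix yourself.

            // Minimal GLSL (ES-flavored) shaders doing per-vertex, Gouraud-style
            // diffuse lighting. All names here are placeholders for the sketch.
            const char *vertexSrc =
              "uniform mat4 u_mvp;          // you supply this; no matrix stack \n"
              "uniform vec3 u_lightDir;     // normalized light direction       \n"
              "attribute vec3 a_position;                                       \n"
              "attribute vec3 a_normal;                                         \n"
              "attribute vec4 a_color;                                          \n"
              "varying vec4 v_color;                                            \n"
              "void main() {                                                    \n"
              "  float diffuse = max(dot(a_normal, u_lightDir), 0.0);           \n"
              "  v_color = a_color * (0.2 + 0.8 * diffuse);                     \n"
              "  gl_Position = u_mvp * vec4(a_position, 1.0);                   \n"
              "}                                                                \n";

            const char *fragmentSrc =
              "precision mediump float;                                         \n"
              "varying vec4 v_color;                                            \n"
              "void main() { gl_FragColor = v_color; }                          \n";

          Compile each with glCreateShader/glShaderSource/glCompileShader, link them into a program, and update u_mvp every frame from whatever matrix code replaces the old glMatrixMode stack.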

  21. hunguptodry says:

    How does Blender do it? It runs on pretty much anything you throw at it, and it has plenty of UI. IMHO it is also quite fast.

    1. Abnaxis says:

      I’m think Blender is written in python (at least Blender mods are…). However they do it, it probably doesn’t translate into C++.

      1. Zak McKracken says:

        Naa, it’s written in C++, but it has a python API.
        Meaning: You can use python to make blender do your bidding and rearrange some things. Some functions even come in the form of python plugins, but everything that uses a significant amount of computation time is not in Python. That’d be horribly slow.

        I would, however, like to know how they keep the UI and the main graphics window from slowing each other down, and I’m pretty sure it has to do with multithreading. I am also sure that the Blender UI code is not used anywhere except in Blender. At all.

    2. Simon Buchan says:

      Blender’s UI is OpenGL drawn in C++, but with layout and at least some interaction in Python bindings.

  22. Piflik says:

    Would it be possible/feasible to use a 3D GUI? Just simple polygons in the scene, with textures on them. Dynamic text wouldn't really work then, but you could do without the Qt drawing completely.
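    With the fixed-function pipeline already in use here, that boils down to switching to an orthographic projection and drawing a textured quad in screen space after the scene. A minimal sketch (the texture id, sizes and coordinates are placeholders, not anything from the project):

      // Sketch: draw a pre-made GUI texture as a 2D overlay on top of the scene.
      // gui_texture, screen_w and screen_h are placeholders for this example.
      void DrawGuiOverlay(GLuint gui_texture, int screen_w, int screen_h)
      {
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glOrtho(0, screen_w, screen_h, 0, -1, 1);   // 2D, y grows downward
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();

        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, gui_texture);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(10, 10);     // a 256x128 panel at (10,10)
        glTexCoord2f(1, 0); glVertex2i(266, 10);
        glTexCoord2f(1, 1); glVertex2i(266, 138);
        glTexCoord2f(0, 1); glVertex2i(10, 138);
        glEnd();
        glEnable(GL_DEPTH_TEST);

        glPopMatrix();                              // restore modelview
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();                              // restore projection
        glMatrixMode(GL_MODELVIEW);
      }

    Dynamic text is the awkward part, as noted: you would have to bake a font atlas into a texture and assemble strings quad by quad.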

  23. MichaelG says:

    Sent you email. I really want to work with you on these platform issues.

  24. OK, this is going to sound stupid, and I'm not even a programmer. But couldn't you take Qt out of the loop in this fashion:

    a. Do an initial pass of the Qt stuff.
    b. Take a picture of what it made. So you've got just a .jpg or .png or something that looks exactly like those Qt things, but is just a picture.
    c. Iterate your loop leaving Qt out of it, just sticking the picture there where the genuine menu/whatever should be.
    d. Have step 4, "Window", check for inputs that could possibly be relevant to the Qt stuff, like mouse-clicks on the piece of surface area occupied by that picture, or keyboard presses that are *not* basic controls used to move around and stuff.
    e. If something like that happens, invoke the Qt stuff again and tell it about the input. Let it do its thing. Do "b" again.

    So basically, instead of part of the main loop, you make Qt something to be invoked when needed and fake it the rest of the time. Nobody's going to care if things slow down while they're using the menus; I get the impression this is not the kind of game where you'll continue moving while you do that kind of thing, so it won't need to change the rest of the picture when Qt is being needed for real anyway.
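    For what it's worth, steps (b) and (e) map fairly directly onto Qt's own API: you can ask a widget to paint itself into a QImage, upload that as a texture, and just draw the texture every frame until the next interaction. A sketch, assuming the overlay is an ordinary QWidget (names here are invented for the example, plus your usual OpenGL headers):

      #include <QImage>
      #include <QPainter>
      #include <QWidget>

      // Sketch of step (b): render the Qt overlay into an image once, upload it
      // as an OpenGL texture, and only call this again when the overlay changes.
      // 'overlay' and 'existing_texture' are placeholder names for the example.
      GLuint CacheOverlayTexture(QWidget *overlay, GLuint existing_texture)
      {
        QImage image(overlay->size(), QImage::Format_ARGB32);
        image.fill(0);                      // start fully transparent
        QPainter painter(&image);
        overlay->render(&painter);          // let Qt paint its widgets once
        painter.end();

        // OpenGL wants RGBA byte order; Qt's ARGB32 needs red/blue swapped.
        // (Depending on your texture coordinates you may also want .mirrored().)
        QImage gl_image = image.rgbSwapped();

        if (existing_texture == 0)
          glGenTextures(1, &existing_texture);
        glBindTexture(GL_TEXTURE_2D, existing_texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, gl_image.width(), gl_image.height(),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, gl_image.bits());
        return existing_texture;
      }

    Step (c) is then just drawing that texture as a screen-space quad each frame, which costs next to nothing compared to letting Qt repaint the whole overlay.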

    1. Jamie Pate says:

      he already mentioned that Qt 'gives' him a paint event, so he really doesn't have that much control :(

      1. Thought he said someone had pointed him towards, and he had decided to use, an approach that let him avoid handing everything over to Qt.

  25. burningdragoon says:

    “I might as well go use Visual Basic. (No, not really.)”

    Hehe, made me laugh there.

    1. Tharwen says:

      Would you believe there are still people who prefer Visual Basic to modern languages?

      No, I don’t get it either…

      1. BvG says:

        Visual Basic has several advantages over all the other languages offered:

        1. Familiarity: If you underestimate the inertia of man, you're gonna have a bad time (see also Shamus's requirements about GUI SDKs)
        2. Draw before you program: Drawing your GUI as in MS Paint is fun, hands-on, and non-literate. If a programmer does things visually, he'll hate all other approaches with gusto
        3. Proven to work: Unlike more modern offerings, VB has had most of its crappy random bugs hammered out
        4. Easy to read: Unlike most other languages, VB (and its predecessors, the xTalk type of languages) is easy to read but hard to write. This has the advantage that less documentation is needed, and every programmer hates documentation
        5. It's the future: More hands-on (and also less complex) approaches to programming crop up now and then, and are laughed at by existing programmers. At some point in the future one of those languages will overcome the C mindset and be the new dominant way of "how it's done" (not that that new approach will be VB, but it will probably be more similar to it than to C). Think of it like GUI vs CLI: sure, CLI is faster if you know what you're doing, but that's exactly its largest drawback – you need to know what you're doing (you don't need to know what you're doing to enjoy a GUI, though).

        1. Tharwen says:

          Just to be clear, I count Visual Basic .NET as a ‘modern language’.

        2. WJS says:

          I reject the common myth that GUI is inherently better than CLI. The comparisons I’ve seen always cheat by comparing someone who has no idea about the console to an expert with the GUI. To someone who isn’t an expert with the GUI, CLI is as easy to pick up as the GUI is. Or have you never seen a complete noob blundering their way around a graphical interface?

  26. Loonyyy says:

    I hope he finds a way around the problem. This and Project Frontier have to be some of my favourite programming related things to read.

  27. Jamie Pate says:

    Have you thought of not giving Qt its slice (as often)? (Seems like Qt gives *you* a slice, so nvm.)

    What about running the Qt graphical elements in a separate window?

    I know in Win32 it's pretty trivial to embed any window (even OpenGL) in another, so you can have UI elements framing the OpenGL stuff. Transparent HUDs are overrated anyway!
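    The Win32 side of that really is small; roughly something like this, with error handling omitted and the window handles standing in for whatever the real program has:

      #include <windows.h>

      // Sketch: make an existing OpenGL window a child of a plain Win32 frame
      // window so that ordinary controls can sit next to the 3D view.
      // 'gl_window' and 'frame_window' are placeholder handles for the example.
      void EmbedGlWindow(HWND gl_window, HWND frame_window)
      {
        SetParent(gl_window, frame_window);

        // Strip the top-level decorations and mark it as a child window.
        LONG style = GetWindowLong(gl_window, GWL_STYLE);
        style = (style & ~(WS_POPUP | WS_CAPTION)) | WS_CHILD | WS_VISIBLE;
        SetWindowLong(gl_window, GWL_STYLE, style);

        // Dock the GL view on the right half; the left half is free for widgets.
        RECT rc;
        GetClientRect(frame_window, &rc);
        int half = (rc.right - rc.left) / 2;
        MoveWindow(gl_window, half, 0, half, rc.bottom - rc.top, TRUE);
      }

    After that, the GL window repaints on its own schedule and the controls repaint on theirs.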

  28. Cass says:

    Has anyone managed to compile a working binary for Project Frontier yet? I desperately wanted to run around in it, but the only binary that seems to be out right now (the purple trees one) tells me I’m missing a dll when I try to open it. :(

    1. Atomfullerene says:

      There’s a compiled version on the wiki but last I checked it still has flickering grass and purple trees.

      https://bitbucket.org/shamusyoung/frontier/wiki/Precompiled

      1. Bryan says:

        This works on Linux, at least for me:

        https://bitbucket.org/bryankadzban/frontier

        (My fork of Shamus’s code.)

        I know I fixed a few bugs in the earlier code here, as well, which *should* have affected all the Windows versions, unless MS’s C++ compiler does weird things like add padding to arrays so you can run off the end of them, and nvidia’s Windows version of Cg does weird things like treat two different enum values as the same. :-) Most of these bug-fixing commits were followed up by a comment on the “project frontier: source” post.

        However, I don’t know if I screwed up the windows code when doing this. Might be worth a shot, if you have a compiler, and the other packages available?

  29. I feel your pain. This may be a stupid question: Can you make Qt manage two windows, one with the 3D stuff and one with your GUI, and then set the priority of the GUI really low?

    Off-topic, website stuff: I’m sorry, but I’ve put Adblock back on. The ads are just too obnoxious. When I go here I get your banner and the first paragraph of text on your most recent post; that loads too fast for me to notice. And then I get to wait for something like five seconds while the ads load; during that time I can’t scroll, can’t go to another tab, nothin’ – the browser locks up. I apologise for, in effect, freeloading your site, but this has got sufficiently annoying that, in the end, I’d rather be a freeloader than put up with it. I post this in case it’s something fixable.

    On the off chance that this is something you can do anything about, and also care about: I’m using an old Mac with Firefox 3.6.16. There are outside factors limiting my ability to upgrade.

    > make due

    Make do. As in, “It’ll have to do; make it.”

  30. Steve says:

    “what was the biggest hassle in porting it?”

    It’s not on github so I can’t (as) easily fork and pull request it.

    1. Tharwen says:

      But you can download the source as a zip.

      1. WJS says:

        I hope that’s sarcasm. The idea that anyone would seriously think that a zip of the source is comparable to proper version control is scary.

  31. Steve says:

    Native controls on a rendering window always seem to have issues. My advice would be to use an actual Win32 window with your controls on one side and your rendering port on the other. I know it breaks immersion, but I would only use this for the editor etc. For the in-game menu I would use triangles/quads; it's the only way I've seen to get decent performance.

  32. Chris says:

    As someone who has spent the last ten years of their professional life working exclusively on developing a user interface platform… I know what you're up against. Doing this stuff well is non-trivial. Furthermore, it's hard to apply a business model to what you want, so nobody out there is really investing in making the multi-platform experience better. It sucks, I know.

    Can I make a suggestion though? You're a hobbyist. You're doing this because you love it. And we all enjoy reading about it. Don't ruin what should be an enjoyable experience for yourself by forcing yourself down the multi-platform rabbit hole. Pick a well-proven development environment that meets your needs and just run with it. Ignore the people who complain about not being able to run it on this or that. If you were shipping software with a different goal, then this would be worth spending time on. But you are one person, with a family, and limited energy. Spend it on the things that are rewarding.

    What you choose is up to you. I personally really enjoyed doing development on XNA. True that limits you to Windows, Xbox, and Windows Phone, but those are pretty fun platforms. Also true that XNA doesn’t solve your UI controls problem, but the core set you need for game development isn’t too hard to add given that the other traditional troublemakers are well in place. I might even have some code sitting around I could dust off for you.

    1. Victor says:

      I have to say I agree: why does this have to be multi-platform? The aim of the project, as I understood it, was for you to look at certain algorithms and see how they apply to engine design and procedural generation. However, by setting the goal of having a multi-platform project, you are adding a lot of work and worry overhead, and I'm not sure I understand what the benefits are.

      I can understand why you would want this to be available for multiple systems, so that (potentially) more people could look at/play with the project (if you intend to make it public). However, I think any reasonable human being would understand if you decide to stick with one platform, and concentrate on the aspects which interest you.

  33. Ethan says:

    Is this the end of the series? I panicked. Then I saw that this post is from… today.

    Derp.

  34. Bryan says:

    As far as what the biggest pains were when porting: Really, not all that much. But that’s because I’d ported the earlier “terrain” project, and pixelcity, both, already, so I knew what I was in for. :-)

    The Cg stuff was causing problems for a while, but reading their docs and making some temporary changes to simplify the shaders sorted that out reasonably quickly.

  35. silver Harloe says:

    Just a reaaaally dumb thought: but try turning off transparency on the Qt interface and see if it renders faster?

    1. silver Harloe says:

      (I describe it as dumb because I figure you already tried it)
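      (If anyone wants to try it, the knobs in question are roughly these; 'overlay' is just a stand-in for whichever widget draws the HUD, and whether it helps depends on what is actually slow.)

        #include <QPalette>
        #include <QWidget>

        // Sketch: opt the overlay widget out of per-pixel translucency and give
        // it a plain opaque background instead. 'overlay' is a placeholder name.
        void MakeOverlayOpaque(QWidget *overlay)
        {
          overlay->setAttribute(Qt::WA_TranslucentBackground, false);
          overlay->setAutoFillBackground(true);
          QPalette pal = overlay->palette();
          pal.setColor(QPalette::Window, Qt::black);
          overlay->setPalette(pal);
        }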

  36. Chris Rasmus says:

    I got on to Mr. Goodfellow at Sea of Memes for the same kind of thing. I really don’t understand why, like Chris above said, someone with limited time and energy (i.e., this isn’t their full time job) is working on something that’s their *hobby* (i.e., for FUN) and then beats their head against the wall trying to make it absolutely 110% perfect in every way (i.e., works on every platform on earth in every language).

    Figuratively speaking.

    Doesn’t common sense say to pick a fun, easy, familiar to develop on platform and work till completion…THEN port?

    Shamus is awesome. Mr. Goodfellow is awesome. I am so happy to have stumbled upon both of their sites. But dangit’, I’m sure I speak for a lot of us when I say I’m scared you’re going to give up if you don’t do what many have already said: K.I.S.S.!

    :P

    Take care.

    1. Eagle0600 says:

      Porting something is even more difficult after you've already built it. It's the way most companies work, but most companies have shit ports.

      1. Chris says:

        Totally agree. If you know you’re going to port, it’s better to think about it from the beginning. However Shamus is not a company. He is a dude, with passion and unclear goals as to what he’s going to use any of this for. Simplicity FTW

  37. Darkness says:

    For those of you who are NOT backers of Wasteland 2, the latest update stated that they are using Unity for the engine. Of course, my knee-jerk reaction was, "But you said Linux. I wouldn't back without it!" Reading further down, it was revealed that they were getting source code, so a port to Linux was in their plan. Then Unity (whoever they actually are) said they were working on a Linux port and gave them access to the alpha.

    Still not at Shamus's level, since he is doing engine work, but still interesting for those for whom Unity will suffice and who want cross-platform rather than Windows-only.

  38. Neil Roy says:

    Most games I have seen don't really have a GUI at all, to be honest, or at most a minimal one. You generally have a couple of different menus, then it's all keyboard and mouse, especially in 3D. You like to use Minecraft as an example; well, how much of a GUI do you see on screen in it? Aside from basic menus, which don't require a GUI system, it doesn't use one.

    I like my Allegro 5 library. It doesn't have a GUI (Allegro 4 does, but it's a little dated and ugly), but it has everything else I need, AND I can statically link it so you don't have a bunch of DLLs cluttering your folder.

  39. Richard says:

    I know this is really, really late to the party, but:

    The slow bit is the font.

    It’s the act of laying out and rasterising the text each frame that slows the whole thing down. For text that doesn’t change often, you can tell Qt to pre-rasterise it, then just draw it. This is roughly an order of magnitude faster.

    See QStaticText for more info.
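    A sketch of the difference, in case it helps anyone hitting the same wall (the member and function names are made up; the point is that the layout work happens once, not every frame):

      #include <QPainter>
      #include <QPointF>
      #include <QStaticText>
      #include <QString>

      // Sketch: lay the text out once with QStaticText and reuse it every frame,
      // instead of re-laying-out and re-rasterising it in each paint event.
      // m_hudText would live as a member of whatever widget owns the paint event.
      QStaticText m_hudText;

      void rebuildHudText(int fps)            // call only when the text changes
      {
          m_hudText.setText(QString("FPS: %1").arg(fps));
      }

      void paintHud(QPainter &painter)        // called from the paint event
      {
          painter.drawStaticText(QPointF(10, 10), m_hudText);
      }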

  40. WJS says:

    It’s a shame that Qt didn’t work out, but I’m a little surprised at the “shopping around for libraries” part. You’ve been doing this for how long, and you still haven’t settled on things for that? I could have sworn that you had image loading in Project Frontier and fonts in Pixel City. Did what you used for those suck?

  41. James Rourke says:

    This is going to sound crazy, but you are dealing with a hidden statistical oddity… and I am starting to believe that Qt is right. (FYI, nice job getting through this monster app so far.)

    If you think of every game you have played, when you go to a serious menu, in 90%+ of them the game simply pauses. I can't remember, but maybe something like Dark Souls or a part of Fallout somewhere allowed you not to… but what a pain, unless you wanted that as the developer… and perhaps then Qt will say that is such a pain, I am going to slow it down a touch for the player.

    Qt is not messing around with the 'black' and 'white' side. It has an orthogonal approach that trades everything off at right angles to achieve physio (think metaphysical vs metaphysics): diamond-cut performance.

    If you want those simple, stupid menus?? The ones every game has to have, like a hit point and mana bar??? It is going to make you draw the damn things somehow manually, with transparency and layers, once (with raster algorithms perhaps). Then there will be only one uber-paint. At the 'right' angle (this is rarely discussed) can you actually do that… why not keep the mana bar in a thread as a window-overlay and slam it together like normal? Well, after forcing you to do everything in a single paint… you can slide through that right angle and actually do it.
