Experienced Points: Next-gen AI and Next-gen BS

By Shamus Posted Tuesday Mar 25, 2014

Filed under: Column

My column this week is a bit about why next-gen consoles probably aren’t going to bring us any next-gen AI. Although to be fair, it’s been a while since I heard that claim. I was hearing it quite a bit in response to the “The next-gen graphics don’t look so next-gen” comments when the consoles were first being announced.

And while I spent some time talking about strategy AI, I’m not sure how applicable that is to consoles. How often do consoles get complex, multi-layered, turn-based gameplay? They sure aren’t getting Crusader Kings or Civilization anytime soon.

This new gen is off to a strange start. Third-party titles have fled Nintendo. Sony’s console is less than six months old and they’re already rolling out new stuff for it. The XBone isn’t NEARLY the disaster I anticipated. That’s a good thing for the industry in the long run, although I wouldn’t have minded seeing the market rebuke Microsoft a little more firmly for their hubris and creepy approach to digital “rights” and privacy.

I haven’t been following the PS4 vs. Xbox narrative very closely, but every couple of weeks I see[1] a headline or forum post detailing how one console is beating the other. Some things never change, I guess.

 

Footnotes:

[1] Not that I bother reading the actual article.




78 thoughts on “Experienced Points: Next-gen AI and Next-gen BS”

  1. Chris says:

    The suddenness with which the Xbone has dropped from $500 to $450 indicates to me that sales are at least below MS’s expectations. Has a console price cut ever happened this early in the cycle?

    1. Doctor Broccoli says:

      I’m pretty sure the 3DS price drop happened in a shorter amount of time. It was a larger price drop too, from $250 to $170.

      1. Cody says:

        They also rolled out the 3DS XL relatively fast, and that fixed a lot of the problems people were having with the system.

      2. Chris says:

        Ah. Shows how much I pay attention to handhelds (they’re all horrifically uncomfortable to use in my Lana Kane-esque giant hands). Hopefully the price cut will help out. I’m with Shamus; a single dominant console isn’t great for gamers.

    2. kdansky says:

      That might just be a newer way of doing it: asking the first few early adopters for what is basically too much won’t stop them, and then you reduce the price to the point where it’s actually fair. They probably never planned to keep it that high.

      If I had a game on Steam, I’d totally do that: start it at $20, with a prepurchase 10% off (to generate hype and free marketing), then do a 25% sale after just one month, and a 50% sale after just three months.

  2. rofltehcat says:

    It is really sad that today’s (combat) AI seems so frigging stupid. Sure, as you pointed out, making them strong is easy, but strong and smart are completely different things. Why does the majority of today’s AI feel that bad? Did the publishers at some point just cut the AI budget in half, or freeze it while all the other costs exploded? Did they fire all their AI experts, or simply not give them trainees?
    I just want 2005 AI back :(
    I don’t even need it to be better than 2005 AI. Matching it would already be an improvement over today’s titles.

    1. Derektheviking says:

      I was actually shocked by how good the combat AI for Tomb Raider is, especially with its nicely blended animations. It might only be on “Hard” that you can really see it, though, since I don’t remember noticing it on my first playthrough.

      Although there are big concessions for gameplay (you are, after all, one person going up against hundreds) the basic actions seem “right”. Enemies will cower under suppressive fire, dodge sideways or dive back into cover when you aim at them, and flank in a full 3D environment at times (although they are a bit more reluctant to show this off). Their chat shows quite a lot of “depth” to the simulation. If you have the game, give it a shot on hard and see what you think.

    2. MadHiro says:

      I have a memory of watching a demo of Half-Life at GenCon in ’97. There were a bunch of soldiers in a room, and it was set so that it displayed what they were ‘thinking’. Lines plotted out where they were looking, what they considered threats, how they were hiding or moving, and what they were going to do. Fourteen-year-old me was very impressed.

      I haven’t seen any FPS AI that’s meaningfully more complex than what I saw there. So that’s what, almost twenty years ago and no progress that I can notice?

    3. straymute says:

      With the way today’s critics are, that would actually make sense from the publisher’s perspective. For example, in the original Gears of War, Dom’s AI was some of the worst I’ve seen. Every time a Berserker showed up you had to pray he wouldn’t get stuck again and make you lose the mission. Even some of the missions, like the car one, were almost broken outside of co-op because Dom couldn’t function.

      The game still got near perfect scores and I can’t remember many reviewers actually mentioning this stuff. Games like Ghost Recon, Rainbow Six, and Splinter Cell also received major dumbing downs of the AI, but again when review time came in it didn’t seem to matter. I don’t think most critics care about AI outside of some fluff in the previews so developers and publishers don’t either.

      1. Eruanno says:

        Good lord, the final boss fight of Gears of War 1 was impossible in single player because the stupid AI partner would run headfirst into the boss and get smacked in the face and thus fail the mission. Grrrr…

    4. Sleeping Dragon says:

      I wonder, could the combat AI deficiencies be related to the shift of shooter focus to multiplayer? For a lot of modern-day FPSes (and, I think, especially the ones that push the biggest numbers) single-player seems like something of an afterthought. Even what we do get is usually focused on spectacle, with linear or semi-linear levels; it’s probably easier to script the bots to utilise a given set piece in a specific way than it is to make an AI that would adapt to trench combat and the Statue of Liberty equally well. And at the end of the day, most of the playtime a title gets will be multiplayer, where things like weapon balancing are more important than AI, and level design can be much more open because it will be a human mind looking for creative ways to utilise it anyway.

  3. Nathon says:

    Pathfinding is pretty resource intensive.

    1. ET says:

      For armies, yes.
      For a single robot, or handful/squad?
      Not really.
      That’s pretty much computable on a baked potato, as long as you’re not using a brute-force…”algorithm”.
      (Hint: look up a real algorithm in a textbook, or online. :P )

      1. Nathon says:

        A* runs from polynomial to exponential time, depending on the heuristic. Dijkstra’s runs in n^2 with a naive implementation. Baked potatoes doing pathing on a big grid for a dozen different things will overcook pretty fast.

        I have looked up and implemented real algorithms.
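
        To make the cost concrete, here’s a minimal A* sketch in Python over a precomputed waypoint graph (the data layout and names are illustrative, not from any shipping engine). One query is cheap; the trouble is running it for dozens of agents, every frame, on big maps:

          import heapq
          import math

          def a_star(graph, pos, start, goal):
              """Minimal A*. graph: node -> [(neighbor, edge_cost)];
              pos: node -> (x, y) coordinates used by the heuristic."""
              def h(n):
                  # straight-line distance: admissible, keeps the path optimal
                  return math.dist(pos[n], pos[goal])

              open_set = [(h(start), start)]
              g = {start: 0.0}        # cheapest known cost to each node
              came_from = {}
              while open_set:
                  _, cur = heapq.heappop(open_set)
                  if cur == goal:     # rebuild the path by walking parents
                      path = [cur]
                      while cur in came_from:
                          cur = came_from[cur]
                          path.append(cur)
                      return path[::-1]
                  for nbr, cost in graph[cur]:
                      tentative = g[cur] + cost
                      if tentative < g.get(nbr, float("inf")):
                          g[nbr] = tentative
                          came_from[nbr] = cur
                          heapq.heappush(open_set, (tentative + h(nbr), nbr))
              return None             # goal unreachable from start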

        1. Cybron says:

          This. It’s not as cheap as you might think. Most games get around this by cheating on pathfinding, which is one of the many reasons scripted events and linear levels are so common.

          As Shamus vaguely alludes to (but later kind of steps away from) in his article, if you really want to, you can spend countless resources on any given task. The more resources you allot, the better your pathfinding can be. It’s just a matter of what you consider good enough.

          It just so happens that the bar for ‘good enough’ AI is pretty low. And that hasn’t changed this gen, as far as I can tell. You can’t put good AI in a trailer or in hype screenshots, after all. And since it’s rare for reviewers to spend much time with a game, just give them a few clever scripted moments and you’re golden.

          1. Shamus says:

            Another important point: As levels have gotten more linear and scripted, the complexity of pathfinding has gone down. Navigating a Doom level would probably be an order of magnitude more complex than pathing around (say) Mass Effect 2.

            Then again, actors have more complex states these days. Doom just had moving and shooting. Now we have moving to cover, suppressing fire, grenades, suicide charges, flanking, etc. Still, I’d say that makes the AI more complex to write, but I doubt it eats much in the way of CPU.

            1. Derektheviking says:

              It’s interesting that the pathing problem doesn’t seem to have progressed much in more open-world situations, though. I’m hard-pressed to see any major progression from Far Cry to Far Cry 2 to Far Cry 3* – maybe there’s a slightly greater chance of the enemies fanning out to cross a field to reach you in the latter, but it’s barely perceptible. All that I can see has happened is that the “base” (node-mapped?) portions of the game have gotten larger and more complex.

              You can see the drop-off in intelligence as soon as you lure an enemy outside. Suddenly, there’s no running from cover spot to cover spot. There’s no attempt to encircle the player. There’s no attempt to corner the player (although I suspect this effect is emergent from flanking AI anyway). There may be an attempt to spread out, but not in a co-ordinated way, and not in a way that is responsive to terrain, sightlines and cover. If the tricks used inside could be expanded outside in the new generation, I would be a very happy player.

              Because I am fairly happy with closed-area pathfinding. FEAR and the Hunters from HL2ep2 were the best times I’ve had fighting in a real-time game, although the only thing that’s come close recently was, as I mentioned above, Tomb Raider. The room to try is the wrecked bunker on your way to Alex where they’re lifting a generator. Every time I’ve played that fight it’s gone differently, with enemies jumping across to me on the second floor to close the distance, or dropping down and flanking, with weaker enemies behind the tank… about the only thing they don’t do is intelligently use their own covering fire, but since Lara is almost always outnumbered, proper AI use of covering fire would probably 1) get pretty boring and 2) be impossibly lethal.

              *with the exception of visibility detection, which was beyond horrible in FC1.

        2. ET says:

          Sorry about that;
          I should have guessed that a reader of Shamus’ site might already be somebody who works with this stuff.
          …and now I feel like a jerk, having not messed around with any of this since school. ^^;

          1. Nathon says:

            No worries; we’ve all been condescending at one point or another.

            Thanks Shamus, for attracting such a civil community.

        3. Df458 says:

          I agree, but couldn’t the developers avoid this issue by reducing the number of nodes checked?

          For instance, couldn’t the AI use a graph of predefined waypoints for pathfinding? I imagine that this would seriously reduce the number of checks made.

          Disclaimer: I’ve done a tiny bit of simple AI in the past, but I am definitely a noob.

          1. guy says:

            Nearly every pathing algorithm in use these days has predefined node graphs for a level, as I understand it. Probably with a bit of extra code tossed in to make sure they don’t try to walk through another character, but mostly they’ll be following nodes created when the level is compiled.

            Still, if you want to keep it from being clear the AI is following an invisible grid you’re going to need a lot of nodes, so it’s still pretty expensive.

            1. wererogue says:

              TBH most big games these days (in my experience) use navigation meshes that are generated from the level geometry – something akin to recast.

              Pathing is something that is usually carefully balanced – I’ve seen character budgets in busy areas limited by pathfinding a couple of times in recent years.

              Any real advance in pathfinding now, however, would be in better simulating the way that a human chooses to navigate. We really don’t look for or find the most efficient path most of the time, and we behave differently when we’re unsettled or in a hurry than when we’re distracted, on a familiar path, or exploring.

          2. Khizan says:

            It reduces the checks made, but it also makes the AI worse, not better.

            If you want to see an example of this and have access to Baldur’s Gate, fire it up and crank the pathfinding nodes down low. It gets so hilariously bad that it actually served as a cheat code. There’s a point early in the game where you meet Drizzt at the edge of a pond. His scimitars are the best melee weapons in the game, and you can get them at L1 or L2.

            All you have to do is give everybody in your group a bow and a ton of arrows, position them across the pond from him, and then save the game. Quit the game, open up the config, and drop the pathfinding nodes down into the gutter. Save the change, open the game back up, load your save, and open fire on Drizzt. His pathfinding will be so bad that he will never manage to make it around the pond to attack you. He’ll just run back and forth until you roll enough 20s to kill him.

            Try that with full nodes and he rushes around that pond and chunks you in maybe 5 seconds flat.

            1. Benjamin Hilton says:

              I’ve always found an interesting disparity between “smart” AI and realistic AI.

              The system that runs the game is the same system that runs the AI. And at any time the system can let the AI know right where you are.

              There was a period in the early 2000’s where developers tried to make their AI “smarter” by just giving it more information about the player. This resulted in those games where you alert one man to your presence and suddenly everyone knows exactly where you are.

              So in the end it’s not about making the AI smarter, it’s about dumbing it down from its omniscient state to one which resembles realistic behavior (there’s a sketch of that sensory gate below). Unfortunately they usually end up either dumb, deaf, and blind, or a hive-minded super army.

              TLDR:
              It’s like being a DM: You could be unfair and send your monsters right to the party at any time, but it’s only fun if you try to make them act realistically.

              EDIT: yeah I totally wrote this before getting to page two of the article, so I’m just kinda repeating Shamus.
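
              Picking up on that ‘dumbing down’ step, a toy Python sketch of the gate between the engine’s perfect knowledge and one agent (all names and thresholds invented; a real game would also raycast against walls for occlusion):

                import math

                def can_see(agent_pos, agent_facing_deg, target_pos,
                            fov_deg=120.0, max_range=30.0):
                    # The engine always knows where the player is; this gate
                    # decides whether this one agent gets to know it too.
                    dx = target_pos[0] - agent_pos[0]
                    dy = target_pos[1] - agent_pos[1]
                    if math.hypot(dx, dy) > max_range:
                        return False
                    bearing = math.degrees(math.atan2(dy, dx))
                    # smallest signed angle between facing and bearing
                    delta = (bearing - agent_facing_deg + 180.0) % 360.0 - 180.0
                    return abs(delta) <= fov_deg / 2.0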

  4. ET says:

    The one thing I have to disagree with is the CPU cost of many-actors AI vs. the cost of the graphics for those same actors.
    I suspect* that the algorithms that make graphics look good scale much better with the number of actors (i.e. the number of evil robots/armies) than strategy AI does.
    Like linear vs. exponential kind of “better”.
    Just because, with graphics, there’s only so many pixels where robots could show up, and you get all kinds of occlusion and other stuff which can be used to easily reduce the number of polygons you render.
    However, AI needs to think “What actions will army X perform against army Y?” for every combination of X and Y.
    The trivial case is checkers, with X being “the other player” and Y being “me”.
    A game with lots of AI armies will then have AI which scales with the square** of the number of armies, which is why I suspect we don’t have many strategy games which allow you to play with more than a dozen AI armies. :|

    * Read: It’s been about seven years since I studied AI and graphics in depth. Take this rambling estimate with a huge grain of salt. There might be better-scaling algorithms I’m not aware of.

    ** OK, probably O((n-1) * (n-1)), but Big-O notation lets you simplify that to O(n^2) anyway, so…orders of magnitude, and all that.
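
    The scaling claim is easy to see in code. A trivial sketch, with evaluate standing in for whatever threat assessment the strategy AI actually does:

      def strategy_pass(armies, evaluate):
          # Each army sizes up every other army: n * (n - 1) evaluations
          # per pass, so doubling the armies roughly quadruples the work.
          count = 0
          for x in armies:
              for y in armies:
                  if x is not y:
                      evaluate(x, y)
                      count += 1
          return count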

    1. TMTVL says:

      Well, things like transparency and anti-aliasing take up so much processing power that it’s hard to say whether AI or graphics scale better or worse.

    2. Cybron says:

      I think you may be underestimating the number of passes required for graphics there. If I’m not mistaken, just because you can’t see all the pixels doesn’t mean cycles haven’t been spent to draw them.

      But graphics are far from my area of specialty, so I could be quite wrong.

      1. rofltehcat says:

        Shamus actually did some articles a while ago about 3D rendering. It probably wasn’t up to date even back then but it was pretty interesting and showed that it might not be as easy as one might imagine.

        1. Cybron says:

          Yeah, Shamus is probably responsible for 90% of what little I do know about graphics.

          The specifics of the issue escape me at the moment (guess I’m a bad student), but I remember it being more complicated than just throwing it up on the screen.

          1. TMTVL says:

            Maybe you’re thinking of his plasma screensaver?

            Specifically where he mentions the screensaver chugging because of the many passes needed.

          2. guy says:

            Well, the first step is to find out what you want to draw on the screen. You start by converting everything to a coordinate system where things on the screen are inside a cube and everything else isn’t, then determine what’s in the field of view of the camera, and chop objects to fit, which is actually surprisingly difficult to do without creating weird artifacts. Next, you need to figure out what objects in the field of view are obscured by other objects. This is extra fun if you have objects that aren’t opaque.

            THEN you have shaders perform lighting and texturing and bump mapping and so on. Except actually in a modern game you had to do lighting first, because you’re probably using a lighting model that allows non-static objects to cast shadows.

    3. Piflik says:

      In my experience the two most demanding parts of a realtime application are graphics and physics (the ordering of these two is interchangeable, depending on how sophisticated the physics are). AI does take up some CPU power, but just calculating a collision between multiple moving bodies is by itself a nontrivial task (with sub-frame intersection tests, rollbacks and binary-search algorithms, all of which have to be performed multiple times each frame for each pair of objects that are likely to interact in that frame), even with only bounding spheres for collision geometry. When actual triangle-triangle collision is needed, this can explode very quickly: after you have detected likely collisions using bounding spheres, you have to test each triangle of object A against each triangle of object B. You can cut this down quite a bit with BVHs – Bounding Volume Hierarchies – and only test triangles in a certain part of the objects, but this is still a high load on the processor. AIs are still essentially finite state machines – very sophisticated ones, with probabilities (that can change during runtime) instead of fixed state changes – but they still don’t need much processing power, even with multiple actors.
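
      A quick Python sketch of that bounding-sphere broad phase (data layout invented); everything that survives this cheap test still has to go through the expensive triangle/BVH narrow phase described above:

        import math
        from itertools import combinations

        def spheres_overlap(a, b):
            # a, b are (center_xyz, radius); overlap means the pair *might*
            # collide and is worth the expensive narrow-phase tests
            (ca, ra), (cb, rb) = a, b
            return math.dist(ca, cb) <= ra + rb

        def broad_phase(objects):
            # Cheap O(n^2) first pass; survivors go on to BVH /
            # triangle-triangle narrow phase
            return [(a, b) for a, b in combinations(objects, 2)
                    if spheres_overlap(a, b)]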

    4. Zak McKracken says:

      Case in point:
      It was a pretty big achievement for Blizzard to allow armies in Starcraft 2 to pass through choke points fluidly. In SC1, if someone was in a choke point (even if they were part of your group and moving in the same direction), that would lead to the passage being “blocked” and units trying to route around it or just stopping. Now it works pretty well, and the first videos of a hundred zerglings negotiating a choke point and a densely populated base were quite impressive.
      This, however, works independently of your hardware (as opposed to graphics, where you’ll notice the difference quickly), which leads me to think that the trick was not having much more processing power but finding an algorithm that’ll make it happen with very little computing.

      Regarding scientific computing, I heard this quote a while ago:
      “I’d rather have today’s software running on 1985 computers than 1985’s software on modern hardware” — and indeed the speed-up from software improvement in the field has been greater than that from faster hardware. I’d imagine it’s similar for AI in games.

      … I’d like to know how that pathing in SC2 actually works, btw…

      1. guy says:

        I imagine it runs A* or something but is smarter about responding to units blocking the path.
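
        Blizzard hasn’t published the details, so this is only a guess at the flavor: a common trick is flocking-style local separation layered on top of the pathfinder, so units shove past each other instead of treating each other as walls. A minimal sketch (all names and constants invented):

          def separation_force(unit_pos, neighbor_positions,
                               radius=1.5, strength=2.0):
              # Each frame, add a small push away from nearby units on top
              # of normal path-following, so crowds squeeze through chokes.
              fx = fy = 0.0
              for nx, ny in neighbor_positions:
                  dx, dy = unit_pos[0] - nx, unit_pos[1] - ny
                  d = (dx * dx + dy * dy) ** 0.5
                  if 0.0 < d < radius:
                      w = strength * (radius - d) / radius  # closer = harder
                      fx += (dx / d) * w
                      fy += (dy / d) * w
              return fx, fy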

  5. Irridium says:

    Actually, consoles did get a Civilization game: Civilization Revolution, it was called. It was much simpler than the main series, but it was still pretty good.

    1. Matt K says:

      I especially enjoyed it on the DS. Although hand held Civ is a dangerous thing.

    2. Humanoid says:

      Civ2 was on Playstation!

    3. Eruanno says:

      I think that was free with XBL Gold the other month. I downloaded it and never tried it, maybe I should…

  6. Groboclown says:

    One interesting aspect of modern combat-game AI is the ability to more “intelligently” react to events in the environment – think Far Cry, with situations where a tiger can attack enemies, who now have to deal with both the player and the tiger as targets.

    To me, the really interesting AI that’s being developed right now is the new Dwarf Fortress adventure mode (for example http://www.bay12games.com/dwarves/index.html#2014-03-23).

    1. Cybron says:

      Dwarf Fortress is also a perfect example of a game where AI actually IS your primary cost. You let it get enough dudes in an area (or build your fortress in a way that promotes sub-optimal pathfinding) and your processor will beg for mercy.

      When I start seeing some DF-style silliness on consoles, then I’ll believe they’re pushing ‘next-gen power’ for AI.

      1. ET says:

        If they spend all the money on AI, how many polygons will they be able to afford? :P

    2. Aitch says:

      But wat about tze emoshuns? De polygon, they are no there – how to emoshun??

    3. Abnaxis says:

      Yes. I haven’t played DF in a while, but I still subscribe to Toady’s feed so I can see how the AI is progressing, though it’s as much to read about quirks as it is to see the project coming together. My favorite is how a dorf’s relatives will immediately register any body part as belonging to a corpse–including non-essentials like teeth and nail-clippings–when the dorf dies, regardless of distance, and be stricken with grief.

      I’ve always wanted a look under the hood in that software, because all the AI stuff strikes me as though it would benefit from parallelization. I mean, let’s be honest–all those teraflops in the GPU are never going to be used for graphics, so they may as well be used for something.

      Alas…

      1. Cybron says:

        Toady is awesome, but he is not a professional programmer, nor even a particularly good one. What little we’ve seen of his code is pretty cringe-inducing. He is almost certainly not willing or able to perform major optimizations like parallelization on his own.

        He is also extremely protective of his code. As I recall, he has only once let someone in to perform optimizations (which may have actually been offloading certain operations to the GPU; I don’t remember), and pretty reluctantly at that. Even then, he didn’t let them access everything.

        I don’t expect we’ll see significant improvements in DF’s performance any time soon.

        1. Deoxy says:

          And this is a big part of why Minecraft came and stole his thunder, took his lunch money, and made him say he liked wearing dresses to school. I like Minecraft, but I SO SO SO wish DF had managed to do that instead.

  7. Alan says:

    In your column you mention FEAR’s excellent AI. Seems a good opportunity to link to the paper describing how FEAR’s AI works. (And which I suspect I originally found through an earlier Twenty Sided post.) As a non-AI person (but I am a programmer), I found it quite accessible and fascinating.

  8. Ehlijen says:

    I think the most resource intensive AI is somewhere in between the two examples you list, Shamus. Let’s call it tactics AI?

    Think Chess or Go, or for my favourite example, the MegaMek testbot AI (an attempt to create a Battletech playing computer, hilariously incompetent AND slow so far at times).

    The real challenge comes when you give an AI many different but mutually exclusive options that need to counter similar choices made by the player.

    Each turn in chess or the like is very simple. Pick a piece, pick a legal destination, move it. The real trick is in the first part and to figure that out, you need to calculate the outcome of every possible move before you know which piece is best to move. But you don’t know which is better without analysing the next response turn of the other player for each of your possible moves and to know which of those is best and therefore most likely until you calculate etc etc…

    That’s where hardware limitations come in; you can only put so many moves into memory, and you can only branch the decision tree out so many times before the CPU gets too busy. That is an awful lot of ‘manys’ with current hardware, to be sure, but if that is the kind of game you are making, more hardware would make the AI better.

    Of course, that is not the kind of game that is traditionally released on a console, and particularly not Thief, so I also don’t know what the Thief developers were on about.

    In Civilization-style games, unit counts are usually small and combat is fairly simple. The economy has important decisions, but not too many (unless you’re MoO3, but then you’re MoO3 and there is no cure anyway). In realtime games (strategy or shooters), decision depth isn’t as important because the time pressure keeps the player moving fast and not fully utilising their own full decision depth either.
    But in turn-based games with high decision counts, the AI has to pass some muster in skill and speed or it will not convince. We’ve gotten there with chess through hundreds of years of analysis of the game. I’m told Go still isn’t anywhere close to good AIs. And newer games with high decision depths just haven’t had the research time thrown at them.

    (Challenge, try to make an AI to play a 3e DnD 10th level wizard, including spell preparation choices, against a variety of opponents from the monster manual :p )

    1. Cybron says:

      “Each turn in chess or the like is very simple. Pick a piece, pick a legal destination, move it. The real trick is in the first part and to figure that out, you need to calculate the outcome of every possible move before you know which piece is best to move. But you don't know which is better without analysing the next response turn of the other player for each of your possible moves and to know which of those is best and therefore most likely until you calculate etc etc…”

      Brute force algorithms aren’t really practical, even for games like chess. Generally it’s better to employ heuristics, which vastly cuts down on the number of states you need to generate.

      “(Challenge, try to make an AI to play a 3e DnD 10th level wizard, including spell preparation choices, against a variety of opponents from the monster manual :p )”
      Then you’d have to write an AI for the monsters, and that’d just be silly.
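
      For reference, the textbook shape of the search being described – depth-limited minimax, with the heuristic evaluation standing in once the tree is cut off. Here moves, apply_move, and evaluate are placeholders the caller supplies; real engines add alpha-beta pruning and move ordering on top:

        def minimax(state, depth, maximizing, moves, apply_move, evaluate):
            # Depth-limited minimax: instead of expanding the game tree to
            # the end of the game (infeasible), stop at `depth` plies and
            # fall back on a heuristic evaluation of the position.
            legal = moves(state)
            if depth == 0 or not legal:
                return evaluate(state)   # heuristic guess at who's winning
            scores = (minimax(apply_move(state, m), depth - 1,
                              not maximizing, moves, apply_move, evaluate)
                      for m in legal)
            return max(scores) if maximizing else min(scores)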

      1. Ingvar M says:

        However, a good heuristic evaluation function is not always trivial. And without one, your trade-offs are “improve average-case time” (by selectively deciding where to deepen the search) and “make the AI worse” (by pruning or by lessening the search depth).

    2. Tizzy says:

      Talking about built-in limitations: I remember playing chess on the entertainment system of an airplane. At any difficulty level, I could take the computer’s queen on the third move. Consistently. I replayed that move sequence more than once.

      I guess the system’s limitations really didn’t lend themselves to a chess game. Stick to Bejeweled.

      1. guy says:

        A search depth of three is far more terrible than I’d expect from much of anything. Did you win those games?

        1. Tizzy says:

          I think the poor quality was due to speed concerns: entertainment systems in airplanes are notoriously slow, and the wait for the computer to take its turn was already agonizing.

          And yes, I won those games. First time I could beat a computer, because I never learned to play chess properly.

    3. Zak McKracken says:

      I remember reading some months ago that the current Go champion has been beaten by a machine. Go might be more complex than Chess but it’s still a fairly “mathematical” game, and therefore it stands to reason that a computer will eventually win.

      For Chess and Go (and checkers before them), though, one major contribution was software development. AIs have moved away from creating a prediction tree for each move, and instead work from databases for some parts of the game, which are either pregenerated or taken from previous games and contain sets of positions: “if this is what the board looks like, do this” — around 2000 the first Checkers software moved to using databases exclusively, thus “solving” the game: the computer _cannot_ lose, because for every possible move there’s an appropriate answer in the database.
      For Chess, databases play a huge role as well, and then with “big data” and what have you, DB software and algorithms have become a lot better.

      … I think it’d help a “good” AI more to have more (and faster) RAM than more CPU power. Since the time needed is still exponential with search depth, though, and you really don’t want the quality of the AI to depend on a computer’s outfit (or, worse, the current load due to graphics), I guess it’s just toned down to keep it safe. Also, to have more “control of the player’s experience”.

      1. Cybron says:

        I would note for that checkers example, that’s actually terrible AI for the purposes of gaming – you don’t WANT an unbeatable opponent. You want something that’ll challenge the player to some extent while remaining beatable.

        Ideally, you want it to be unpredictable as well (no one likes an opponent who always loses the same way), so introducing a certain degree of randomness to the AI’s play is pretty much essential. Straight databased responses will never produce an interesting opponent.

        1. Tizzy says:

          That’s why the term AI in the context of gaming is hotly contested by the people who do AI to simulate intelligence (rather than behavior). And to be fair, they were there first.

      2. Tizzy says:

        GO: Also, 10 to 20 years ago, a group of mathematicians were able to beat professional Go players on certain end-game challenges (when both players are close to tied). That’s just how mathematical the end-game is: eventually, sitting down and doing the incredibly abstract math can beat people with a lifetime of experience and good instincts.

    4. Deoxy says:

      Chess is not the best choice, really – it’s too simple.

      Honestly, I’m surprised chess hasn’t been fully solved yet (that I know of) – every possible position of pieces, linked with every possible position that could be the result of those positions, with win states marked.

      The outcome wouldn’t be all that big, by today’s standards – 64 locations, 12 total piece types (6 kinds for each player), a max of 32 pieces, whose turn it is, and what game state (win for black, win for white, draw, none of the above). Give that an identifier and make a cross-reference table against itself for which states it can move to…

      Heck, with a good notation system, the “identifier” for each state would contain the state, and all you would need is the cross-reference file.

  9. Piflik says:

    I just recently created a ‘game’ for university (or rather a prototype), where I wrote an AI for the enemy. It is a very simple state machine, and if you don’t know how it works, it can be really hard (but as soon as you see through it, you can bait it to actively catch your shots). Writing combat AI (as you call it) is really hard if you don’t want it to wreck (and frustrate) the player. My AI has nearly perfect aim and makes ‘optimal decisions’ (in the scope of the really simple gameplay mechanics)…there are some rough edges and bugs, but it is the best I got on my first try…I don’t even know how I would go about making it less efficient…

    The game was roughly EA’s take on Pong, with tanks and explosions (no day-1 DLC, though, or online DRM) and visually inspired by Blood Dragon…at least I tried…

    If you’re interested, you can download it here…nothing special, really…also you might need a rather up-to-date machine (mine is roughly 2 years old) to run it properly, since I suck at optimizing (which I learned the hard way, when I wanted to showcase it on a quite old notebook and had to create a new build with reduced effects)

    1. ET says:

      Neat graphics! :)

      1. Piflik says:

        Main purpose: Hide the shallow gameplay :D

        But thank you.

        …just like in the AAA industry…

  10. Jeff R. says:

    The lack of good console grand strategy games isn’t because of their AI capacities, it’s an interface issue. And not an unsolvable interface issue either; it’s just that the existing franchises have been designed around KB+M interfaces for decades and would have to have their UIs redesigned from the ground up. Or else publishers could accept that, yes, Civ V on the console would be more than sufficient to get a whole lot of users to buy or find a USB keyboard to plug into their console in order to play.

    1. Piflik says:

      I am actually one of the poor fools who bought Stormrise on console…I really tried to like the game…twice…but the user interface is just abysmal…it is lacking both overview and control, and these two are essential for any RTS…

  11. Taellosse says:

    As I understand it, in raw numbers, Sony has sold around 6 million PS4s so far, while Microsoft has sold about 4 million XB1s. Not the drubbing that we might have hoped for, given how Microsoft was acting at first, but then, they also backpedaled a lot on those policies. I suspect the disparity is reflective of, more than anything else, the $100 price difference between the two devices. It would probably be a larger disparity if Microsoft were not benefiting from an overall stronger launch library, and the gap may well end up closing noticeably with the recent release of Titanfall, which by all accounts is selling quite well. I’d like to think Infamous: Second Son (a personal favorite series of mine, and the reason I got on board with a next-gen console this early) would do a similar job for Sony’s own sales, but the realist in me recognizes that it just isn’t as big a thing.

    1. TMTVL says:

      Well, most serious console gamers nowadays probably just buy them both, if only for the exclusives (which, considering Demon’s Souls was PS3 exclusive, I fully understand).

  12. Volatar says:

    If you want an example of an AI that hits the processor cycles really hard, look no further than AI War: Fleet Command. It’s an RTS/4X hybrid which can scale to tens of thousands of ships. The AI thread is consistently the hardest-hitting part of the game engine.

  13. DrModem says:

    I haven't been following the PS4 vs. Xbox narrative very closely, but every couple of weeks I see a headline or forum post detailing how one console is beating the other.

    War never changes

    1. ET says:

      Lame graphix!
      Worst game evar!
      8.5/10!!!

      In all seriousness, though, that game just went onto my Steam wishlist. :)

  14. J. Random Lurker says:

    If you were to grab a random AI programmer in a game studio and ask him (or her) what he does all day, you’d likely hear about writing gigantic state transition graphs, with such nodes as “in the middle of changing weapon” and “slowing but still running”, and arrows between nodes labeled “just heard the noise of a gun reloading” and “cover less than 10 feet in front”.

    Except in modern games, the AI has to fight for its slice of the game frame with the animation system, which is busy going through its own state-transition-graph and tree-expressions to compute which animations to blend from its banks and where to place the character’s feet to represent someone slowing down from a run while drawing a weapon and cocking his ear toward a noise.

    In fact the two systems might be completely intertwined and worked on by the same programmers.

    Here’s a document from Valve on their Left 4 Dead AI, as illustration: http://www.valvesoftware.com/publications/2009/ai_systems_of_l4d_mike_booth.pdf
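
    The skeleton of such a state transition graph is tiny, even if shipping ones are enormous. A toy Python sketch (every state and event name here is invented):

      # Every state and event name below is hypothetical.
      TRANSITIONS = {
          ("patrolling",    "heard_reload"): "investigating",
          ("patrolling",    "saw_player"):   "attacking",
          ("investigating", "saw_player"):   "attacking",
          ("attacking",     "low_health"):   "seeking_cover",
          ("seeking_cover", "reloaded"):     "attacking",
      }

      def step(state, event):
          # Advance one event; unknown (state, event) pairs change nothing.
          return TRANSITIONS.get((state, event), state)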

    1. Cybron says:

      Super interesting read. Thanks for the link.

    2. Tizzy says:

      Yeah, except for the AI for F.E.A.R. Apparently there is a good reason why it was considered so revolutionary: it tries precisely to avoid the huge-finite-automata paradigm.

      A fascinating read! I am ever so grateful that these incredibly smart and knowledgeable people are willing to share their science with us.

  15. Ronixis says:

    Kind of surprised to see the NCR in the mook list there. They don’t really seem to belong in that company.

  16. Nonesuch says:

    The Thief AI thing mentioned might not be at 1998 levels, but I remember running into stuff like that playing Nightfire (a GameCube-era 007 title). The enemies in a level have three settings: they either haven’t seen you and go about their scripted peaceful patrolling, they’re alerted to your presence, which causes their patrol routes to change a little and seems to improve response times when they see you, or they’re engaged in active combat with you.
    What I’ve noticed is it can never go back to the peaceful state you generally have at the start of the level. Other sections of the level might still have peaceful NPCs that haven’t been alerted yet, but the ones who have reached that state can only become more aggressive. A quick wiki search shows this is a game that’s over a decade old (released in 2002). I don’t think changing enemy states is cutting edge.

  17. Daimbert says:

    The difference between Strategy AI and Combat AI reminds me a lot of the difference between Computer Science AI and Cognitive Science AI. In Computer Science AI, you want the AI to complete a task or set of tasks as efficiently and correctly as possible, without any mistakes. In Cognitive Science AI, you want the AI to act as human as possible, including making mistakes, because that’s the only way you’ll get insight into how human intelligence works.

    Most people working on game AI would have Computer Science AI training.

    As for the new console generation: since I only bought the PS3 when I went to a high definition TV, and since beyond that I only buy systems for Persona games (a Vita for P4 Golden, a PSP for P3P, Persona, and Persona 2: Innocent Sin), and Persona 5 is supposed to be for the PS3, I can’t see myself getting a new console anytime soon.

  18. bionicOnion says:

    On the note of AI not maxing out its CPU budget during the last generation:

    AI maxed out its CPU budget last generation. Case in point: I attended a talk at GDC last week, during the AI Summit, where one of the AI programmers from Volition talked about his experiences developing AI for Saints Row 3 and 4. He called particular attention to the number of ray traces budgeted to the AI system per frame – just four in Saints Row 3. To put this in perspective, a ray trace is a crucial element for determining things like line of sight between two characters, and a linear increase in the number of characters would necessitate an exponential increase in line-of-sight checks. At that rate, computing line of sight for every character onscreen during major conflicts took over 10 seconds. Line of sight is crucial to any intelligent-looking AI; a good 10-20 minutes of the talk was dedicated to ways to work around this problem (the good news is that they found a way to get that count up from 4 to 80 for Saints Row 4 by utilizing the magic that is multithreading).

    My point here is that AI is allocated a pitiful amount of the CPU’s resources, and there’s a hell of a lot of stuff that needs to be done with what’s there. If we truly want to have ‘next-gen AI’, then the technology is out there–combining existing techniques with increased complexity could go a long way–but we need to show developers and publishers that we care enough. Sure, the graphics are the shiny part, but if we settle for some more modest visualizations (or some other equivalent trade-off), we can give more CPU cycles to the AI guys and hopefully get something amazing out of it.
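
    Not Volition’s actual code, but a Python sketch of the general budgeting idea described above: cap the ray casts per frame, cycle through the agents round-robin, and let everyone else reuse a slightly stale cached answer:

      from itertools import cycle

      class LosScheduler:
          def __init__(self, agents, rays_per_frame=4):
              self._queue = cycle(agents)      # round-robin over agents
              self._budget = rays_per_frame    # hard cap per frame
              self.cached = {}                 # agent -> last LOS result

          def update(self, raycast_to_player):
              # Spend this frame's ray budget; every other agent keeps
              # acting on its cached (slightly stale) answer.
              for _ in range(self._budget):
                  agent = next(self._queue)
                  self.cached[agent] = raycast_to_player(agent)
              return self.cached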

    1. Abnaxis says:

      Sorry to nitpick, but I’m pretty sure it’s actually a polynomial O(x^2) increase, not an exponential one.

      I wouldn’t normally be a pedant like this, but it’s a pet peeve of mine when people use the word “exponential” to mean “scales poorly”.

      1. bionicOnion says:

        You’re absolutely correct. It is indeed polynomial; I got my terms confused. Whoops.

        I’m also noticing now how many times I wrote the phrase ‘AI’ in that first sentence. I need to learn how to write like a real person at some point.

  19. wererogue says:

    For strategy AI, I really want to see somebody invest in tuning something built from the Berkeley Overmind. The Overmind is tuned to win, but I’d love to see it applied to playing like specific types of player, capturing the nuances in playstyles.

    The main challenge would be that in order for it to work, you’d have to capture a lot of data – and the best stuff wouldn’t be available until after you launch your game!

    Still, once you’ve got the ball rolling, you could roll out bots based on your top 10 players on a regular basis, and give everyone a chance to play in the big leagues.

  20. Decius says:

    The way to use better hardware to make better AI is to develop an AI that understands that it has incomplete information about the world and knows how to increase what information it has, and then supply that AI with only the information its avatar should have.

    “Information” should mean something like “Hostile was sighted at (x,y,z) N seconds ago, moving north”, combined with information about where hostiles are not.

    Yes, I do want a stealth game where enemy behavior is emergent from information that the enemy agents have, and they preferentially search spots near where I might be, even if I haven’t been spotted there recently. It’s critical that they only respond to information they have, rather than e.g. searching where I hide after I have broken contact with them.

    Doing this right requires tracking the information state of each enemy separately, even if they communicate, and tracking for each enemy where they think the player might be, based on their information.
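
    A Python sketch of the bookkeeping that implies (names invented): each agent keeps its own last-known sighting, shares only that sighting over ‘radio’, and searches from it rather than from the player’s true position:

      class EnemyBelief:
          def __init__(self):
              self.last_seen_pos = None    # where the player was sighted
              self.last_seen_time = None   # when; None = never seen

          def observe(self, player_pos, now):
              self.last_seen_pos = player_pos
              self.last_seen_time = now

          def share_with(self, other):
              # Radio contact hands over a (possibly stale) sighting,
              # never live data about the player.
              if self.last_seen_time is not None and (
                      other.last_seen_time is None
                      or other.last_seen_time < self.last_seen_time):
                  other.last_seen_pos = self.last_seen_pos
                  other.last_seen_time = self.last_seen_time

          def search_anchor(self):
              # Start searching from the old sighting, not from where
              # the player actually is after breaking contact.
              return self.last_seen_pos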
