Experienced Points: Is the World Ready for Deep Network AI Opponents?

By Shamus Posted Monday Jan 25, 2016

Filed under: Column 120 comments

This week: I talk about the lack of apparent progress in opponent AI in games, why that is, and what challenges we might face if we wanted to put REAL AI (such as we have so far) to work playing games.

For the record: The description I gave for how deep learning works is pretty sloppy. So don’t read that and think you know what deep learning is. It’s actually way more complex than I made it sound. You’ve got to understand something really well before you can translate it into plain language, and I am pretty far from an expert in this stuff. The article still works (because my points aren’t based on HOW deep learning works, only on the expense and effectiveness of it) it’s just that I want to make clear that my explanation is a gross over-simplification.

 



120 thoughts on “Experienced Points: Is the World Ready for Deep Network AI Opponents?”

  1. Da Mage says:

    Oh my god Shamus, don’t you know that AI is improved simply by adding more hamster wheels to the AI matrix.

    :P

  2. Narkis says:

    That sounds a lot like the process Stardock used for the AI in Galactic Civilizations 2 and now 3. They had the AI learn from what the players did, updating its algorithms with each patch. GalCiv 2 had the best AI I’ve seen in a 4X game, and while 3 is not there yet, it’s been getting better and better since its release.

    1. Ninety-Three says:

      Was the AI itself learning though, or were the programmers studying data to determine the places the AI was weak and shoring up those places with manually-coded changes to the AI routines?

      1. Narkis says:

        Good question; I got no idea.

        Does the difference matter to the end-user, though? In either case the result is an AI that becomes better by watching the players, no matter whether the process is manual or automatic.

        1. Ninety-Three says:

          It doesn’t matter to the users, I brought it up partly out of curiosity, but partly because Shamus was talking about business cases, and it’s a very different proposition to say “Let’s build a complicated learning AI and set it loose to self-update” versus “Let’s build a regular old AI then periodically sink more hours into patching it to be better”. I’m not sure which case is more appealing to the business exec, but it’s definitely different.

    2. Arctem says:

      GalCiv2 doesn’t have any kind of learning AI, just very good AI writers. They also do something that most turn-based games don’t do: they do most of their AI computations on the player’s turn rather than on the AI’s turn. This means that they can do more complicated calculations (consider more possible moves, for example) than in a game like Civilization, where the time the AI can spend thinking is far more limited since you don’t want the player to get bored while waiting for their next turn.

  3. Decius says:

    Making AI is exactly as easy as creating a scoring system that can reliably tell you which of two different game-states is better. If you can’t do that at all, you can’t make an AI, and if you can do it perfectly, you can make a perfect AI.

    1. Ninety-Three says:

      Not quite. For example, if Alice and Bob play Starcraft against each other, and it is known that Alice has better micromanagement skills than Bob, that makes Bob want to avoid certain fights, effectively shifting how all the game-states are scored. A good AI wouldn’t just score game-states, it would be able to play the meta-game of finding and exploiting the strengths and weaknesses of its opponent.

      1. Zak McKracken says:

        Yes, writing a really good, non-cheating Starcraft AI must be hardcore. Unless it was programmed to resemble a human, though, it'd always have the advantage of the quickest reactions ever, thus never overlooking that one scouting drone trying to put proxy cannons in the back of its base. It'd see all that can be seen, and it'd react directly. It'd have perfect micro, too, even while fighting two battles, running one drop and building two expansions, because it could task-switch at 60 frames per second.

        Not sure if that would make up for tactical shortcomings, but it’d surely be interesting to see. It’d definitely beat me, though.

        1. Tizzy says:

          A good Starcraft AI can simply eschew understanding its opponent's weaknesses and go for the overwhelmingly best strategy, always. And beat both humans and other AIs. That's what the Berkeley AI team did with Brood War.

          1. Ninety-Three says:

            Starcraft might be a bad example then, but it’s easy to imagine a game where the balance places a bit more importance on the metagame, making that relevant again. If I had to guess at what such a game might be, I’d say DOTA.

            Are there people writing DOTA AIs? I’m suddenly more interested in that than I am in actual DOTA.

            1. kanodin says:

              Yes, I remembered reading a while ago there was an all AI tournament: https://www.reddit.com/r/DotA2/comments/2ov5qg/ai_ti_the_all_bots_tournament_casted_by_rustsedje/
              https://www.reddit.com/r/DotA2/comments/2lf0ok/congratulations_winners_of_the_very_first_dota_2/

              I assume people are still working on the AIs and just haven't had another trial run since then.

          2. Daemian Lucifer says:

            Except that wasn't a non-cheating AI. Compare the man-vs-machine battles and see how the AI has 10 times more APM. Not to mention all the other little exploits, like being able to pick the correct unit from a tight clump, which no human can do. When AIs win in Starcraft, it's not because they picked the superior strategy, it's because they had machine reflexes.

            It's impressive, sure, but it definitely is not fair.

            1. Alex says:

              APM is not cheating, and neither is fine motor control. Its impact on Starcraft II is merely evidence that there’s too much Cookie Clicker in the game for it to be a good RTS at the technical level.

              1. Daemian Lucifer says:

                If the AI can individually select each of the 20 units and give them a different command every second, that is most definitely cheating. The rest of what you said makes no sense.

                1. Ninety-Three says:

                  You have a very strange definition of cheating. Cheating is when you hack the client to dispel the fog of war, not when you play exactly by the rules of the game but your reflexes are better than everyone else’s. It’s certainly unfair, but lots of unfair things aren’t cheating.

                  1. Decius says:

                    It makes sense to either limit AI APM or to put AI and humans in different leagues.

                    Limiting AI APM is just going to create an arms race of micro with fewer APM.

                    It might be interesting to have a league of APM-limited AI.

                    1. Ninety-Three says:

                      In a way, Starcraft already has that functionality built-in. If you had a superhuman AI play a bunch of ranked games, its rank would outstrip all the human players, thus putting it into a league of its own. Either that or it would rise to a point where the superior strategic ability of humans could outplay its insane micro, and it would settle into a stable rank with a 50% win rate.

                      That said, I don’t know how the matchmaking would handle that if there was only one super-high-ranked AI. Would it get paired down with the top human or would it be unable to find a match?

                  2. Daemian Lucifer says:

                    The point of a good AI is to recognize what its opponent is doing, and then find and execute a good counter for it. If the AI uses an inferior strategy but wins through superhuman micro, that's cheating.

                    So if the AI sees that the human is building tanks, and then switches into mutalisks, that's a good AI. If it continues building zerglings, but wins because it's able to outmaneuver the human and avoid the splash damage through superhuman micro, that's a cheating AI.

                2. Decius says:

                  How many APM is cheating?

      2. Phill says:

        Depends on whether you want an AI that plays a perfect game, in the sense of the best possible game against another player making no mistakes.

        Being able to exploit an opponent's weaknesses might be stronger, but it also opens the avenue to AI manipulation: being hustled. You trick the AI by playing systematically sub-optimally in areas where it doesn't really matter, so that it will play sub-optimally to try and exploit that, giving you an AI weakness to exploit when it really matters.

        A better AI naturally anticipates the way in which you are trying to lure it into sub-optimal play, and at which point you are going to spring your trap, so it has its counter-trap ready. But I think we are obviously way beyond the capabilities of any AI in the foreseeable future here. Heck, most humans don't have that kind of capability.

        So the best AI strategy is probably to stick to the ‘perfect’ style of the first paragraph – optimal play assuming your opponent is also playing optimally. It is the best in the sense that it provides the fewest opportunities for the opponent to exploit it, rather than gambling for a stronger strategy against an opponent who won’t see how to beat it.

        1. Decius says:

          There is no way to gain an advantage by playing suboptimally. If you think you can, you are probably mistaken about what the optimal strategy is.

          1. Ninety-Three says:

            But there are plenty of games without a unifying optimal strategy. Chess has an optimal strategy, Goofspiel still has good and bad strategies (making highest-possible bids on lowest-possible cards and vice versa guarantees a loss or a tie) but it also has a rock-paper-scissors metagame of strategies.

            If I’m playing a game with randomness, it’s easy to end up in a situation where I can commit to one of two strategies. Strategy A gives me a flat X% win rate, strategy B is guaranteed to lose if the opponent plays optimally, but guaranteed to win if they make a particular suboptimal play. How do we define which strategy is better there?

            1. Decius says:

              You generate a probability distribution of your opponents’ strategies, and multiply that distribution by the outcome (which hopefully can be made a binary win/loss).

              There’s an element of recursion here as well, since what strategy you envision the opponent using depends on what strategy you think they expect you to use. Which is why game theory is a thing.
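
              Decius's recipe (weight each possible outcome by a probability distribution over opponent behaviours) can be sketched in a few lines. All the numbers below are hypothetical, purely for illustration:

```python
# Pick between two strategies by weighting each outcome by the
# probability of each opponent behaviour. All numbers are made up.

# P(opponent plays optimally) = 0.7, P(makes the exploitable mistake) = 0.3
opponent_dist = {"optimal": 0.7, "mistake": 0.3}

# Win probability of each of our strategies against each opponent behaviour.
win_prob = {
    "A": {"optimal": 0.45, "mistake": 0.45},  # flat 45% either way
    "B": {"optimal": 0.0,  "mistake": 1.0},   # all-in on the exploit
}

def expected_win(strategy, dist=opponent_dist):
    """Expected win rate of a strategy under the opponent distribution."""
    return sum(p * win_prob[strategy][behaviour] for behaviour, p in dist.items())

best = max(win_prob, key=expected_win)
print(best, expected_win("A"), expected_win("B"))
```

              With these made-up numbers the flat 45% strategy wins out (0.45 vs 0.30); push the mistake probability above 45% and the exploit becomes the better bet, which is the recursion Decius mentions.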

          2. Phill says:

            I’m talking about the difference between optimal play vs perfect (optimal) play by the opponent (i.e. two players following the same strategy) vs optimal play vs a particular non-optimal opponent.

            Consider a stupid example from chess: it is not optimal play to go for fool's mate, because it can easily be countered by any competent player, and if it is countered you are in a worse position than your opponent. But if you happen to know that your opponent doesn't know how to counter fool's mate (and will not stumble on the counter by accident), then they will lose to it, and it is the quickest way to win, and a guaranteed win.

            Let's make it more obvious by saying you are playing against a poor chess AI that, due to a bug or just very poor algorithms, will be beaten by fool's mate every single game. That makes it optimal play against that particular opponent, despite being very far from optimal play in the more usual sense of perfect play against a perfect opponent.

            If you want to argue about whether that is suboptimal or not, I'd say that I'm specifically talking about suboptimal play being optimal play against a flawed opponent, as in the fool's mate example. That might be quibbling over definitions though.

      3. Decius says:

        That’s part of scoring game states.

    2. Zerotime says:

      until(winning) { win(); }

      There, jeez, I don’t get what the big deal with this AI business is.

    3. Shamus says:

      In the case of something like Civilization, that scoring system IS the AI.

      Is it better to build a city on this vulnerable spot with lots of resources, or on this protected spot with fewer resources? That depends: Are there other players nearby? How likely are they to go for early aggression? Are they expanding in this direction? Are they stretched thin already? Maybe I should let them grab it, and then take it from them. Maybe that resource isn’t really valuable to me, but it would be a boon for the nearby rival, so I should grab it anyway. Based on the units we see roaming around, can we infer anything about their early-game strategy? How will these city layouts impact our ability to control map movement in the mid-game?

      When someone says they have a “gut feeling” about this or that approach, they’re really saying that it feels right based on past experiences. “I’ve played a few hundred games of this now and I know the map generator pretty well. Based on what I’ve uncovered so far, I bet there’s a really big unexplored chunk of land right here. Genghis and Cleo are fighting already, and I know Cleo is on the same landmass with me, so the other player probably has that chunk of real estate to themselves. Therefore I should let Cleo and Genghis fight and I’ll make some ships…”

      This is why players are able to spank the AI so frequently, even with the AI cheating its ass off. It would be impractical for a programmer to attempt to re-create all of that complex thinking through code. So instead they let the AI peek at your cities, give itself free shit, and then make short-term plans based on who has the most guns.
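
      A toy sketch of the kind of hand-written scoring Shamus is talking about; every factor and weight here is invented, and a real 4X evaluation would have to weigh far more of the questions above:

```python
# A toy version of a city-site scoring heuristic: each candidate site
# gets a score from hand-tuned weights. Every factor and weight here
# is hypothetical -- real 4X AIs are far more involved.

def score_site(resources, defensibility, rival_distance, rival_wants_it):
    score = 3.0 * resources + 2.0 * defensibility
    score += 0.5 * rival_distance          # farther from rivals is safer
    if rival_wants_it:
        score += 4.0                       # denial value: grab it first
    return score

exposed_rich = score_site(resources=5, defensibility=1, rival_distance=2, rival_wants_it=True)
safe_poor    = score_site(resources=2, defensibility=4, rival_distance=6, rival_wants_it=False)
print(exposed_rich, safe_poor)  # 22.0 17.0
```

      The hard part is exactly what Shamus says: the right weights depend on context (nearby rivals, map knowledge, mid-game plans), and static numbers like these can't capture that.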

      1. Ninety-Three says:

        The Civ AI gives itself free shit? I know that the difficulty modes make the AI start with more resources, and they increase some of its production rates, but the way I’m reading that sounds like it occasionally just cheats and spawns in a unit or adds 50 gold to its coffers, is that right?

        1. Shamus says:

          As I understand it, that’s how it works on the higher difficulties. I don’t know how much. I always assumed it was just a direct bonus of some sort. Like, AI earns 25% extra gold per turn or whatever.

          1. Hector says:

            It varies from game to game, but in general, a high level AI opponent in Civ games gets substantial bonuses to all forms of production. This also means the AI tends to ignore things like “buildings”, which are required for the player to actually function. Also, in some editions the AI can functionally achieve technologies without actually researching them and spawns military units for free. Truth be told, Civilization barely has a functional AI at all. It solely exists to roadblock the player as it doesn’t interact with the basic rules.

            Now Paradox games. That’s some AI. With a very few exceptions, it uses the same rules as the player. Although Lucky Nations can go STRAIGHT to hell.

            1. Matthew I says:

              Eh. Realistically speaking, the AI in Paradox games tends to be as dumb as a rock — take for example Europa Universalis 4, where multiple people have started as Ryuku, as poor and isolated as game mechanics will allow, and literally conquered all of Asia, and Europe, and Africa. There are all sorts of ways players can abuse AI quirks (a common strategy is exploiting strait-crossing mechanics to trap entire armies on insignificant islands). One gimmick of paradox games is that the playing field is the whole earth, and the player can play as literally any country that existed in the time span the game covers — which means that at any time, there has to be an AI process running for each of the hundreds of countries that are playable. I suspect that the Paradox games are one of the few cases where performance is a major concern.

              Incidentally, there was a post on the Paradox forums a while ago about how the AI does and doesn’t cheat, but it’s quite out of date. (I think one new addition is that AI nations get a free diplomatic relationship slot, that they will only use to form alliances with a human-played nation. Does this count as an AI or a human cheat?)

          2. Omobono says:

            Depends on the game and difficulty, but it can include: free units (including workers and settlers) at game start, free techs at game start, less gold maintenance for units, buildings and/or cities, less gold for upgrading units, discounts on all costs (science, wonder and non-wonder production, culture) (as in, instead of having x% more yields, everything costs x% less), more happiness, more health when applicable, and sometimes bonus experience on new units.

            As far as I know the only area where the AI never cheats is combat resolution*.

            *Unless it’s civ 5, where the AI can sometimes do attacks that would be illegal for humans. Even there, the actual numbers are legit.

          3. Phill says:

            IIRC in Civ 1 and 2 the AI would sometimes get extra free units out of thin air under certain circumstances, but I'm pretty sure that from 3 onwards AI advantages have been (aside from extra initial units) cheaper research and buildings, bonuses to happiness / health / whatever city-growth limiting mechanics there are, lower corruption, the ability to support larger armies, and often the ability to not worry about running a large cash deficit.

            In some versions the AI has also been unaffected by fog of war, which is more or less a cheat (particularly when it unerringly hunts down your ships), but that is as much to do with the considerable difficulty of programming an AI that handles uncertain information like fog of war effectively as with deliberately deciding on it as a cheat. And it worked against the AI in some cases. I think in 3 you could keep an AI from ever attacking you by shifting units around to alternate which of two cities was undefended. The AI would head for the undefended city, and you could keep its army shuttling back and forth between the two undefended targets as you moved troops in and out, so it would never get around to attacking. This was eventually patched so that the AI would start heading for the undefended city and lock on to it. Which was also semi-exploitable, since you knew the AI was going to ignore your other cities, but at least it attacked, and if it had an overwhelming army, it would win.

        2. wswordsmen says:

          I am going off pure memory of what Civ4 would do:

          1) The difficulty would give the AI bonuses (or penalties for the bottom 2 levels). They ranged from small bonuses for Prince to an extra settler (free city basically) on Deity.
          2) The player would have penalties on various things like research time based on the difficulty level. This would mean that the player would take longer to do what the AI would do.
          3) Luck would be worse for the player as difficulty increased.

          Don’t think it ever actually got free stuff (after the first turn) from nowhere but it had the tools to get stuff cheaper than the player.

        3. EmmEnnEff says:

          On higher difficulty levels, the AI production/happiness/growth bonuses in Civ 5 are absurd. Only paying 60% of the happiness penalty, production multipliers, 2x growth multipliers, starting with all four basic techs… It means that a mid-game city founded in the Arctic by a Deity AI will be of size 16. End-game AI empires can have >100 happiness, despite having many large cities.

          And yes, the AI will never run out of money.

          Unfortunately, all of this is necessary, because the AI is completely incapable of focusing its efforts… or of waging war. As far as I understand, its algorithm for fighting is ‘try to move into random tiles.’

    4. Falterfire says:

      Except in most games the only really important score is whether a state leads to eventual victory, which you can’t know without also knowing subsequent states. Whether a given position is better than another really depends on which paths each game state is likely to lead to, which means that for any competent AI ‘scoring a game state’ really means ‘scoring a game state and every decision for X seconds/turns after it’.

      1. Ingvar says:

        Well, the whole idea with “scoring the game state” is to give (effectively) a probability that a given static game state leads to a win.

        If you have infinite time and computing power, you can simply simulate all games “reachable from here” and pick the move that leads to the most wins (or the most losses for your opponent, or whatever other thing you want to achieve), but since time and computing power is limited, you go N moves “deep”, use your static scoring function on all game states needed, then roll that up, to see what move is (probably) best.

        Well, at least that’s what you do with something looking like a relatively classic chess AI.
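
        That classic chess-AI loop (go N moves deep, apply the static scoring function at the frontier, then roll the values back up) is essentially minimax. A minimal sketch over a made-up toy game, purely for illustration:

```python
# Depth-limited minimax: recurse N plies, fall back on a static
# scoring function at the frontier, and roll the values back up.

def minimax(state, depth, maximizing, children, static_score):
    moves = children(state)
    if depth == 0 or not moves:
        return static_score(state)           # frontier: static evaluation
    if maximizing:
        return max(minimax(s, depth - 1, False, children, static_score) for s in moves)
    return min(minimax(s, depth - 1, True, children, static_score) for s in moves)

# Toy game: states are numbers, each state n branches into 2n and 2n+1,
# and the static score is just the state value itself.
children = lambda n: [2 * n, 2 * n + 1] if n < 8 else []

# From state 1, pick the move whose 2-ply minimax value is highest.
best = max(children(1), key=lambda s: minimax(s, 2, False, children, lambda n: n))
print(best)
```

        The expensive knobs are exactly the ones Ingvar names: search depth and the quality of the static scoring function.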

        Deep learning networks seem to me, from the descriptions I've seen, to be essentially a mechanised gut feel.

        1. Decius says:

          With a good chess AI, you can filter out outcomes based on your choices.

          1. Ingvar says:

            Speaking of deep-learning neural networks for game AI… this article crossed my radar yesterday, I believe. It may be of some interest (as may the paper in Nature that the article links to).

      2. Decius says:

        “Scoring a game state” is done correctly if you are more likely to win game states with higher scores, and wrong if there exist two states scored such that you are more likely to win the lower scored game than the higher scored one.

        You can cheat a little bit for AI purposes if “turn count” or some other variable is outside of manipulation; games with a turn count of 1500 don't have to be comparable to games with a turn count of 10, nor do game states with different worldgen parameters. It still needs to be able to score states that have minor differences, like which specialization each city has, with very high accuracy and precision.

  4. Ninety-Three says:

    Speaking of the business case for it, I think you touched lightly on another reason it’s not likely to happen. You point out that A: Games are still sold on claims of better AI and B: You’re not seeing better AI.

    That indicates that either better AI is invisible, or they're just lying about their AI being better. If better AI is invisible, then obviously they don't want to spend work to accomplish it. If they've been lying about better AI, then the amount of time it's been going on proves that you can get away with lying about better AI, so why bother putting in the actual work?

    In either case, it seems like you’re spending programmer-hours on a feature for no benefit.

    1. Felblood says:

      Um.. you’re overlooking the key possibility space where publishers are bragging about their awesome AI, but not actually giving their developers any money to spend developing their AI.

      J Random Idiot may find that better AI is “invisible” but hardcore players can and will notice.

      Even in a game that doesn’t require a lot of good AI, even a marginal increase in the intelligence/unpredictability of an AI enemy can be really palpable, to a player who knows the game well.

      I remember when Half-life 1 came out and we finally had AI mooks who would try to pin you down and flank you, and everyone was talking about how great AI was going to be in The Future. Almost 2 decades later, nobody even goes to the effort to make AI as good as that. Even Half-life 2 mooks generally just run at you and shoot, unless there is a specific level script driving them.

      The Serious Sam model won out in the end, and I could probably count the games I’ve played that bucked that trend on my fingers. (Warframe, Halo, SWAT 4, GalCiv4 and um… hmm…. I guess some of the Rainbow Six games, but then they decided they wanted to be Serious Sam too.)

      1. guy says:

        Speaking from experience with plopping down unscripted Combine, the HL 2 AI is quite capable of more sophisticated tactics than that without help.

      2. Anorak says:

        I remember reading something about how the Half-Life soldier AI was actually not that smart, but it gave the illusion of being smart by having the soldiers trigger voice lines like “flank him!”.
        This adds to the verisimilitude without actually being real, because they never did flank you.

        1. Decius says:

          The cover-based shooter helps the AI out by telling it explicitly “this is cover, and it faces this direction”.

          Figuring that out without help is a hard AI problem.

        2. Felblood says:

          –but they did flank. They just didn't actually need to communicate these desires, or do it in response to the barks of their fellow soldiers.

          The HL1 AI controls all the soldiers in a given squad as a coordinated hive mind, and any scripts where soldiers shout orders at each other are made to help conceal this fact.

          The hive mind generally divides each squad up into 3 elements. 1 group will always consist of ranged attackers, who try to fire from a distance if they have a line of sight, or chase you if they lose sight of you. The two additional groups will usually have melee weapons, SMGs or shotguns, and they will attempt to come at you from the sides.

          Most players will, consciously or not, dive behind some terrain obstacle, and then counter rush one of the two flanking groups, so as to not get caught between all 3 at once. Success against large groups of ranged foes in HL1 is largely a matter of defeating them in detail.

          HL2’s metrocops generally do not seem to realize that shotguns and SMGs require different tactics than automatic rifles, and the more open yet linear environments in HL2 only make this more apparent.

      3. Sabrdance (MatthewH) says:

        I wonder if we'd notice better AI. At present, or at least a decade ago, programmers had lousy AI and they cheated to make up the difference. Now AI is better, so it seems reasonable to also step back the cheats. In the old system, if you did something well outside the realm of normal behavior (sequence breaking in Half-Life, for example) you'd see the AI's failures, because you'd catch it relying on cheats.

        In the new system, you wouldn’t be able to catch the AI out of position, because it is not relying on the cheats. But other than that, the system would look exactly like it always has.

        And how often did you really catch the old AI cheating?

    2. Daemian Lucifer says:

      Ah, but worse AI most definitely IS visible. So if your game has worse AI than a game from a previous generation, you will get flak for it. It doesn't matter if the AI is actually the same and it's the environment that got more complex. It doesn't help that if your competitors do make a better AI, you'll have to keep up with them as well.

    3. Ant says:

      There is better AI today than yesterday. For example, in RTS games: with another person I can defeat 6 AI opponents at the max difficulty setting in Starcraft and Age of Empires 2, as long as they don't rush too hard (keeping in mind that at max difficulty the Age of Empires 2 AI gets enough of an advantage that, with those bonuses, I think I could easily win against 2 people at my level). I can't yet defeat the most difficult non-cheating AI in Starcraft 2 one-on-one, and the new AI in Age of Empires 2 provides a far harder challenge without cheating at all (I wouldn't reliably win a 6-on-2 against it).

      On the FPS side, the AI in Far Cry 3 is definitely better than the one in FEAR: different AI for the animals and the soldiers (with behavior adapted for sneaking for the animals, sniping for riflemen, and straight charging for berserkers), which provides a lot of different reactions (spotting a fire, bunkering down after sounding the alarm, taking cover, flanking…).
      You simply don't notice it because the game helps you enormously (quasi-invisible in the bushes, wall-hacking when you tag someone with the camera, a lot of health…), and because a lot of the AI's goal is to make the game more fun, not harder.

  5. Zak McKracken says:

    Hmm… interesting… I'm working with Response Surface Methods (RSM) in engineering, and most of them work like this: train your model with whatever (expensively generated) data you have. Depending on how much that is, it can take ages and consume extreme amounts of RAM. The resulting model can then be evaluated cheaply and extremely quickly, and it's fairly small.

    So … I’m by no means an AI expert, neither for games nor for deep learning networks, but until someone who knows better comes along, I’ll just pretend I knew something:

    Neural networks work (kind of) similarly to some RSMs (as far as I know…*): you have a small-ish network, where at every node the output is related to the input according to some simple function, with a coefficient or two. The complexity happens when you try to work out which nodes need to feed into which other nodes (several into one, one into several), and how those inputs should be weighted. But that complexity is still nothing compared to the effort needed to train the network: if you show it one input and the output that goes with it, the solution is trivial, but you need to show it a number of inputs plus corresponding outputs that is some multiple of the number of variables in the system; the number of variables scales at least exponentially with the number of neurons, and the number of neurons dictates how many different things the AI is capable of. The network then tries to find coefficients and internal weightings to (approximately) match all of the inputs to all the outputs. That requires some iteration for each input/output couple.

    => At least for discrete round-based games, there aren’t too many inputs and outputs so I think it should be quite possible to create several neural network models which impersonate a pro, a mediocre, a slow and strategic, aggressive, passive … player. You’d train them on games between humans at the appropriate level, and that should work. You might even train some extra-deep networks on pro-AI-vs human (or AI-AI) games, asking the AI to emulate whoever wins, and so on, to train a superhuman AI.

    … how well that works, though, is entirely dependent on the complexity of the game and whether that can be transported into a learning network, or whether the deepest thing you can get to run on the average PC would be the equivalent of an image interpretation network that sees dogs everywhere. And I’ve no idea at what point on that scale ordinary games are these days …

    *kind of similar: radial basis function RSMs are equivalent to a one-layer neural network. I kinda know how they work. That’s as far as I’ve actually gone.
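
    A caricature of the training process described above, with the “network” shrunk to a single coefficient fitted by gradient descent; the data, target function, and learning rate are all invented for illustration:

```python
# Iteratively nudge a model's coefficient so its outputs match known
# input/output pairs. The "network" here is one linear neuron.

import random

# Training data: hypothetical (input, target) pairs generated by y = 3x.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = random.uniform(-1, 1)        # the one coefficient we are learning
lr = 0.01                        # learning rate

for _ in range(500):             # the expensive part: many passes over the data
    for x, y in data:
        pred = w * x
        w -= lr * 2 * (pred - y) * x   # gradient of squared error w.r.t. w

print(round(w, 2))  # converges near 3.0
```

    Real deep networks have millions of such coefficients and need correspondingly enormous amounts of data and iteration, which is where the expense Zak describes comes from.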

  6. Bruno M. Torres says:

    AI discussion? Insert mandatory Quake 3 Arena bots urban legend.

    Disclaimer: Very amusing, but sadly false.

    1. Daemian Lucifer says:

      Heh, funny. But much funnier is when AIs actually spar with each other:

      http://www.youtube.com/watch?v=vphmJEpLXU0

      1. Andrew says:

        The male avatar on the right reminds me of John Hodgman in the “I’m a Mac, I’m a PC” adverts, and I’m wondering if that’s intentional and if the female avatar on the left is meant to reflect anyone specific.

      2. djw says:

        How can you relieve your thirst with females?

        There is a question for the ages.

  7. mwchase says:

    The basic way to evaluate how complex a neural network is going to be to evaluate is to consider how many inputs it has, how sophisticated you want its internal state to be, and how complicated its output needs to be. To a first approximation, evaluating a neural net is just a bunch of (large) matrix multiplications, interspersed with element-wise functions, like tanh.

    The secret to a good network is finding the right values to put in the matrices. That’s what all of the heavy lifting is for.

    An actual network for object recognition can fit comfortably on a phone. It may not be very impressive, but, well, this is beta software I’m talking about.
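The "matrix multiplications interspersed with element-wise functions" view can be sketched in a few lines of plain Python. The weights here are made up; a real network just has much bigger matrices.

```python
import math

def matvec(m, v):
    # One matrix-vector multiplication: the core cost of evaluating a net.
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def forward(layers, v):
    # Each layer: multiply by its weight matrix, then apply tanh element-wise.
    for m in layers:
        v = [math.tanh(y) for y in matvec(m, v)]
    return v

# Two tiny layers with invented weights.
layers = [
    [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]],  # 2 inputs -> 3 hidden
    [[0.7, -0.5, 0.2]],                      # 3 hidden -> 1 output
]
out = forward(layers, [1.0, 0.5])
print(out)
```

Finding the right numbers to put in `layers` is the training problem; evaluating them, as above, is cheap enough for a phone.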

  8. Tizzy says:

    A counterpoint to a specific claim in Shamus’s article: no, the AI would not need to be connected to a server farm. All it needs is to have been sufficiently trained.

    You don’t need to be working in AI to realize this. Anyone remember those Q20 pocket devices that came out around 2004-2005 and would play 20 questions with you? Those would usually guess within the 20 question span, even though there was no strategy to their line of questioning. No cheating either.

    Trained online, but worked offline. It won’t learn new things, but it’s pretty damn efficient if it’s been trained well. Surely enough for most basic game uses, where it mustn’t be too good anyway.

    1. The problem with this is that the complexity of the task is orders of magnitude greater. If it “trains” online, that means it creates the algorithms online and there’s no means to MODIFY them when the system is OFFLINE. So any reasonably intelligent player will be able to play the pants off of it simply by observing its algorithmic behaviors and adjusting. It’ll seem stupid.

      What the AI needs to do is to be able to MODIFY its “training” on the fly to respond to CHANGES in the conditions instigated by the player. For this it needs to be able to access the full database of information to enable it to locate and make use of modifications to its algorithms in order to continue to act in a reasonable way.

      In theory, you could just upload the entire database as part of the game, but I suspect it’d be HUGE–probably bigger than the game. And it probably wouldn’t be noticeably superior to existing AI.

      1. Fists says:

        Would just have to be updated regularly to curb exploitative behaviour, same as mmos and games like Diablo with dynamic ‘win’ conditions, it takes time to uncover the exploits so as long as patches come before the system degenerates it should be fine. The AI doesn’t need the entirety of results available to analyse against the game state at every step, just have previous results incorporated into the decision making algorithms. Leave the network constantly learning then push updates periodically or as holes become evident. Would need some sort of built in variance/randomisation between different choices to actually be bringing in comparable data but that would be a good thing for giving the player variety.

        Not sure if that would really count as a deep network AI but would bring their quality up either way.

        1. Tizzy says:

          Yes indeed! You are getting to the heart of the point that I was trying to make yesterday, which is that your AI does not need to have constant access to the full data (which is way too large anyway to be handled in real time).

          The training takes place offline in the big server farms, and then once it is implemented, you don’t need to see the data any more. And, unlike in my Q20 example, you could push regular updates to the video game AI to prevent players from exploiting its blind spots. (And more generally, allow the AI to keep up with the meta-game, à la Starcraft.)

          Of course, the real problem in my example is that the point of the Q20 toy is to beat you relentlessly, while the point of a game AI is to offer an appropriate challenge. But the approach remains quite viable, and the problem now shifts to finding the appropriate objective function to train on. Not an easy task by any means, but once it happens, expect AI to progress by leaps and bounds.

          Also, expect your notion of AI to change drastically: if we ever figure out a way to capture what a good challenge means, then the AI’s role will become to customize the player’s experience to their strengths and weaknesses. No one will ever play the same game as the next person, probably for the best. Game devs are already paying an enormous amount of attention to what players do in their games, and are always looking for ways to make us play longer.

          1. Fists says:

            To stop the AI becoming OP you’d just set the desired outcome to ‘wins 30%’ or whatever a perceived ‘fair’ rate is, rather than always wins.

            As for there being too many variables, the data would probably be too murky to set it free on all available game modes straight away, but we do already have AI working in this environment so the ground work is there for what parameters they watch and respond to. Still likely to be slow as with so many moves it will take large data sets to actually prove whether a technique had any influence on the outcome.

      2. Zak McKracken says:

        If you think of an AI as some sort of look-up table of “if this is the situation then do that”, you’re right. But neural networks don’t quite work this way. It’s more like “Doing A in similar situations to the current one is correlated slightly more with winning than doing B”. This means you’ll need a lot more data to train on than you would for a look-up table but evaluating a situation would not need all of that data but just the statistical regression you’ve derived from it. Some of the main trends would just be “shooting enemy units is good, enemy shooting my own units is bad”, with some quantifiers for how bad and good those are compared to each other. The AI can then pretty much live in the moment and always do whatever is most strongly correlated with winning in a given situation. Depending on how complex your network is, this may result in incredibly lifelike behaviour.

        … and depending on how complex the game state and the required response is it does eventually become unsustainably expensive to train even on supercomputers. If I had to guess, that would happen way before it becomes too expensive to evaluate on an average PC or too large to distribute via a game update (heck, I just updated GW2 and SC2 for 5+ GB each in December, and in January they had another one…)
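The "do A if it correlates with winning more than B" idea can be sketched as a replay-mined statistics table. Situations, actions, and the log entries here are all invented stand-ins; a real system would describe game state far more richly.

```python
from collections import defaultdict

# Logged (situation, action, won?) triples, e.g. mined from replays.
log = [
    ("enemy_in_range", "shoot", True),
    ("enemy_in_range", "shoot", True),
    ("enemy_in_range", "retreat", False),
    ("low_health", "retreat", True),
    ("low_health", "shoot", False),
]

wins = defaultdict(lambda: [0, 0])  # (situation, action) -> [wins, games]
for situation, action, won in log:
    wins[(situation, action)][1] += 1
    if won:
        wins[(situation, action)][0] += 1

def best_action(situation, actions):
    # "Live in the moment": pick whatever correlates most with winning.
    def win_rate(a):
        w, n = wins[(situation, a)]
        return w / n if n else 0.0
    return max(actions, key=win_rate)

print(best_action("enemy_in_range", ["shoot", "retreat"]))  # shoot
print(best_action("low_health", ["shoot", "retreat"]))      # retreat
```

The regression a real network learns generalizes across *similar* situations instead of requiring exact matches like this table does, which is the whole point of using one.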

      3. Decius says:

        A well trained AI will already be able to react to the strategies that it was trained on, and an online model won’t be able to react to a novel strategy either.

        1. Felblood says:

          No AI will be able to react to a truly novel stratagem, however a deep network AI would be able to adapt to those situations where a once-novel stratagem has become the new standard strategy. In effect, it allows the AI to keep up with the current meta, and adapt its strategies to combat a broader range of tactics than what was understood during beta testing.

          1. Decius says:

            The holy grail of AI would be one that could predict the novel strategy and preact to it in a manner that made it useless.

            1. Felblood says:

              You can’t–

              Well, okay. I guess if your AI was robust enough, it could simulate playing a few thousand games against itself, and use a genetic algorithm to sort potentially viable cheese from a white noise of random moves, and then try to develop counters for those cheeses…

            2. Zak McKracken says:

              The holy grail would be to have an AI which can do this immediately following a rule change, without additional training. For deep learning networks this means you’d have to train them on a very large number of rulesets which would then allow them to interpolate to whatever ruleset they have in front of them.

              => I don’t think the current breed of AIs could be made to do this. You’d need to have something which has an actual (simplified?) model of the game itself, combined with an appreciation of what the opponent might be doing… not anytime soon I’d say, but probably possible.

              1. Decius says:

                Not even humans can react to rule changes without thinking about it.

    2. Xeorm says:

      The issue is that the data would likely be too big to reliably grab from on a user’s computer. A Q20 game is a really easy AI job, as the input and output along the decision graph is fairly simple. Binary graphs are easy to search across, which is roughly what the game is. Most of the data is likely spent finding which answers are common to place at the top, along with filling out the tree, and you’re doing pretty well.

      In a video game, you’re exploring a very strange tree. There’s a lot of data to process, and the developer can’t know which bits of data are important until running through the training algorithm, and it’s likely much of the data would be important all throughout. It’s easy if you’re looking at simple heuristics like calculating the value of a city location through tiles, but problematic if you take into account all the other factors a player might.

      The issue with AI graphs is that they tend to get big fast. Hence the usage of heuristics to make them manageable, which can often lead to AI stupidity. And that’s before needing to worry about all the other aspects of AI besides making it win the game. We don’t only want the AI to play against us, but also play in ways that are interesting. How do you, say, make a religious AI play like a religious AI, even though such moves are supremely suboptimal, but still make them challenging? Shooters are the best example though. Making them gods is easy, making them challenging is difficult, because the optimal play is the wrong one.
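The Q20-style binary tree being discussed is small enough to sketch in miniature. The questions, answers, and tree shape are invented; the point is only that walking a trained tree needs no server connection at all.

```python
# A toy 20-questions tree: each internal node is a yes/no question,
# each leaf is a guess.
tree = ("Is it alive?",
        ("Is it a mammal?",
         ("Does it bark?", "dog", "cat"),
         "fish"),
        ("Is it electronic?", "phone", "rock"))

def play(node, answer):
    # answer(question) -> True/False stands in for asking the player.
    while isinstance(node, tuple):
        question, yes, no = node
        node = yes if answer(question) else no
    return node

# Pretend the player is thinking of a dog.
facts = {"Is it alive?": True, "Is it a mammal?": True, "Does it bark?": True}
print(play(tree, lambda q: facts.get(q, False)))  # dog
```

Building the tree (which questions split the population of answers best) is the part that needs the big data set; the device only ships the finished tree.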

      1. Tizzy says:

        There is no doubt that the tree is easier to parse in the Q20 game. But the point of this example is that what was learned from that could be packaged in a trashy piece of disposable crap, totally offline.

        If you have a video game running on a decent computer, that you can connect to the internet at least occasionally for updates, and your AI is developed using a reasonably-sized server farm (which you can rent from Amazon, e.g., no need for a large investment in hardware), then you ought to be able to do a hell of a lot better than a Q20 game.

        Also, the whole point of using large amounts of game data is precisely to reduce the game space. Instead of having an AI that’s thought of everything, concentrate on one that performs well on the typical player actions.

        I discussed this above already: the biggest challenge to this approach must be choosing an objective function that appropriately represents the challenge that you’re going for. I think it’s an extremely hard problem, especially because it cannot be solved incrementally. But if any progress is ever made on this, expect an AI revolution, because the methods are well known, the computing power is probably here already, and the heuristics can be worked on once you know what you’re trying to optimize for.

  9. It occurs to me that there’s very little real value in making the AI behave like a human player. What’s really wanted is not a faux human but a system that’s capable of *beating* a human while still following the same “rules” as that human. And, at the same time, it has to be possible to beat it as a human.

    I generally dislike strategy games (and I almost always win them by exploiting AI weaknesses rather than by playing the game well–put me up against a human player and I might as well concede straight away), but in general the AI is never so dumb as when it’s trying to “win” the game, and never so smart as when it’s just maliciously screwing with the player’s ability to win. AI doesn’t “win” games by outsmarting humans–it wins by getting the human player into a state where they CAN’T win. And an AI that’s designed to *make your life difficult* (by, say, nuking your resources, or sending its units to kamikaze your expensive units, or making you expend your resources defending stuff that you ought to be able to forget about) will feel far more dangerous than one playing “properly”. AI acting batshit insane is not in itself a bad thing–in fact, if history teaches us anything, it’s that batshit insanity may be an effective strategy far more often than not.

    1. Felblood says:

      Have you tried some of the Rock-Paper-Scissors bots out on the web?

      Take a neural net that looks back over the previous 100 rounds against a given player, and let it play against hundreds of bored randos from around the web.

      The key that makes it all work is that it is not a cleverbot, regurgitating the strategies it has seen in the past. It does not play like a human being. It plays like an AI designed to consistently defeat a human of average skill, more often than it loses.

      The way it feels when you play it is somewhere between playing against a blind child picking moves at random and playing against a wizard who can see the future.

      Finally, you realize that those random moves at the start weren’t just to feel out your playstyle, but to lull you into a false sense of security, and you will never have a win streak like that again. It’s… not exactly fun.
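A bot along these lines can be sketched in a few lines of Python. This is a far simpler predictor than the web bots described (it only conditions on the opponent's previous throw, not 100 rounds of history), but the principle is the same: model the human, then counter the prediction.

```python
from collections import Counter, defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSBot:
    """Predicts the opponent's next throw from what has followed
    their previous throw so far in the match, then counters it."""
    def __init__(self):
        self.follow = defaultdict(Counter)  # last move -> what came next
        self.last = None

    def throw(self):
        if self.last is None or not self.follow[self.last]:
            return "rock"  # no data yet; a real bot would randomize
        predicted = self.follow[self.last].most_common(1)[0][0]
        return BEATS[predicted]

    def observe(self, opponent_move):
        if self.last is not None:
            self.follow[self.last][opponent_move] += 1
        self.last = opponent_move

bot = RPSBot()
moves = []
for _ in range(10):
    moves.append(bot.throw())
    bot.observe("rock")  # opponent stubbornly throws rock every time
print(moves[-1])  # paper — the bot has locked onto the pattern
```

Against a predictable opponent it looks psychic; against pure randomness it can do no better than chance, which is exactly the blind-child-or-wizard feel described above.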

      1. Decius says:

        The first few moves were to feel you out, the next few moves were to figure out how many levels deep you are trying to outsmart the bot, and every move after that is adding to the predictive model of how deep you are trying to anticipate the bot.

    2. Phill says:

      I’m not sure I can agree with that. I see plenty of complaints about game AI being simultaneously incredibly stupid and outrageously cheaty.

      For example, take the game Worms (and its 10,000 sequels). It is pretty easy to program the worm AI to make an accurate bazooka or grenade shot. It could hit your worms from across the entire map with 100% accuracy every time if it wanted. On the other hand, it is very hard to program the AI to usefully understand the terrain to know where there are safe places to stand, which places are dangerous to stand, and how to take a shot from places where you can then get into a good hiding place.

      Meanwhile, humans have the opposite skill set – we have all this mental processing power conveniently provided for us automatically that makes 2d image analysis absolutely trivial, so we can find safe spots and understand movement easily, but have a hard time judging shots across the width of the map and frequently miss entirely.

      So naturally you see complaints about the AI pulling off ‘cheesy’ accurate long range shots while being so dumb about positioning that it is quite easy to beat. And if it is good enough to reliably beat people, the complaints rise a thousand fold (whilst also complaining that it is dumb).

      The issue is entirely that the AI has a very different skill set to a human player – calculations of long range ballistics are trivial, evaluating a 2d terrain is fiendishly difficult. The net result is that people don’t enjoy the challenge it presents because when it does something good they feel it is unfair, and when it does something dumb, the player feels like they only won because the AI was stupid.

      What people actually seem to want is something that plays much more like a human player, with the same strengths and weaknesses, which is much harder to do.

      (As an aside, I suspect that most people who say they want a challenging AI actually don’t – they want something that presents the impression of a challenging AI while not being good enough to beat the human. They want an AI that looks like it is being clever, so that when they beat it they have it confirmed that they are even more clever…)

      1. Well, I was speaking specifically of strategy AI. Yes, as Shamus says in his article, programming a combat AI to be perfect is trivial and feels incredibly cheap. That’s not the point of the exercise. You’re *never* going to get a computer AI to play a strategy game “like” a human. But you can make it MEAN enough (without cheating) that it’s fun for a human to play against it by giving it screwy priorities that are aimed at making it harder for the human player. Things like:

        1. When attacked, focus on destroying as many of the human player’s units as possible, instead of trying to do something abstract like “defend the base”.
        2. When attacking, spread out (even if this means losing units) and locate/preferentially destroy all resource-generating units and buildings.

        This results in weird behavior but it also makes it very hard for the human player to win unless they play smart. But it’s easy for the computer to “play” this way because it reduces an abstraction (attack and defend) into concrete actions. It’s also very easy to “tune” the difficulty this way, because you simply set the AI to be more or less aggressive. You can also have multiple different AI’s with slightly different behaviors–one preferentially targets scout/detection units, for instance. This can lead to human-seeming behaviors because it’ll seem like the scout-killer is “working with” the resource-killer. It also results in emergent behaviors as things change–if the scout-killer is further away than the resource-killer, for instance, you may have a slightly different sort of situation because the resource-killer is likely to attack you first, so maybe what you have to do is to put out a few resource buildings to act as “bait”. . .

        Put 5 or 6 AI’s on a board with different priorities and now you have a situation so complex that even the very best human player won’t be able to “play the AI” any more except on a situation-by-situation basis. Some basic strategies will emerge but all will have faults that the AI will seem to “exploit” completely accidentally.

        Ultimately this comes down to OODA loops. (look it up) If you want your AI to be “hard”, you need to get the AI’s “loop” INSIDE of the player’s loop. That means that the AI is acting slightly faster than the player can keep abreast of. It doesn’t have to be a genius, it just has to be FASTER at executing loops and it will FEEL like a genius.
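Those concrete priorities can be sketched as plain sort keys: each AI "personality" is just a different ordering over enemy units. The unit fields and values here are invented for illustration.

```python
# Each AI personality is a priority function over visible enemy units.
units = [
    {"name": "scout", "type": "scout", "value": 1, "dist": 3},
    {"name": "harvester", "type": "resource", "value": 5, "dist": 6},
    {"name": "tank", "type": "combat", "value": 8, "dist": 2},
]

def scout_killer(u):
    # Preferentially targets detection units, then whatever is closest.
    return (u["type"] != "scout", u["dist"])

def resource_killer(u):
    # Preferentially targets resource-generators, then the most valuable.
    return (u["type"] != "resource", -u["value"])

print(min(units, key=scout_killer)["name"])     # scout
print(min(units, key=resource_killer)["name"])  # harvester
```

Put several of these on the same map and the "cooperation" between them is entirely emergent, exactly as described: no abstraction, just concrete per-unit priorities.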

        I do agree, though, that usually when people complain that they want a “challenge”, they want nothing of the sort. The reason why I think this is that the people I know who make this complaint in DDO always play the most disgusting, broken, Flavor of the Month builds. If you only play easy cheesy, of course the game’s not going to be a “challenge”. And they scream when the cheese gets toned down by the devs, too.

        What they really want is for the game to make them feel 1337 when they beat it, like they belong to an exclusive members-only club. :P

        1. Phill says:

          Well, I was speaking specifically of strategy AI. Yes, as Shamus says in his article, programming a combat AI to be perfect is trivial and feels incredibly cheap. That’s not the point of the exercise. You’re *never* going to get a computer AI to play a strategy game “like” a human. But you can make it MEAN enough (without cheating) that it’s fun for a human to play against it by giving it screwy priorities that are aimed at making it harder for the human player.

          Okay, I see what you mean. You’re probably right.

          Strategy, almost by definition, is the higher level pattern recognition and abstract reasoning that humans are so much better at than computer algorithms – we’re talking about the realm where the AI has no areas where it performs better than the human, so ‘humanising’ it in the sense of limiting its strengths in the areas where it is better doesn’t come up (at least, I’ve spent a while thinking about it and not come up with anything an AI can do better strategy wise).

          And I agree that there, an AI that doesn’t play with the same goals as a human player, but instead plays to deliberately provide a challenge rather than to do its best to win might be the best option.

          1. Felblood says:

            I think there is a balance point where things turn the other way.

            For example, look at Shogun 2. What is the leading complaint about its AI?

            AI “players” will break character, to make the game harder. If the player is being too successful, AI characters that supposedly represent self-interested samurai lords or even anti-samurai religious factions will suddenly forget their own agendas and ally with the Ashikaga Shogunate, if the player attacks Ashikaga, even if they supposedly “like” the player more than Ashikaga.

            That’s a cardinal sin for games ostensibly about diplomacy.

            We’re supposed to think of these other lords as rivals who might have a shot at taking the throne before us, but all they really care about is keeping whichever lord the player controls off of the throne, and it diminishes and depreciates them as both foes and tools.

    3. Tizzy says:

      The AI-as-human-approximation might be good, even if only in specialized circles. I imagine that, if you could create a good enough one, professional Starcraft players might be interested.

      Also: humans tend to anthropomorphize objects and animals. We like to think of things we interact with as human. So a human-approximating AI would be much more satisfying to play against than a good-but-clearly-alien AI.

    4. Zak McKracken says:

      I guess it depends on what you would use the AI for: Is it to guide mooks in an action shooter, or as a sparring partner in a strategy game? Or for NPCs in a simulation?

      In the first case you just want good pathfinding and response to player actions, but nothing clever because they’re still supposed to be gunfodder.

      In the second case you may or may not want to have an opponent who behaves like a human. Playing against a non-human opponent might lead you to some cool strategies which also work on humans. But you also might want to simulate a human opponent with specific biases etc..

      In the last case, you would mostly want realistic human behaviour, except when it starts to conflict with the boundaries of the simulation in a bad way…

      Right now, I think most people writing AIs for games don’t go for realistic human behaviour because behaving human and playing well at the same time is hard to implement, and it’s hard enough to have either non-cheating AI that plays well or a human-simulation that does not behave crazy sooner or later, or starts to conflict in an obvious way with the boundaries of the game world. I’m kind of thinking that the deep learning methods should be an option but as I said before, that’s very much a question of the games’ complexity.

    5. Decius says:

      All of the strategies you describe seem like good multiplayer strategies to adopt. Why don’t humans use them? (Do they? Is it easy but not simple to counter those strategies, so at high level human play they are ineffective?)

  10. Syal says:

    I feel like Chess AIs have to be mentioned somewhere. It’s entirely possible to make an AI that just won’t lose against a human, as long as you have enough time to dig into the mechanics.

    1. Ninety-Three says:

      Chess is a bad point of comparison for videogames though, because the average chess game lasts about 40 moves, and each of those moves consists of picking one of at most a few dozen possibilities. 40^40 may seem like a mindboggling possibility space, but it’s nothing compared to Civ where games last hundreds of turns and each turn has four chances to pick one of four things, for 4^4 possible moves per turn, for a possibility space of 256^200.

      Chess bots can’t brute-force-solve an entire game, but they can look at every possible game-state ten moves into the future, which is where a lot of their power comes from, and that doesn’t translate well to a game with more options. God forbid you get into a realtime game where a “turn” is essentially one frame, for 1800 “turns” per minute of gameplay.
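The arithmetic in the comparison above is easy to check with Python's arbitrary-precision integers (the branching factors themselves are the commenter's rough estimates, not exact figures):

```python
# Rough sizes of the two game trees described above.
chess = 40 ** 40        # ~40 moves, ~40 choices each
civ = (4 ** 4) ** 200   # 256 possible turns, 200 turns

# Count decimal digits to compare the magnitudes.
print(len(str(chess)))  # 65 digits
print(len(str(civ)))    # 482 digits
```

So the Civ-style space isn't just bigger, it's bigger by over 400 orders of magnitude, which is why lookahead techniques that work for chess don't transplant.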

    2. Lachlan the Mad says:

      Another odd problem with chessbots; in chess, if you assume that the opposing player is going to make the best possible move available to them, and make your own move to counteract that, then you’re probably going to come up on top even if the opponent makes a move which is not as good as the best possible move. That is not true of a game where “best possible move” is an ambiguous or extremely broad situation.

      1. Syal says:

        I’d need an example of an ambiguous situation, because the “best” move in chess is calculated based on piece power and board position and I can’t think of any games where it would be much more ambiguous to calculate.

        1. Decius says:

          The “best move” in Galactic Civilizations is a lot harder to calculate.

          Even if you try to calculate the chess score Properly, rather than using simple heuristics.

      2. Lanthanide says:

        Depends what you mean by the ‘best’ move.

        It’s quite conceivable that there may be a move to make now, that means the player is in a weak position for the next 4 moves, but on the 5th move they can check-mate you.

        If your computer ignores the immediate weak move on the assumption the player will never do it, they can easily fall into the trap and be check-mated.

        I used to play a lot of Hearts online, on MSN game servers. A particular variant called Jack of Diamonds, where the Jack is worth -10 points, dramatically changes the complexity of the game.

        I became a very good player and could win most of my games. The hardest opponents were, apparently paradoxically, those who didn’t know how to play. I would learn the cards that others had in their hand, based on how they played. It’s easy to predict “because that player just played the 10 of hearts, they would have played the queen of hearts if they had it because it is a better card, therefore they don’t have the queen of hearts and it means someone else does”. After 3-4 tricks you could have a pretty good idea of what cards or ranges of cards most players would have in their hand.

        But if that player DID have the queen of hearts, but didn’t play it because they’re a noob, it would very easily throw me off.

        I remember one pretty fun game where I ended on something like -80 and had the other 3 players up around 80-90 each (you lose when you get 100 or more). I was able to dodge the queen of spades and grab the jack most times, or avoid the ploys the other players set up and let them take the jack themselves, as long as I always avoided the queen.
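The card-counting inference described here can be sketched as simple set elimination. Everything is pared down to hearts ranks 2–14 (14 = ace), and the "they would have played the better card" rule is exactly the rational-player assumption the comment says noobs break.

```python
def eliminate(possible, player, played_rank, assume_rational=True):
    """Update each player's set of possibly-held ranks after a play."""
    # The played card is in nobody's hand any more.
    for p in possible:
        possible[p].discard(played_rank)
    if assume_rational:
        # A rational player would have played a higher card if they held
        # one (toy stand-in for "would have played the queen"), so drop
        # all higher ranks for them. A noob breaks this assumption.
        possible[player] = {r for r in possible[player] if r < played_rank}
    return possible

hands = {p: set(range(2, 15)) for p in ("north", "east", "south")}
eliminate(hands, "east", 10)
print(12 in hands["east"])   # False: east "can't" hold the queen
print(12 in hands["north"])  # True: someone else still might
```

When the assumption is wrong, the tracker confidently eliminates a card the player actually holds, which is precisely how a noob "throws off" a good card counter.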

        1. Decius says:

          When people get smart enough to think about what you know, and make plays that are superficially suboptimal (like discarding the nine of hearts when they have the queen) in order to give you bad information, then you have interesting rounds of hearts.

      3. Peter H. Coffin says:

        Which gets us back to Jennifer Snow’s point upthread that ‘AI doesn’t “win” games by outsmarting humans–it wins by getting the human player into a state where they CAN’T win.’ And I’m not entirely convinced that’s not how better chess players play at least against unknown opponents: Make the move that is LEAST advantageous for the opponent. A move that leads to capturing an important piece 2 times in 3 may well NOT be better than move that forces the opponent to sacrifice a smaller thing no matter WHAT happens.

    3. Spammy says:

      The fact that Chess has been “solved” into a library of board states is what led a computer engineer named Omar Syed to create a game called Arimaa. Arimaa can be played using a chess set and pieces, but was intended to show the weaknesses of chessbots by being far more difficult for an AI to play at a human level, because an AI can’t reasonably brute-force its decisions in Arimaa, because:

      1: The starting positions are not fixed so any library of opening moves will have to cover all 64 million possibilities.
      2: Rather than moving one piece in a fixed pattern, in Arimaa each piece moves one space at a time but you get four moves to distribute among your pieces in any way that you please. The designer’s web site says that mid-game if a computer tries to look two turns ahead it can have to consider around 160 million billion possibilities.
      3: No end game databases. Arimaa can end with no pieces being captured so attempting to create a database of end-game board states is not feasible.

      That said the creators of Arimaa also challenged AI programmers to make an AI that could beat the best human players and held a competition yearly to pit the best AIs against the best humans for a $10,000 prize. Arimaa was released in 2002 and the AIs finally won in 2015.

      tl;dr Designer got fed up with AI winning chess by memorization and made a game easy for humans but impossible to memorize or brute force to challenge programmers to make an AI that could beat human champions. Took 13 years.

  11. Daemian Lucifer says:

    Thing is,this has been done already as early as civ3(maybe even before).There were mods for both civ3 and civ4(and Im sure there is one now for civ5)where the ai would learn from you as you play against it,and later you could upload your database to be merged with everyone elses(and download that big database to have a good ai on your machine as well).Of course,seeing how these are mods,they arent nearly as big as an official database implemented in the game from the start would be.And their successes werent that amazing.

  12. Jsor says:

    I’m a PhD student in AI/sequential decision making. While Google’s Deep Reinforcement Learning is definitely promising, there’s still a huge, huge problem with using “deep nets” for AI.

    The thing with Neural Networks (and almost all classifier-based AI) is that it assumes any data is “independently and identically distributed”, meaning each input is sampled from the same distribution as every other one, and the previous input you received is in no way related to the current or any future ones. This is… obviously an asinine assumption for most real data, much less games. If you wanted to be strictly “correct”, you’d need a large corpus of data and a separate neural network for every single imaginable game state, which is impractical.

    Games are more accurately modeled as Markov Decision Processes, which means for any state, s, given some action, a, there is a probability of landing in some other state s’. (The probability could be based on the opponent’s strategy, but it’s still some unknown probability).

    There is work on (imperfect) reductions from MDP learning to IID learning, and that’s a big part of what I work on, but doing so well is an open problem. The other problem is that while unsupervised learning exists, the best methods involve learning by example. In fact, one of the more promising methods we have is basically “have a tree search play thousands of simulated games, then train the neural net off of the choices the tree search makes.” Which is an okay solution, but has natural problems as a solution to an AI in a game with a changing meta. Not to mention that the tree search is time and depth limited and may not be perfect itself.
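As a toy illustration of the MDP framing above: for every state s and action a there is a distribution over next states s', and a solver backs up expected returns through those transitions. The states, probabilities, and rewards here are all invented; this is textbook value iteration, not the reduction techniques the comment describes.

```python
# (state, action) -> list of (probability, next_state, reward) outcomes.
P = {
    ("start", "attack"): [(0.6, "winning", 1.0), (0.4, "losing", -1.0)],
    ("start", "defend"): [(1.0, "start", 0.0)],
    ("winning", "attack"): [(1.0, "winning", 1.0)],
    ("winning", "defend"): [(1.0, "winning", 0.5)],
    ("losing", "attack"): [(0.2, "winning", 1.0), (0.8, "losing", -0.5)],
    ("losing", "defend"): [(1.0, "losing", -0.2)],
}
states = ["start", "winning", "losing"]
actions = ["attack", "defend"]
gamma = 0.9  # discount factor

# Value iteration: repeatedly back up expected returns until stable.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)])
                for a in actions)
         for s in states}

def policy(s):
    # Greedy action with respect to the converged values.
    return max(actions,
               key=lambda a: sum(p * (r + gamma * V[s2])
                                 for p, s2, r in P[(s, a)]))

print(policy("start"))  # attack
```

With six transitions this is trivial; the whole difficulty described in the comment is that a real game's P is enormous, unknown, and shifting with the opponent.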

    Don’t get me wrong, we could have much, much better AI than we do now. And it could even use neural nets — especially on systems with good GPUs (Neural Nets are approximately as parallelizable as graphics and rendering) — but there are a lot of natural problems with it, and when it fails it fails pretty spectacularly.

    Also, while you hear about small papers “learning genetic algorithms for Starcraft” or whatever, they’re considered “not very interesting problems” because they’re only tested on one domain. Generally publishable research endeavors to be applicable to multiple domains (games), so you probably won’t see a great Total War AI come out of a university unless it’s a fun project. It would be more likely to produce an AI that’s okay at a bunch of games, but has some major flaws that prevent it from being commercially viable.

    Finally, a big part of the problem with these AIs is that they make different mistakes than humans do. This may sound obvious, or dumb, but humans really, really notice when computers make mistakes they wouldn’t, even if the AI is on average better than the human at the task. It’s almost like an AI corollary to the uncanny valley. Which is why even if self driving cars are safer statistically, the second one crashes, or fails in a way a human wouldn’t, there will be a panicky smear campaign. A big, big part of game AI is making it operate like either an obvious game AI so people view it purely as a system, or making it feel like a much dumber human. You can’t underestimate how badly humans respond to AIs that are really good, but “feel” too artificial.

  13. Thomas says:

    Magic Duels (the new version of Magic: the Gathering’s Duels of the Planeswalkers) seems to have a real AI that not only plays against you, but also builds the decks it uses itself.

    I think the idea is that since they release new cards with every Magic block (2 blocks a year), this way they won’t need to reprogram the AI twice a year.

    It’s much less fun than the handcrafted AIs though, at least at the moment. Instead of playing against strategies with widely varying personalities, most strategies tend to be a generic hodgepodge of good cards. The old AI would play a deck of nothing but priests trying to summon a demon. The new AI would never do that, because it’s not a very strong strategy and the AI can’t measure that it’s fun.

    It does a decent enough job actually playing the cards though. Not many glaring misplays.

    There’s also a twitter account where self-learning AIs _make_ Magic cards, and it’s beginning to produce designs which are pretty good. (After a hilarious start where it gave creatures ‘tromple’ as an adaptation of trample.)

    1. Falterfire says:

      I think you’re overstating the quality of the Duels AI, especially with regards to deckbuilding.

      From what I can tell, the deckbuilding is closer to an algorithm with a few rough archetypes that it builds toward. Rather than doing any sort of meaningful thinking before building a specific deck, the AI fills a deck with this many creatures and this many spells that fit roughly this mana curve and are all in these colors.

      This is why you’re seeing a generic hodgepodge of good cards all the time instead of specific themes. It’s not that the AI decided against the demon/priest deck and in favor of the hodgepodge deck, it’s that the deckbuilding algorithm always produces generic hodgepodges.

      The gameplay AI is pretty solid, but I wonder how dynamic it really is. Unfortunately I don’t have any particular insight into the workings of the gameplay AI, so I can’t say for certain how complex it really is.

      I do have serious doubts that they could port it to a larger base of cards (Say, Magic Online) without it falling apart though.

      1. Warstrike says:

        Although the game AI will usually pick a line of 4 over an L-shape of 5 gems that would give it an extra turn. Not sure what’s up with that.

    2. Spammy says:

      Microprose made a Magic: the Gathering game back in the 90s. The AI can be hilariously bad for not understanding what the abilities on cards mean, aside from damage. When it comes to just playing creatures and attacking with them the AI can be alright. And with the right deck built around that you can have a challenge.

      But then the AI can play Time Elemental. If Time Elemental attacks or blocks an attack, you sacrifice it and lose 5 life, which is a quarter of your starting life total. The AI will pretty much always block with Time Elemental.

      What’s even worse is that a human would know that Time Elemental can be used to return an attacking creature to its owner’s hand, meaning that Time Elemental never has to block a creature. The AI blocks with it anyway, because it can’t read anything but power, toughness, and keywords.

  14. Mephane says:

    In theory, you could make an AI that just tries out lots of randomly chosen actions, evaluates how the game may continue afterwards, and calculates how likely each available move is to lead to a win for the AI. Repeat for every move (turn-based) or frame (real-time). It’d require enormous amounts of memory and CPU power and a very good RNG, but given all that, it should be indistinguishable from an AI that truly thinks, learns and understands. Almost a bit like a philosophical zombie.
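    A bare-bones sketch of that evaluate-by-random-playout idea, using a trivial take-away game as a stand-in (the game and all names here are just for illustration):

```python
import random

# Toy game: players alternate removing 1-3 tokens from a pile; whoever
# takes the last token wins.  The game is a stand-in -- the point is the
# evaluate-each-move-by-random-playouts loop.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def random_playout(pile, to_move_is_me, rng):
    """Play random moves to the end; return True if *I* win."""
    while True:
        if pile == 0:
            return not to_move_is_me  # previous mover took the last token
        pile -= rng.choice(legal_moves(pile))
        to_move_is_me = not to_move_is_me

def choose_move(pile, playouts=500, rng=random):
    """Estimate each move's win rate over random playouts; pick the best."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        # After my move it's the opponent's turn, hence to_move_is_me=False.
        wins = sum(random_playout(pile - move, False, rng)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

    The `playouts` budget is exactly the “number of dice rolls” knob discussed below: more playouts, better estimates, stronger (and slower) play.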

    Now I have no idea how much memory and CPU power would actually be necessary for that; some comment above mentioned chess AI doesn’t brute-force an entire game to the finish, so I suppose we are still far away from having computers with the capabilities to just brute-force any situation.

    ————————–
    On an entirely different note, I personally loathe cheating AI. Whether they start with free extra stuff, can see your cards, get things cheaper, or have larger or even infinite quantities of resources (e.g. infinite ammo), etc. I guess it is okay when the cheating is subtle, like a bit more health or a bit more damage, but when the AI can do stuff utterly impossible for the player, it destroys the illusion that you are fighting an actual opponent.

    1. Daemian Lucifer says:

      Brute forcing is definitely out of the question. You can do it at the start, when basically everything follows predetermined paths with very little deviation. But further into the game it balloons extremely rapidly. Just check out this video that explains why you can’t brute force chess, and imagine that instead of just 64 tiles the board you are given had 512*512 (or even way more) tiles:

      http://www.youtube.com/watch?v=Km024eldY1A

      1. Mephane says:

        Well, part of my idea of a Monte-Carlo approach is that the AI would not check every single possible move, only a random selection of all possibilities. Maybe the number of “dice rolls” could even become part of the difficulty setting: a harder AI “thinks longer” and thus has a higher chance of finding a better move.

        1. Daemian Lucifer says:

          For this to work you would either need a plethora of random moves, because too few would lead to some incredibly stupid choices and idiotic behavior, or you’d have to make a good algorithm to remove the stupid moves, at which point you’ve stopped using a brute force method.

          1. Decius says:

            You’d need only a handful of randomly generated moves at each point. The majority of options considered should be generated through conventional programming.

          2. Mephane says:

            Well, the question is how easy or hard the too-stupid moves are to detect. I honestly don’t know, but maybe the seriously silly ones are easier to detect, so the more expensive calculations wouldn’t be run on every attempt.

            Maybe one would have to approach it entirely differently, but I still see some merit in a Monte-Carlo approach. It would create lots of variance, where an AI that attempts to find only the most optimal move could end up either too predictable or too AI-like (noticeably superhuman reflexes, inhuman micromanaging), whereas an AI that just tries random possibilities until it finds one that meets some threshold for how good a move should be (per the current difficulty setting), or has exhausted its allotted “thinking time”, might feel more human-like in the end?

            1. Daemian Lucifer says:

              Well, the question is how easy or hard the too-stupid moves are to detect.

              For a human, easy. For an AI, pretty hard. For example, a tactical retreat. That’s pretty hard for an AI to pull off, and if you bungle it you may end up with the AI retreating from a single unit that it could easily destroy, which is silly. And what if the RNG spews out a loop? Each step may seem sensible on its own, but taken together you’ll get a unit going in circles for no reason (not patrolling, but going in circles somewhere useless). Etc, etc.

        2. Jsor says:

          This is already a thing; it’s called Monte-Carlo Tree Search. It’s what Total War: Rome II’s AI uses, actually.
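          The piece that makes MCTS smarter than purely random playouts is its selection rule, which decides where to spend the playout budget. The standard choice is UCB1, which looks roughly like this (a sketch of the scoring function, not a full implementation):

```python
import math

def ucb1(child_wins, child_visits, parent_visits, c=1.414):
    """UCB1 score used by Monte-Carlo Tree Search to pick which child
    move to explore next: average win rate plus an exploration bonus
    that shrinks as a move gets visited more often."""
    if child_visits == 0:
        return float("inf")  # always try unvisited moves first
    return (child_wins / child_visits
            + c * math.sqrt(math.log(parent_visits) / child_visits))
```

          So a move with the same observed win rate but fewer visits scores higher, and the search keeps re-examining moves it’s still uncertain about instead of rolling out uniformly at random.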

          1. Mephane says:

            And in the meantime I have learnt that an AI has used this (and other techniques) to beat a professional player at Go. :)

  15. Steve Online says:

    Wasn’t this basically what they hired the modder Sorian to do for Supreme Commander 2, minus the whole ‘using the cloud’ thing?

    IIRC, he built a neural net, trained it with hundreds of games, and then had to overwrite chunks of it anyway because the AI hated losing whole armies of its units to the automatic self-destruct that went off whenever it killed a player.

    1. el_b says:

      well the only way to win is not to play :P

    2. Decius says:

      If the AI hated losing armies in order to win, it was trained to value large armies, not winning. That’s the result of a poor heuristic, which indicates that it isn’t an emergent strategy but an artificial value manually added.

      The problem with using machine learning properly is that the AI will teach you about the balance problems in the game.

      1. Decius says:

        Looking through the link, I’m sure now that they were training the neural network on how to skirmish and do damage to the enemy army, not on strategy or tactics for winning the round. Which explains a lot of the errors the SC2 AI makes.

  16. Duoae says:

    Shamus is right – I’d hate to have all my games shackled to a server somewhere. Not to mention that someone with a better connection would have a better experience in systems that run on continuous feedback.

    So to continue the economic argument against this type of AI learning in games: developers would have to build not one but two AI systems, a local one (on the user’s machine) and the learning one on the servers. Otherwise, players with poor connections could end up with bots that never update their behaviour, or that seem to react randomly because they’re actually responding to events that transpired minutes ago.

    That’s quite an extra cost!

    If I’m quite honest, my preference for AI in action games (because strategy isn’t my strong point) would be something akin to Left 4 Dead’s director + unit AI.

    I want to play a fun game, not be continuously challenged to the point of breaking. I think a director fulfils this role and separates the tactical decisions from the strategic. You can keep the bog-standard AI for the bots themselves, as currently exists, but then train the director to use them in intelligent-seeming ways such as flanking, mixing up unit types during attacks, or even ambushes.

    Even things like Just Cause 3 (assuming it doesn’t have one beyond the very simple star system for reinforcements) could be improved with a director that sets up roadblocks if you’re in a land vehicle and mobile SAMs if you’re in the air. (From my observation it seems like roadblocks are automatically set up just outside the area you’re in regardless.)

    Or it could even set up interceptions based on your bearing. “Oh, the player was spotted heading in this direction… there’s a town that way. Best spawn more vehicles than usual!”

    1. Decius says:

      The thing about the director AI is that it isn’t trying to kill the PCs, nor is it trying to make them win.

      The Director was built based on what made the game fun, and if it “cheats” to do that then everybody wins, even when the zombies eat brains.

      1. Duoae says:

        That’s kinda my point.

        I want to play a fun game that can be challenging (I think L4D and L4D2 can be considered both) that also isn’t a rote walk to the end (see games like CoD) where the same thing happens moment-in, moment-out.

        You don’t need superior combat/individual unit AI to accomplish this. It’s complete overkill. What’s more valuable to me is managing the entire experience, and so far that’s best accomplished through the director in the L4D games.

        FEAR was a good game when it came to individual unit actions but their macro tactics and the overall pacing of the game weren’t that great.

    2. Zak McKracken says:

      I think most neural network AIs should be able to run smoothly on a regular PC, unless they are so complex that even training them on a supercomputer becomes a challenge. Training a network is a bit like showing the computer a bunch of real-world data and asking it to find the function that describes it. That takes ages, and it generates a very condensed description of the general correlations in the data (“killing your opponent’s units is good”, “killing Zerglings with Hellions is good”, “attacking buildings with Banelings is usually bad, unless the buildings are blocking a ramp and you have a large army in front of the ramp”… the amount of detail depends on the depth of the network hierarchy). Evaluating that function later is way easier, and it does not require the original data, only the condensed-down rules which have been painfully derived from it.
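      To make the asymmetry concrete: once training is done, the network is just a pile of fixed numbers, and evaluating it is a handful of multiply-adds. A toy two-neuron sketch with made-up weights (in reality the weights would come out of the expensive offline training run):

```python
import math

# The "knowledge" a trained network ships with is just these numbers.
# These particular values are invented for illustration.
W1 = [[0.8, -0.3], [-0.5, 0.9]]   # input -> hidden weights
W2 = [1.2, -0.7]                  # hidden -> output weights

def evaluate(features):
    """One forward pass: a few multiply-adds and a squash.  This is all
    a shipped game would need to run per decision."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)))
              for row in W1]
    score = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-score))  # squash to a 0..1 "how good"
```

      Training is the part that needs the supercomputer; `evaluate` here is the part the player’s PC runs, and it’s trivially cheap.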

  17. Arctem says:

    I’m curious what you think of games that avoid the AI smartness problem by having their AI follow completely different rules from the player. The two examples I can think of off the top of my head are AI War and Invisible, Inc.

    In AI War, the AIs are supercomputers that rule most of the galaxy at the start of the game. They spawn units for free and can do basically whatever they want, but they consider the players to be so far beneath them that they won’t even bother to attack unless angered. Additionally, the computer’s processing is decentralized, so attacking one system doesn’t anger the AI’s entire empire. The result is that the AI doesn’t need to care about resource gathering, moving troops long distances to reinforce planets, or carefully setting up attacks.

    In Invisible, Inc. the AI isn’t operating at a higher strategic level. Rather, each unit just has a list of rules that it follows. If a guard hears a sound, it moves towards it. If it sees someone, it stands still and points its gun at them. A “smart” AI might take a longer route if it thought that would flush out the intruders, but because of the nature of the game it’s more fun for the player if the AI is mostly predictable. Thus, the focus is on making AI behavior interesting and mostly predictable rather than simply good at winning.

    I think the design space for things like this is much larger than that of games where the AI and player are playing the same game. Computers are good at different things than us, so use that to make interesting game mechanics!

    1. Decius says:

      Games that are clearly asymmetric have a special place in AI design. Instead of letting the AI ignore or cheat past a player limit, the game can be designed so that it makes sense for the AI not to have that obstacle.

      Or conversely, instead of trying to handle the problem of giving the AI imperfect information and hoping for an intelligent response, the AI can ignore some of the information its “side” has, as in Invisible, Inc.

    2. Zak McKracken says:

      Yeah, I think most shooters would just not work at all if the AI was actually intelligent by human standards. Those masses of mooks could retreat a bit, coordinate with their friends and then ambush the player all at once, rather than attacking individually and not realizing when the player shoots their friend 3 meters and a crate away… but that kind of behaviour was already standardized in 1980’s action movies, and they were even using humans for those. So really, there’s not much need to improve your game’s AI beyond that standard :)

  18. el_b says:

    I heard that some of the time (mainly The Last of Us), a game will have AI that’s too good and it gets dumbed down for the player during testing, but most of the time I personally think they just go for something generic and cheap.

  19. “For example: If someone ran up to my car and grapple-hooked it to a cow and then put rocket boosters on my car, I’m not sure I could come up with an intelligent response, so I can’t fault the denizens of Just Cause 3.”

    I’d like to think my response would be “Wtf” followed quickly by “If you want to win a Darwin Award, go for it, but use your own damn car and leave the cow out of it. My car insurance premiums are high enough as is, and I’m not paying for the vet bills.” Realistically, it’d probably be more “Ack! WTF! HELP POLICE!” but that’s nowhere near as clever.

    1. Nidokoenig says:

      I think you’d give it some thought if you knew someone was doing that in the area, though. Sacrificial pigeon spikes for the rocket to attach to and rip off?

      I played through Saints Row 1 through The Third, and the police would always pull up windshield-first so that capping the driver was easy, even if I couldn’t see him. Just giving them a few different strategies, rating their effectiveness, and letting different units update their strategies on a schedule would make the AI seem a lot smarter without a whole lot more work, and would push the player to vary their tactics, or save a specific tactic for a big heist so the AI won’t be ready, giving them a big payoff for delayed gratification.

    2. Zak McKracken says:

      I think I’d try to get out of the car and confront the guy, or run away and call the police, depending on how capable of and willing to fight me he seems to be…

      But then, I think the ideal reaction of an NPC should depend on the NPC, because no two people would react the same. If you wanted to get that into an AI… yeah, I guess deep scripting with huge handmade look-up tables will go much further here than deep-learning neural networks…

      To make it kind-of okay, though, something like this should work:
      if protagonist moves <object> about
      and <object> in <NPC's possessions>
      and <NPC> isvisible:
      say: “Wait, what’re you doing with a <object>?!”

      if protagonist manipulates <object> in <NPC's possessions>:
      say: “Take your fingers off my <object> or <threat>”

      …etc. I’m sure there’ll be instances where that reaction will be hilariously wrong, but working with lists of conditions may help catch a lot of different crazy behaviours efficiently.

    3. Tom says:

      The grapple hook guy would probably say something like “Hey man, hold my beer. Check this out.”
