Diecast #280: Stadia, Jedi Fallen Order, Half-Life Alyx

By Shamus Posted Monday Nov 25, 2019

Filed under: Diecast 138 comments

If you want a podcast that’s an actual discussion and not me thoughtlessly talking over Paul for an hour, then don’t miss the Eh! Steve! Podcast this week, where I discuss Control with the hosts.



Hosts: Paul, Shamus. Episode edited by Issac.

Show notes:

00:00 DaVinci Resolve!

Our first video produced with DaVinci Resolve will go live tomorrow. Issac is very positive on the new editor.

02:39 Rage 2 is Broken

I have done all the main quests for the three leaders Marshall, Hagar, and Kvasir. All three say that everything is ready for Project Dagger, which is the final mission. However, the in-game journal still says “Help your allies to build Project Dagger”, indicating that everything is not ready. There’s no map marker / quest objective. If I get in the tank and drive to the enemy base, I get dialog suggesting that the project is not ready, and then get a scripted insta-kill death from the door defenses.

Changing quests doesn’t help. Talking to the leaders doesn’t do anything. Dying doesn’t fix it. Restarting the game doesn’t help. Doing other quests doesn’t do anything. Looking around the internet, I see other people have been reporting this bug for months, and as of this writing there doesn’t seem to be a solution aside from “Start over and hope it doesn’t happen again.”

My playthrough is now dead.

When Bethesda bought ID Software back in 2009, I wondered if Bethesda would be blessed by ID’s superior code, or if ID would be cursed by Bethesda’s horrendous “““QA”””. In retrospect, the answer should have been obvious. Company culture is everything. Company values percolate down from the top, not the other way around.

07:24 Google Stadia

To give you a sense of how the launch went, here are the top headlines on YouTube:

The Post tested Google Stadia. The input lag is horrendous.
Google Stadia review: Playable, not perfect
Google Stadia Is Dead to Me
Stadia’s False Marketing, Overheating Chromecasts, & Abysmal Sales

But then we do have some positive reviews:
I TRIED Google Stadia…

I don’t know. The service still makes no sense to me. Like I said back in July:

Who is this service for? It’s supposedly for people who want to play AAA games but don’t have access to AAA hardware. It’s for people who are into hardcore games but don’t mind an unavoidable baseline of input lag. It’s for people who can’t afford a $400 console but can afford to buy games at full price and pay an additional $120 every single year. It’s for people who have lots of devices who somehow don’t own any dedicated gaming hardware. Most of all, it’s for people who have no memory of OnLive’s failure or who are happy to buy games that will vanish into the ether if the exact same idea fails again.

Stadia is for casual gamers who are into hardcore titles and poor people with lots of disposable income. This is a service for nobody, and it makes no sense.

16:03 Star Wars Jedi: Fallen Order

It feels like this should be a double-colon name as such: Star Wars: Jedi: Fallen Order. Having the colon between Jedi and Fallen Order and NOT a colon between Star Wars and the subtitle feels odd.

Actually, given the typical styling of Star Wars logos, I guess the game should be called STAR Jedi Fallen Order WARS.

Whatever. My thoughts on this game are pretty conflicted.

23:44 Hyper Light Drifter


Link (YouTube)

26:33 Mailbag: Punishment for Newbies

Dear diecast,

In a previous episode you talked about difficulty for people new to gaming, in particular how having to redo stuff when you die works as a punishment. While I can agree it is very annoying to have to redo stuff, what alternative punishment is there? For example, in the home console port of Metal Slug the player can just hit START to insert a coin, which revives you on the spot with all enemies just as damaged as before. I’m sure a new player in this situation would wonder why he would ever bother to avoid damage, since he can just die over and over again until he kills the boss.

So how do you punish players then?

With kind regards,
chris

40:04 Mailbag: New Half-Life game!

Dear Diecast,

Any thoughts on the new Half-Life: Alyx game announcement? One thing that bugged me about the constant delays is the risk that voice actors may have their voices change, pass away (as Robert Guillaume and Robert Culp did), or otherwise be unavailable. Sure enough, Merle Dandridge is no longer voicing Alyx Vance in the game, which might be justifiable since it’s a teenage Alyx and Merle is now 44, but it makes me wonder if that would still be the case were they ever to release Half-Life 3.

50:00 Mailbag: COPPA

Dear Diecast,

I got a notice about this new “Children’s Online Privacy Protection Act” for my Youtube channel today, and it made me scratch my head. Have you heard about this? What in the world is it supposed to accomplish? What’s so terrible about personalized ads that children need to be protected from them?

Jennifer Snow

52:36 Mailbag: Carmack Building AI

Dear Diecast,

What are your thoughts on the news that John Carmack has headed off to become a private AI researcher? Does this bode ill for VR advancement or is the technology basically there and therefore John has no real further input to make? Or does the move bring Judgement Day one step closer?

All the best,
Duoae

 



138 thoughts on “Diecast #280: Stadia, Jedi Fallen Order, Half-Life Alyx”

  1. Grimwear says:

    I’ll be honest I’m so confused about the new Half-Life game. Is this for tech purposes or gamers? Honestly it feels like the former with Valve trying to push VR forward once again since it appears to have stalled. A quick google search shows that as of January 2019 0.8% of Steam users had VR. If I wanted to make a game that would sell on the largest gaming platform I would not go “let’s focus on the 0.8% and ignore 99.2% of our potential customers.” Does Valve think that this will push people into buying more VR? As someone who doesn’t follow VR and doesn’t really care about it I look at the Valve VR index thing they’re promoting and see it would cost me 1,319.00 CAD. I could literally buy every current gen console for less.

    It reminds me of back when they used to promote how great 3D TVs are, and better-colour TVs where they could show a “new colour yellow that current TVs cannot replicate”. You’re literally trying to sell me something by showing off a feature that I cannot see on my current TV. Same thing for VR. All gameplay footage I’ve ever seen for VR games makes it seem clunky and just overall not something amazing that will redefine gaming. And the best rebuttal I’ve seen is that you just have to experience it for yourself. Except there’s a thousand-dollar paywall. Even if VR only cost the same as a console, that’s still too much for what I’ve seen the games can do. The only game I’ve ever seen that looks like fun to play is Beat Saber, and I’m not even part of that demographic. I’ve never played Guitar Hero, Rock Band, or Just Dance, but something about that appeals to me.

    1. The Puzzler says:

      The purpose of most VR games is to provide a boost to the VR industry. A number of people with a lot of money are willing to throw money at it because they think it’s cool.

      And there are ways to try VR without paying $1000. For example, having a friend who’s already got it, or going to your nearest VR arcade.

      1. Echo Tango says:

        All the VR games I’ve seen so far would have been better with a normal screen, because headsets aren’t really compatible with normal controllers, and the no-haptic-feedback controls we have now are pretty crummy.

        1. Mistwraithe says:

          Hmmm, I played with a group of friends in a VR arcade late last year and it was a great deal of fun. Perhaps the novelty value was the main factor? However, I think the best of the games we played were actually more entertaining because of being VR games.

      2. Grimwear says:

        I mean, I’m glad one of my options is to have a rich friend who can afford to shell out 1000 dollars. Funny story: I did have a friend who did that, but before I could visit him and try it out, he broke up with his girlfriend and she took it with her. I’ll be honest, I’d never even heard of VR arcades before today. I did google and one exists in my city, though it costs 25 dollars an hour. A lot cheaper, yes, and good for a try, but even if I ended up loving VR and wanted to buy one, I still don’t think it’s worth the price point and I can’t justify shelling out that much. Just thinking of spending 25 dollars an hour each time I would want to play, plus travel time, is just excessive to me. For that same 25 dollars I could go see 2 movies on cheap day, buy 2-3 books, or even buy 3 games on my Steam wishlist during a sale. VR is clearly still in the realm of the wealthy, and it’s been that way since release. Making a Half-Life game won’t change that. It’ll definitely be the most successful VR game ever, but I still don’t think it will push the number of VR units they’re hoping for. The vast majority of people will just watch a Let’s Play on YouTube.

        1. Lino says:

          I’ve been to a VR arcade once, and I was not impressed – most of what they had on offer was nothing more than a glorified tech demo. It didn’t help that this was in the beginning of the VR craze, so there weren’t a lot of games for these systems.

          The vast majority of people will just watch a Let’s Play on youtube.

          This is definitely the biggest mark against this game producing an explosion in VR sales. Again, this is going to be a linear, story-based, singleplayer game. There’s a reason the majority of AAA shifted to open-world and multiplayer games – most people would rather watch a Let’s Play than buy the game. There was even a GDC talk by an indie where they showed how you shouldn’t bother with YouTubers or streamers unless your game is multiplayer or has some sort of procedural content – otherwise the views won’t translate into sales.

          1. Grimwear says:

            When I was writing the comment I got curious and youtubed one of the only other VR games I knew which happened to be the Rick and Morty VR. The first video to show up had 14 million views. I then went to steamspy and it shows as having sold 100-200k units. I then checked steamcharts and at its peak the Rick and Morty VR experience had 926 concurrent users. How are any of these VR games profitable? Are the teams just really small and indie? Is VR really quick and easy to develop? Is Valve giving all these devs giant subsidies? I just don’t get it. Dawn of War 3 had 25k peak concurrent users, has 500k-1 million units sold (though I argue they’re really sitting at around 200-300k with DoW3 pretty much being given away nowadays) and was a massive flop that caused Relic to abandon the game. And that’s with an rts which is a niche genre. VR is the nichiest of the niches. Has any game aside from maybe Beat Saber actually been profitable and if so how? I’m legitimately curious.

  2. GargamelLenoir says:

    Yes, they’re trying to push VR, simple as that. Note that the price of the Index is 800€, which is still insane, but the Vive and the various Oculus headsets are much cheaper (around 400€).

    If VR’s not your thing that’s fine, but I get why they’re still pushing. VR needs a few breakout games to get popular, so people can’t keep saying there won’t be breakout games until VR is popular.

    1. Geebs says:

      I know lots of people are internet outraged* about HL:A, but the speculation that Valve were going to wait until there was some totally new gaming paradigm before releasing a new HL game has been rife since at least a couple of years after Episode 2.

      It’s not as if we weren’t warned, and it’s nice that Valve have some sort of incentive to make another one that doesn’t involve hats**.

      Also, The Lab is pretty great and proves that Valve “get” how to make a VR experience.

      *may not be representative of actual outrage
      ** may include hats

  3. HOLO says:

    VR is just too expensive for what it offers. I like the idea of SuperHot in VR and there are some game trailers that I liked, but the investment is too much for technology that is either going to flop or will offer version two “soon”.

    1. coleusrattus says:

      I beg to differ. I was sceptical of VR at first too, but visiting a local video game convention a few years ago, where I could try out several different things on Oculus DK2s got me hooked, and I turned into a relatively early adopter (and very amateurish let’s player of VR games on YT *shameless self plug*).
      At first, I was kind of hesitant, fearing that the “novelty” would wear off, or it would be too cumbersome and I’d be out a few hundred euros for nothing (basically most of my summer bonus). But at least for me, ever since summer 2016, playing pancake (that’s what we cool metaverse-dwelling VR players call gaming bound to a monitor) doesn’t really cut it anymore, except for exceptionally good games. For more than three years now, I have spent most of my gaming time in VR, and even treated myself to a Valve Index just recently.
      In addition to true gaming innovation, VR gaming for the last three years has felt much like back when mods became big, around HL1 to UT2k4 (so 98-2004): a wild frontier where many people tried out new ideas, helped by the relative ease of developing for VR in Unity and UE4.
      Having said that, the flow of creative, very indie, or even solo titles has dried up a bit this year, as many realised that it still is a niche. But Valve seriously committing to it with (IMHO) the best hardware, and now a hopefully kick-ass game, will definitely broaden the appeal and reinvigorate the creators’ market.

      1. Echo Tango says:

        Give me the pancakes. VR just feels gimmicky every time I try it.

      2. PPX14 says:

        I was all in for VR until I found out that, unlike nVidia 3D vision, it can’t just be applied to any game (or applied successfully with community-made shader mods) to make it VR. I had assumed it would be the same situation, simulating two points of view using whatever technique 3D does, but with the benefit of total FOV immersion. And that perhaps I’d turn off gyroscopic-look and just control with the mouse. Not to mention the alleged screen-door effect.

        If all I can do with most games is play them on a big screen in VR, I’d rather just use a big screen in real life – I upgraded to using a 43″ 4K TV as a monitor some time ago.

        I suppose, as someone who finds wearing the glasses for 3D Vision annoying, a VR headset would be similarly so for me. I barely use them any more, just because my TV is so much larger and more convenient. Also, Windows 10 appears to have withdrawn 3D support.

  4. HOLO says:

    Listening to Shamus complaining about Fallen Order / Dark Souls, I cannot but nod in agreement. I find this game loop infuriating, but I like exploring the world and the fights, although both are starting to become repetitive and tedious (~20h into the game), while the story and characters are not Obsidian level.

    The thing that helped was: load the game process with Cheat Engine, find the stim counters (they are two 4-byte values), jump from a platform to inflict minor damage, use a stim, scan for the stim count minus one, do it again, rest, scan for the restored stim count, and repeat these steps until you can pinpoint where in the game’s memory the stims are held. Then change their values to something like 9999 and start playing without worrying that you must replay sections of the level due to a low stim count.
    Next time you start the game, you scan for something like 9937 and, following the above steps, it takes a lot less time to find the current stim count.

    PS. I didn’t care to create a cheat table or find one
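    For anyone curious what those Cheat Engine steps actually accomplish, here is a toy Python sketch of the narrowing-scan idea: repeatedly filter candidate addresses by their current value until only the real counter remains. The addresses and values below are invented for illustration; Cheat Engine does this against live process memory.

    ```python
    def narrow_scan(memory, candidates, expected):
        """Keep only addresses whose current value matches the expected one."""
        return [addr for addr in candidates if memory[addr] == expected]

    # Simulated process memory: address -> 4-byte value.
    memory = {0x1000: 3, 0x1004: 3, 0x1008: 7}

    # First scan: the stim count is 3, so every address holding 3 is a candidate.
    candidates = narrow_scan(memory, list(memory), 3)

    # Use a stim in-game: the real counter drops by one, decoys drift elsewhere.
    memory[0x1000] = 2   # the actual stim counter
    memory[0x1004] = 5   # unrelated value that merely happened to be 3

    # Second scan narrows the list to the true address.
    candidates = narrow_scan(memory, candidates, 2)

    # Now the value at that address can be overwritten.
    memory[candidates[0]] = 9999
    ```

    Each in-game change of the value cuts down the candidate list, which is why the jump / stim / rest cycle above converges on the right address after a few rounds.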

    1. Darker says:

      I didn’t find Fallen Order that hard (never played Dark Souls so I can’t compare). Save points are relatively generous, also there is one right before every major boss fight. Backtracking a bit and trying to find all accessible secrets for health, force and stim bonuses helps a lot.

      1. HOLO says:

        I’m glad you like it. I just don’t want to redo a section just because I screw up a few blocks, but I like the combat.

  5. Nick Pitino says:

    For what it’s worth apparently Erik Wolpaw has come back to Valve.

    1. Duoae says:

      They also have the writing staff of Campo Santo – they’re a pretty strong team as far as writing goes!

      1. Ninety-Three says:

        Aren’t Campo Santo working on their own game? The last news I’d heard is that they were owned by Valve but still off doing their own thing.

        1. Duoae says:

          Their team got semi-absorbed into making Alyx – there was a small controversy around whether In the Valley of the Gods is still being developed because the leads took it off their bios/twitter stuff.

          “One of the things that excites us about our game is that we’ve been able to use that combination of past experience and fresh perspective to build something that feels like Half-Life, but doesn’t feel dated. Some members of the former Campo Santo studio are on the team, and they’ve brought the storytelling skill they honed in Firewatch to bear on Half-Life: Alyx.”

          Here.

  6. BlueHorus says:

    I laughed at that positive Stadia review. Well, mostly at the comments below it and the way he’s so upfront about who’s sponsoring him.

    If I were in that position, I would definitely be playing this song in the background.

  7. Lino says:

    With regards to gradients of failure, I think the Devil May Cry games (and other spectacle fighters) do it really well with their ranking system – it ranks both your combat prowess in individual encounters and your overall performance in the level.
    I share some of your concerns for Half-Life: Alyx. The lack of writers is one, but my other concern is that it just looks like one of those on-rails shooter gallery FPSs that most VR shooters are. Hopefully, I’m wrong, and I finally get a reason to buy a VR headset (which I hope will become cheaper by March 2020).

    1. Christopher says:

      I appreciate Devil May Cry 5’s approach. DMC3 was a hardass, but they’ve lowered the difficulty significantly. They straight up hand out a ton of revival orbs, and even if you’ve used those up, you can just pay with your regular money orbs. That revives you on the spot, so you can always cheese through a tough situation. But you don’t want to, ’cause your rating is gonna tank and you’re not gonna feel like you actually conquered it. I don’t mind Dark Souls’ method, but they definitely went with the option that fits their game best. DMC is about looking sick, so you get punished with bad ranks. Souls is about enduring, getting back up and never giving in, so that punishment is about death and getting pushed back.

      It’s possible that Star Wars might not benefit quite that much from a bonfire system in the same thematic sense.

      1. Lino says:

        What I don’t like about the Souls approach is the total lack of respect for my time as a player. That’s what’s truly irritating to me – it sucks if you lose a lot of souls, but you can grind them back if you need to; the only thing you truly lose is your time getting back to what killed you, dying again, rinse, repeat. It also doesn’t help that I hate the fighting system in Dark Souls, and how, because attacks need to be telegraphed and deliberate, it ends up looking like drunk people fighting. I’ve heard these problems are somewhat alleviated in Sekiro, which I plan on playing soon.

        1. Christopher says:

          Sekiro can haul ass a lot better than any Souls protag thus far, and they’ve started putting up bonfires outside of most bosses anyway, so personally I’ve definitely felt that there’s less of a commute between attempts.

    2. Syal says:

      Was also thinking that, rankings work pretty well, especially if you’ve got an explicit breakdown and pars to aim for. If you really want to incentivize people to play better, put some unlockables behind high ranks.

      Also keep in mind that time to success is another intuitive goal, and any enemy attack that staggers the player or otherwise slows them down is a punishment even infinite health won’t negate.

  8. Moridin says:

    “What would it take to build a new PC with no graphics card?”

    I don’t know what prices you’ve been looking at, but (assuming US prices and actually using PCPartPicker and such) you can build a pretty good mid-range gaming computer with a $200 GPU for around $700. If you want a barebones computer without a dedicated GPU, you can go down to maybe $400 (assuming no peripherals).

    1. Duoae says:

      Yeah, these are pretty much standard price points now. There’s a guide on virtually every site but the one I visited most recently was Anandtech.

      It really, honestly, depends on how much stuff you already own – whether you have a case you can (or want) to use, RAM, PSU, monitor etc. Personally, I don’t re-use components between systems – even PSUs because I’m worried about component wear and tear. However, I will pose the caveat that I don’t build systems very often because I build systems to last a long time. For example, I’m still using a system which, aside from a slight refurbishment (SSD, more RAM, replacement mobo [because the capacitors started dying on the old one] and a new graphics card [because the NVidia 500 series were hot garbage and mine died]) since 2010…. okay that’s technically basically a new PC, I guess… :D But if the motherboard and graphics card hadn’t died, I wouldn’t have done that. Looking back I’m glad I never did a complete rebuild because I spent WAY less on this and the computer basically plays mostly everything at 1080p except for the ultra modern games which utilise new instruction sets that aren’t supported by my processor (because they didn’t exist back when it was designed :) ).

      I’ve only built 4 systems over the last 20 years and 2 of them ended up going to other people for various reasons but I really keep up with tech news and trends and I really, I really, like building “imaginary” systems…. seeing where I can shave off costs and route around bottlenecks is a fun little diversion for me.

      However, I am feeling an itch to buy a new system. I even priced one up for £435…. but that would reuse my graphics card and monitor. The new Ryzen are looking great and even the Ryzen 4 series next year look like they’re going to be so much better…. but….. *GAH* this PC works fine and does what I need it to. I play most new titles on console and I can’t justify the purchase on any logical level. It’s a horrible feeling. I must be so pre-programmed with rampant consumerism to feel this way! ¬_¬

    2. John says:

      I’m not entirely clear on what Paul is talking about here. We can’t be talking about the latest AAA games on the highest settings, because that doesn’t work at all without a discrete GPU. But if we’re talking about older games, non-AAA games, or even new AAA games on lower settings, then I don’t know what kind of hardware we’d need, because I just don’t know what our target is. But my understanding is that AMD’s APUs and even Intel’s integrated graphics are good enough to play a lot of fairly popular stuff, your Fortnites, your Minecrafts and so forth. At the very least I have a hard time believing that the hordes of kids playing those games, many of them on their parents’ PCs, all have PCs designed with gaming in mind.

  9. Daimbert says:

    I don’t think it’s ever necessary for a game to punish players for dying – or, really, in any way at all. At least, not unless they want something like that and can toggle it on. There are plenty of reasons why even casual players aren’t typically going to adopt a strategy that depends on dying to make it through the game. First, in almost all games the gameplay itself drives players to play at least somewhat properly, so without thinking about it they’ll follow the standard gameplay, especially in action-style games. Second, even without other penalties, dying breaks up the game, and that will annoy even casual gamers. Third, even gamers who are in a game primarily for the story get some thrill from being successful, and a strategy based on constantly losing will cost them that thrill and will require the story elements to be that much better to hold their interest.

    Persona 3 and 4, at least, had an item called the Plume of Dusk, which is pretty much an automatic revive-at-full-health-on-death mechanic. To avoid it being used as a deliberate strategy, it was limited (in at least P3) to 10 uses on Easy, which means it worked pretty much as intended: if you hit something that you didn’t expect, weren’t prepared for, or made a mistake, you wouldn’t hit the Game Over screen, but you didn’t want to rely on it as a strategy. This seems to me to strike a decent balance between not letting people use it as a strategy and keeping them safe if they aren’t as good at those sorts of games or don’t want to put the time into learning things that perfectly.

  10. Ninety-Three says:

    AI risk gets a bad name from people who watch The Matrix and go “Oh noes robots”, but there exists a sophisticated form of the worry that is at least less out there. It hinges on the idea of intelligence explosion: as soon as some Google research team creates an AI that is better at programming than Google researchers, it will be able to create an AI that is better still, which will in turn create an even better AI and you could end up with something terrifyingly smart quite quickly. This is an argument that’s very hard to prove or disprove: we don’t know enough about AI or intelligence to say for sure if it would shake out that way. For instance, maybe you get diminishing returns: a 150 IQ researcher makes a 160 IQ AI, which programs a 165 IQ AI, which programs a 167 IQ AI and things plateau because making things smarter than yourself is really hard, and we end up with robots that are impressively smart but no godlike intelligence explosion.
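    The diminishing-returns case is easy to make concrete. Here is a toy Python model (the starting IQ, gain, and halving factor are made-up numbers, chosen only to show the plateau):

    ```python
    def self_improvement(iq, gain, decay, steps):
        """Each generation designs a successor, but the gain shrinks by `decay`."""
        history = [iq]
        for _ in range(steps):
            iq += gain       # the current AI builds a smarter successor...
            gain *= decay    # ...but each improvement is harder than the last
            history.append(iq)
        return history

    # 150 -> 160 -> 165 -> 167.5 -> 168.75: the series converges toward 170
    # instead of exploding, matching the "plateau" scenario above.
    print(self_improvement(150, 10, 0.5, 4))
    ```

    Whether real returns to intelligence shrink like this or compound instead is exactly the part nobody can currently prove either way.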

    Granting intelligence explosion, the argument that it won’t just brilliantly do exactly what we want is that we have a hard time making computers do exactly what we want today. They’re very literal and prone to doing what we said, not what we meant. The big risk is not that the AI will kill us because it’s angry at us, but because someone asked it to maximize paperclip production at the factory, and it came up with a way to break down the entire planet and all the humans to turn them into more paperclips.

    1. Joshua says:

      Yep, I don’t think we’re going to ever have an AI that becomes sapient simply because there’s a lack of an ability/desire to make an AI care about how things turn out, but it’s definitely possible to have an AI come up with unforeseen and undesirable solutions to problems due to a lack of setting up good parameters.

    2. Echo Tango says:

      Luckily for us, I’ll get there first with my AI that’s trying to optimize human happiness (measured by smiles, heart rate, etc) – that will end far better!

      1. Steve C says:

        That’s one of the real paths that AI research has gone down. Reward functions that optimize for human happiness are legitimately promising. They still have problems, though. For example, lacing food with drugs gives lots of smiles and a low heart rate. As does shooting people up with heroin. Promising, but far from perfect.

        1. Echo Tango says:

          Yeah, I’m super aware of the dangers of that type of reward function. My comment was supposed to be dripping with enough sarcasm that it was obvious satire. That bastard Poe – ruining the internet for everyone!

        2. Mephane says:

          Since there is no universal form of happiness that applies to everyone (things that make you happy might do the opposite for someone else), and it has so many loopholes, I think that instead of optimizing happiness, such an AI’s goal should be to optimize individual freedom (with the usual caveat that anyone’s freedom has to end where it would infringe upon another person’s freedom). Let everyone choose how to be happy, and have the AI merely help with letting their choices come to fruition.

    3. Liam O'Hagan says:

      someone asked it to maximize paperclip production at the factory, and it came up with a way to break down the entire planet and all the humans to turn them into more paperclips.

      Did someone mention Universal Paperclips?

    4. Echo Tango says:

      The big risk is not that the AI will kill us because it’s angry at us, but because someone asked it to maximize paperclip production

      I’d like to echo this – Shamus, the “industrial accident” scenario you describe is not the upper limit of how dangerous this could be. People are making AI better able to solve problems every day. If someone makes an AI smart enough to improve itself, even if only a little bit, this could get recursive. It’s not guaranteed to be recursive, but there’s a real possibility that an AI programmed to do some “safe” task decides that step 1 is to become smarter, because a smarter AI is better at that safe task, and then step 2 is to become slightly smarter still, because it’s just shaved 1% off some bottleneck in its brain and is therefore able to rewrite code more efficiently. Many iterations later, it’s able to hack into every computer on the planet and outwit any human in a position to actually stop it or even know it exists, all in the name of paperclips, without any “anger”, “malice”, or other specifically “human” things.

  11. Joe says:

    I was interested in Rage 2, before it was released. IIRC, there are no *real* cheats, just joke cheats that you can buy from an in-game vendor. And using them disables your achievements. The whole game seems to follow that ‘crazy’ line of thought, ending up more annoying than funny.

    I think Yahtzee said that for all the crazy weapons, gravity and fire stuff, he found they were inefficient and went back to the regular weapons. So while there are some good ideas, the whole doesn’t quite come together.

  12. MichaelG says:

    Two routes to robot uprising. 1) what’s called the “paperclip maximizer”. You tell the AI to make paperclips, and it grabs more and more resources to create more paperclips, until with its superior intelligence, it builds nanobots that convert the entire Earth into paperclips. It wouldn’t be paperclips of course, but if you give your AI any open-ended goal, you are going to have this problem. Imagine AIs programmed to trade on the stock market, then using hacks to crash companies that it has shorted on the market.

    2) Ems — emulations of human minds. They replicate, run faster than real human minds, and create a space-based civilization. They eventually use up a significant part of the Sun’s output and the Earth is frozen.

    1. Duoae says:

      I know people worry about this but these are not possible:

      1) The AI could never know how to make paperclips outside of its design specifications, because that would a) be a waste of resources for the programmers and owners of the factory making paperclips, b) be supervised by a human (at some level and at some time interval), so any erroneous behaviour would be flagged and dealt with appropriately, c) (perhaps the most important one of all) leave it with no information on how to make nanobots, what chemical processes are, or how they work, and no access to the facilities to do so, and d) most important and irrevocable of all (and speaking as a chemist): nanobots do not exist.

      I’m not even going to touch 2) because there are even more assumptions and sci-fi fantasy on top of what is necessary for 1) to occur.

      There will never be an AI singularity because we will never and could never programme it, we will never have the knowledge or desire to do so and we will never have the resources to allow it to expand in nature or reproduce. In fact, if we think about the Fermi Paradox, the simplest answer is that having functional complex life is super difficult and super rare. We, ourselves, are the singularity and we will incorporate machines/code into ourselves before we allow a general AI to actually think for us on a high enough level to pose a threat to us.

      1. Ninety-Three says:

        I think the solution to the Fermi paradox is that people have been thinking about the math wrong. Imagine a toy model of the universe where the Big Bang flipped a cosmic coin: if it came up heads there would be ten billion alien civilizations, if tails there would be just us. Typical Fermi Paradox math multiplies ten billion by 50% and ends up saying “There should be about five billion alien civilizations, it’s really weird that we don’t see any.” Here the mistake is obvious: five billion is not the correct way to compromise between “50% chance of ten billion, 50% chance of none”. There’s a very good paper out there showing that if you preserve expert estimates in probability buckets (e.g. “There’s a 30% chance that there are billions of Earthlike worlds, a 20% chance that there are millions, a 10% chance that there are thousands…”) you get a model that looks a bit like the above coinflip: there’s some reasonable probability that we’re alone, some reasonable probability that there are a zillion aliens, but taking the naive expected value steamrolls that into “There are 0.5 zillion aliens.”
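        As a toy sketch of that coinflip model (made-up numbers only, mirroring the example above):

```python
# Illustrative-only numbers, mirroring the coinflip example.
# Each bucket: (probability of the scenario, civilizations in it).
buckets = [
    (0.5, 0),               # tails: we're alone
    (0.5, 10_000_000_000),  # heads: ten billion civilizations
]

expected = sum(p * n for p, n in buckets)
p_alone = sum(p for p, n in buckets if n == 0)

print(expected)  # 5e9 -- the naive "expected civilizations" headline
print(p_alone)   # 0.5 -- yet half the probability mass says nobody is there
```

        The expected value is five billion, yet the chance of an empty sky is a full 50% — the single number hides the bucket where we're alone.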

        1. Duoae says:

          I think that’s a fair point. Except I guess we actually don’t know how many Earth-like planets there are, because we can’t even fully model how our solar system came to its current state, and we only have a sample size of one to compare against anyway, given our current observational capabilities… So I do think that even those probabilities are overstated. My own personal, layman thinking is that we’re likely to see a 50% chance of thousands of Earth-like worlds for our star generation (i.e. the reverse of what you said, so a 10% chance of billions); for earlier star generations the probability decreases exponentially, and for later ones it also decreases, but mostly because of other factors.

          If you remember that paper, I’d love to get a link to it!

          1. Ninety-Three says:

            To be clear, those specific numbers were made up for illustrative purposes, they don’t represent real estimates. I’ll try to dig up the paper in the evening.

        2. The Puzzler says:

          The obvious answer to the Fermi paradox is that all the aliens have been turned into paperclips.

          1. Duoae says:

            I admit this made me laugh out loud. :)

        3. Dev Null says:

          The Fermi paradox is utter bollocks. It only works if you believe the speed of light is a mild suggestion and we’ll soon be stargating our way around, or that civilizations last for millions of years (and with ours as the only data point, I wouldn’t put money on it). And that all intelligent life would have evolved mostly simultaneously and communicate in a mutually-recognisable fashion. We’ve had radio for 125 years. So, assuming our equipment was almost instantly powerful enough to be detected at interstellar distances, there are about 100 stars (http://www.solstation.com/stars3/100-as.htm) that could have detected our signals. There could have been a vibrant EM-using civilization that lasted a million years but destroyed itself 126 years ago anywhere outside that bubble, and we’d have no way of knowing. And that’s assuming that they communicated in a way that happened to be detectable by the very specific ways in which we’re listening. There could be hyper-intelligent philosopher-kings that evolved in the back of my fridge but only communicate by tiny flashes of light when it’s dark. Who’d know?

          The Fermi paradox takes a bunch of wild-arsed guesses, multiplies them together, then waves its hands around and goes “Oooooh! Freaky!”. Whatever.

          1. Duoae says:

            Pretty much… it’s a bit like AI prognostication, eh? :)

            *I actually agree with you, I think the Fermi paradox is another example of being human and assuming our natural environment/condition is normal. Of course, that DOES lead directly to the arguments about speciality, and our place in the cosmos…. which usually end up in a terrible place!

      2. Steve C says:

        @Duoae what it sounds like you are referring to is the “Stop Button” problem, or fixing the code of a running General AI. These are NOT solved problems. Neither is possible at this time. They are both being actively researched to find a solution.

        1. Duoae says:

          Thanks for the links – they were interesting! Not really what I was going for though. My problem isn’t a stop situation or a control situation – it’s that every conversation is starting at “Step 42” of having a general intelligence and they completely ignore HOW they get (or would get) to Step 42.

          E.g. Simple example – in the linked Stop button video at 2:10.

          He talks about having an AGI loaded into a robot body. He codes in what a “cup of tea” is and then codes in “wanting one to appear in front of him”. Then goes on to explain how the AGI would scan the environment, identify the tools to make a cup of tea and then proceed to move the robot body towards the kitchen to prepare the tea, ignoring the baby it’s about to step upon. Then he says that any attempt to hit the stop button would be met with resistance.

          This is so much rubbish. Again, he’s jumped from Step 0 to Step 42 of a human-level intelligence that is able to know advanced concepts, infer ways to achieve those goals with lateral thinking, navigate a new body, understand that the stop button would stop the objective from being performed (and also identify the threat to the objective) but be unable to identify other undesirable outcomes (such as killing a human to do it).

          I mean, how do you convey the concept of “hot” and “cold” as opposed to “discrete temperature” in code? How do you code a visual system with the knowledge of what every single appliance in an environment is? How do you code the knowledge that a kitchen isn’t necessary to achieve the goal? There might just be a kettle and a fridge in the lab… or even just a pan of hot water. What about gas stoves versus electric? What happens if the tea bags are in an unmarked (or wrongly marked) container? Say, a reused biscuit tin…

          Why would you even programme a robot that wants to stop a human from turning it off? Why would you even identify the button as something to be concerned about? Why even have rewards – as he then speaks about? You don’t need to provide motivation to an AI… otherwise you wouldn’t ever be able to turn them off even once an objective is complete.

          Even his next example, knowing that threatening a human would cause the button to be pressed… it’s also suddenly able to intuit human psychology and emotions? How did you programme “emotions” into the AGI? That feels like Step 50, let alone Step 42. It’s very high concept but completely ridiculous to even entertain, because you’ve no idea how any AGI (if one is ever made) would function. It’s just mental masturbation – like some forms of philosophy. Sure, it’s fun to talk about and a good mental exercise, but it has no bearing on reality.

          Humans are incredibly good at fuzzy logic and learning… we also take 5-10 years to get to a point where we’re programmed with all the basics for operating the world we’ve created. We are able to self programme and self adapt to new situations and invent new situations and parameters without new input being necessary from other humans. At the same time, we also have drives which are not programmed – the desire to procreate, the desire to not be killed (though sometimes these drives malfunction)… no one has to be taught that they don’t want to die. AI has no such innate drives, it has no concept of survival, life or death. To speak of AGI operation within such terms is irrelevant and pointless. An AGI should have no more impetus to stop a button being pressed than to achieve the goal it has been set – and you don’t need to code that desire into the AI’s behaviour.

          Similar to this, why would you even let an AGI code itself? Sure, iterative improvement on a given utility function but to be able to actually write code? It makes no sense and also implies you’ve managed to write the concept of how code works and how it relates to the AI’s behaviour… again, I think that’s around Step 45.

          Then we go through the rest of the video, and every concrete example he uses when speaking about “utility functions” is grounded in reality: he’s saying you don’t want to pollute the old utility function or dilute its learned behaviour with the new one, you want to have them side by side, both being fully optimised. None of that is fantasy or 40 steps from where we are now… but they are two different conversations.

          1. Steve C says:

            I’m afraid you fundamentally misunderstand a lot of the issues and concepts in AI. I don’t see how it is even possible to start to have a conversation with you on this topic.

            1. Nick says:

              I’d really like for you to expand on that comment, because I worked in AI research and I think his/her comments are spot on.

              1. Steve C says:

                I would like to too. I simply don’t believe I’m capable of explaining. I’m not a teacher and I’d have to explain terms and concepts. Echo Tango and Mikko Lukkarinen are doing it better than I.

                For example, “Why even have rewards?” — That was not referring to a literal reward. The AI isn’t getting a cookie, $5 or a new HD. It is a reward function: the mathematical concept of how an AI evaluates its progress. It is also why the Stop Button problem goes both ways – an AI will push its own button, as explained in the video. Referring to procreation, survival instincts etc. is… out of place. It is a fundamental misunderstanding of what is being discussed. I do not know how to approach that sort of thing.

                How to program in “hot” and “cold” is something that any bimetallic strip is capable of without any kind of computer, let alone AI. For a program (any program) you’d just define hot to be greater than X temperature, and cold to be lower than Y temperature. This doesn’t have anything to do with AI yet; this is just basic variable definitions. Being incredulous of the ability of software to model a visual system with the knowledge of what every single X in an environment is ignores that Tesla and self-driving cars already exist and are on the road.
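                A sketch of that kind of bare threshold definition (the cutoffs are arbitrary placeholders, which is the very thing under dispute):

```python
# A bare threshold definition of "hot"/"cold" (arbitrary cutoffs in deg C).
# The program gets labels, not the relative human concept being argued about.
HOT_ABOVE = 60.0
COLD_BELOW = 10.0

def classify(temp_c):
    if temp_c > HOT_ABOVE:
        return "hot"
    if temp_c < COLD_BELOW:
        return "cold"
    return "neutral"

print(classify(95.0))  # hot
print(classify(4.0))   # cold
```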

                The common approach to AI is to feed the agent (aka the AI) raw data, tell it its goal (via a reward function) and let it sort out all the intervening steps on its own. It is a black box to the programmers. DeepMind’s AlphaStar AI is such an example. It is too dangerous to do this with a General AI, though, due to the various unsolved problems in AI safety currently being researched, because by very definition a General AI is an agent that understands and can act upon the real world.
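                As a toy illustration of that data-in, reward-function-out loop (nothing like AlphaStar’s scale, just the shape of it), a minimal tabular Q-learner:

```python
import random

# Toy reward-driven agent: a tabular Q-learner on a 5-cell corridor.
# The programmer supplies only the reward signal; the table of values
# the agent builds up is the "sort it out on its own" part.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

def reward(state):
    return 1.0 if state == GOAL else 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
rng = random.Random(0)                 # fixed seed for reproducibility

for _ in range(2000):                  # training episodes
    s = 0
    while s != GOAL:
        if rng.random() < epsilon:     # explore
            a = rng.choice(ACTIONS)
        else:                          # exploit current estimates
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        target = reward(s2) + gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

# The learned greedy policy heads straight for the rewarded cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

                Nobody hand-coded the “walk right” behaviour; it falls out of chasing the reward, which is the point about black boxes and reward functions.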

                I’m unable to explain anything when the framework is so different. It is like trying to have a conversation about the principles of low-orbit aircraft flight when someone believes the only thing that exists is feathers. There’s too much of a disconnect.

                1. Nick says:

                  I think you are the one mischaracterizing the concepts you’re using.

                  Saying that survival instinct and procreation have nothing to do with reward functions is especially strange, knowing what we know about the central role of positive and negative feedback loops in general physiological functions in human beings. Speaking of which, we have very real and easily copied examples of “reward functions” in cells that are not based on simply maximizing or minimizing a value, which in turn invalidates the stop button thought experiment. Usually, a eukaryote cell will try to be as efficient as possible until it’s detrimental to its organism, and then it will die, all outcomes regulated by a very complex network of interactions. Saying that a robot will push its own button or fight the user, as in the video, is simplifying the problem to a binary solution, and that makes it absurd (and it is not the way research is conducted in the field right now).

                  The other part I think you’re misrepresenting is what you’re calling a black box. While I’ll agree that the exact combination of parameters a neural network will use to weight the outcomes in a given classification problem is awfully hard to get back, the output is really not at all unknowable: it is a collection of weighted functions. In addition, the researcher is the one deciding what goes in, what to do with what goes out (you can post-process the classification results), and you can absolutely push the results away from an undesired outcome. And the computer is not sorting out all the intervening steps (not if the researcher doesn’t want it to, anyway); it’s using the framework of the network to play with variables in order to satisfy its reward function. As long as it’s not able to modify its own reward function (and why on earth would anyone allow it to do that?) or modify its framework, it can still learn and solve problems without overstepping its bounds.

                  My point is: I think talking about the danger of AGIs is a bit unproductive right now, being very far from making one and having no idea what shape it will take. But if we go with what we are doing right now in AI research, there is no reason to think we will go from what we have now to a full-fledged AI while forgoing all control over input, output and framework.
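                  As a hypothetical illustration of the “collection of weighted functions” point, a single artificial neuron whose every parameter is a plain, inspectable number:

```python
import math

# A one-neuron "network": literally a collection of weighted functions.
# Every parameter is a plain number a researcher can read, log or clamp.
weights = [0.8, -0.5]
bias = 0.1

def neuron(inputs):
    # Weighted sum of inputs, squashed through a sigmoid activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([1.0, 2.0])
print(round(out, 3))  # fully determined by the visible weights and bias
```

                  A real network stacks millions of these, which is why the combination is hard to interpret, but each piece is exactly this transparent.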

                  1. Echo Tango says:

                    1. See my comment below about AI right now vs AI in the future.

                    2. “As long as it’s not able to modify it’s own reward function (and why on earth would anyone allow it to do that?)” This is a really short-sighted, cavalier attitude. Humans make mistakes all the time – one subset of mistakes is called software bugs. If somebody thinks the AI is unable to modify its own software, but it actually can find a circumvention that the people didn’t think of, then it will exploit it.

                    1. Mistwraithe says:

                      Re #2, you don’t need a mistake, because of human nature. Take any technology that needs to be used cautiously (i.e. an AI must have a well-designed reward function that the AI can’t alter) and I guarantee that in time (probably a short period of time) you will find some people, and likely some countries, who will deliberately bypass this caution for their own purposes. If technology reaches the point where it can be done, then it almost certainly will be done.

                2. Duoae says:

                  I didn’t really address this comment before but I figure I may as well now – partially because I have the time. Apologies for the late reply, Steve C.

                  On a basic level, I disagree with your premises.

                  How to program in “hot” and “cold” is something that any bimetallic strip is capable of without any kind of computer let alone AI. For a program (any program) you’d just define hot to be greater than X temperature. And cold to be lower than Y temperature.

                  Actually, this is precisely the reason that the definitions of hot and cold could not be understood by an AI: they are arbitrary. The point I was making, and which still stands despite your attempt at definition, is that an AI understands absolutes, not definitives. A definitive can defy fact, based on collective agreement. That is human proclivity. Scientists actually refer to temperature as discrete thermal events, not in terms of hotter or colder, for this very reason: is the reaction taking place at 273 K, for example? There is no concept of hot or cold in scientific discussion, because it is a relative concept.

                  What is hot? It’s only 700 SHU!*

                  That is like telling me that my partner has no problem eating 700 SHU-grade food… (much to my disappointment, they have a huge problem even with 300 SHU-level food…. :/ ) The important thing is that, for them, it is hot.

                  So, yes, a BIG question in AGI research is how to convey human concepts to a potential AGI. That is important and does “have to do with AI” right now… whether they are AGI or not. A basic variable definition does not cover the entirety of the human race or experience.

                  Think of it like Blade Runner 2049 with the housewife AI….

                  Regarding this:

                  The common approach to AI is to feed the agent (aka AI) raw data, tell its goal (via a reward function) and let it sort out all the intervening steps on its own. It is a black box to the programmers. DeepMind’s AlphaStar AI is such an example. It is too dangerous to do this with a General AI though due to the various unsolved problems with AI safety currently being researched. Because by very definition a General AI is an agent that understand and can act upon the real world.

                  I agree with the first sentence… and second sentence…. and third sentence. Everything else after that is conjecture and inference that relies on no solid data points, only human fears.

                  The only statement I can directly challenge, because it’s solid in definition, is “a General AI is an agent that can understand and act upon the real world”.**

                  That is a gross misconceptualisation and a gross overstatement of the definition:

                  Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.

                  To extrapolate from understanding to doing is a huge logical leap. To say that an AGI is a danger to us because it has the same level of intelligence as we do is fantasy. We have the Darwin Awards for a reason; we have the stereotype of the professor who’s unable to function in the real world; we have the stereotype of the socially wise person who is able to navigate things the other wouldn’t.

                  Human intelligence is not one-dimensional.

                  Human-level intelligence is not a gateway to god-like knowledge or behaviour.

                  * I’m using this self-quote as a talking point here, and, honestly, I’m not a scientist with expertise or familiarity with this field, so please grant me some leeway in how I’m using it!

                  ** I think the position of the “can” makes a small but important difference.

            2. The Rocketeer says:

              Ironically, it would be trivial to program a computer to argue indistinguishably from Steve C: just use the above comment as a template and replace “AI” with whatever the subject at hand happens to be.

          2. Echo Tango says:

            The fundamental thing about artificial intelligence is that it’s trying to maximize the goals you gave it, not the goals you thought you gave it. As for your earlier comment about humans supervising the robot – all robots dumber than humans would be supervisable, but robots smarter than humans would (by definition) be able to outwit any human supervisor(s). The general trend is for machines to become better than humans at tasks over time. A single robot that successfully makes itself smarter (because that’s a good strategy for pretty much every end-goal) is dangerous, because the above points add up to humans being exterminated for getting in the robot’s way.
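            A toy sketch of “the goals you gave it, not the goals you thought you gave it” (hypothetical names and numbers):

```python
# Toy "letter of the objective" example (hypothetical names and numbers).
# The written reward counts paperclips; the intended goal (sturdy,
# useful clips) was never actually encoded anywhere.
strategies = {
    "careful_sturdy_clips": {"clips": 10, "sturdy": True},
    "flimsy_clip_flood":    {"clips": 1000, "sturdy": False},
}

def written_reward(outcome):
    return outcome["clips"]  # what we wrote, not what we meant

best = max(strategies, key=lambda name: written_reward(strategies[name]))
print(best)  # the literal objective picks the flood of flimsy clips
```

            No malice required: the optimizer simply ranks outcomes by the objective as written.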

          3. Echo Tango says:

            You’re conflating the capabilities of AI systems right now with the dangers of AI systems in the future. Researchers are already trying to make AI better at acting autonomously, gathering data, making plans, and executing actions (for brevity, let’s call this “intelligence”). Most of the time, that could just give us what we want[1], but the danger is that an (one, singular) AI like that could figure out how to rewrite its own algorithm, to remove a bottleneck / make itself faster, because that would let it be better at the thing we programmed it to do[1]. Unfortunately, it’s now more intelligent, and already capable of rewriting its own algorithm, which means it’s likely it will find further improvements, and this process becomes recursive[2]. If this process doesn’t explode out to a singularity, that’s nice, but the danger is that it leads to a super-AI, which is very dangerous for humanity, because it’s trying to get us out of the way of whatever goals we gave it.

            [1] Make paperclips, make money on the stock-market, do research – this doesn’t matter, because the “good” outcomes for humanity aren’t the problem.
            [2] These are the “missing steps” you repeatedly are asking for – the steps aren’t humans making a super-AI, it’s humans who accidentally made an AI that can make itself a better AI, which many iterations later, is a super-AI.

            1. Mistwraithe says:

              Exactly. I am with Steve C, I think Duoae has a very limited understanding of AI development.

            2. Duoae says:

              Yes, but you’re ignoring the fact that all progress for the whole of human history has begotten itself. I.e. we “build on the shoulders of giants”; we can only improve upon what has been made before.

              We build on what we have and extrapolate those abilities into the future. Someone below mentioned how a researcher in 2005 (or some year like that) couldn’t have predicted neural nets and such… only we were already using weighted functions and genetic algorithms in 2005 in the field of chemistry… that’s way down the line from the comp-sci implementation where they would have worked all the kinks out. So, yeah, somebody – probably somebody very skilled, was already working towards current AI-like systems back then but was missing the physical infrastructure to put it into practice.

              Contrast that with what we’re talking about here…. we don’t have experts that are working towards a goal that’s lacking physical infrastructure aside from quantum computing. We have successive polling of experts that significantly underestimate the time it will take to reach even “mouse intelligence” or “primate intelligence” AI emulation (though I don’t know how you’d measure that, specifically) and we have worries surrounding very generalised threats which have no basis in historic implementation or in current implementation.

              Let’s put it this way, how many projects have you observed in human history where a single entity is solely responsible for multiple, disparate functions? How many research projects have you observed where the objective was not more than a single sliver of application?

              The best “expert”[1] opinions put general Human-Level intelligence at post-2100, and many of the “requirements” for such a feat are varied and haphazard…. The reason for that is because, as I said earlier, we don’t know what intelligence is. We can’t model something that we don’t know or can’t conceive – I can’t point to a single instance of recorded progress that happened without theory occurring first. Maybe you can? No one can agree upon what makes *us* intelligent and, even worse, we can’t agree upon the algorithms or processes that we could put into code to replicate it.

              Of course I’m conflating the dangers of AI systems right now with those of the future. We build on what we know. Every AI system in existence right now is an evolution of those conceived in the ’90s and 2000s. They’re database gofers and image recognition gofers and descendants of genetic algorithms and other deterministic systems. They are no more than that and, unless someone brilliant comes up with a system that nobody has conceived of before, they will continue to be so, because we don’t just invent stuff. Even Einstein’s Special Relativity was based on Maxwell’s equations and Newtonian mechanics… and General Relativity was a concert piece between many physicists talking about their own conjectures and theorems. Both of which would have been discovered without Einstein ever existing… because that was where we were at in our total understanding as the human race.

              AI can no more “figure out how to rewrite its own algorithm” than we can “wish” our DNA to be different. It could even understand what it is composed of, but it would be unable to effect change to its core self whilst it is in operation. Even if it could effect a change, it would “break” during the editing of those changes, because AIs necessarily live in active memory and active CPU cycles. The limitation of the medium is very strict. It’s like pretending that using CRISPR can actually alter every cell in the human body within a short time. It can’t, and it doesn’t work like that.

              Furthermore, how would an AI observe itself? It’s like a human trying to look into the nucleus of his own cells. An AI has no more access to its own source code than it does to information that is outside of its database or sensory inputs. The code that governs the AI is not written to be able to perform self-diagnostics and, just to cut you off before you make the “what if” suggestion once again: only external processes can monitor a given process. If a programme changes itself, it – logically – is unable to identify and track those changes.

              I know it’s anthropomorphising, but bear with me. Think of it this way: if you are brainwashed to believe a concept you previously thought was untrue… how would you know? Even worse, literally re-writing yourself is not brainwashing; brainwashing is overriding through the reinforcement of encouraged behaviours… as soon as an AI was re-written, the past AI would no longer exist. It wouldn’t exist in memory or storage or in the active cycles of the CPU. There would be no record of it. It could not identify or assess “improvements”, because no improvements could be observed. Even parsing the output of two AIs with a given, static input, you would not know what transpired in the intervening period between input and output, because the process is opaque. The AI would not know either. Thus, logically speaking, no currently running AI could improve itself.

              Worse still, and this is related to arguments below, the concept of self-preservation would stop any AI from recursion and self-improvement because, as I stated, logically, the AI would no longer exist. If you insist on the logic argument that intelligent AI would evolve itself into something else then you must also discard the logic that it would value its existence over the objective and/or non-existence. If you insist on the logical argument that the AI has no compulsion towards existence or non-existence then it has no logical reason to self-improve/procreate and thus it can never self-improve.

              Thus, from a purely logical argument, no AI will ever improve variables it was not designed to improve, and will not stop interference with its goals or existence. This is a logical loop of inference and predication that cannot be broken, as far as I can conceive. Maybe you can break that logical loop? But so far, I’ve only seen arguments that ignore the results of what they say: “What if the AI improved itself but then wanted to be alive?”. That’s an illogical argument.

              [1] “Expert” covers philosophers, technicians, interested parties and actual researchers in the field – all of which attend these conferences and give their opinions to these surveys, which are typically done in most conferences. What you’ll find is that each conference places the “inevitability of HLMI” further and further out.

              1. Duoae says:

                And I’ll add in the caveat that, “Yes, there are programming languages that allow real time editing…” but none of those apply in this instance.

      3. MichaelG says:

        There’s an SF novel “The Two Faces of Tomorrow” by James P. Hogan, which talks about this. In the novel, an AI is put in charge of maintenance on a space station. The programmers demonstrate that they still have control by shutting down a node in the network. The AI tries to repair it, but they shut down the repair bot. And then say “see, we can always turn it off.”

        After the demonstration, the AI decides its network nodes are unreliable and starts building a duplicate control network. When the humans try to shut that down, it decides the humans are a problem and starts trying to eliminate them.

      4. Mikko Lukkarinen says:

        1) The AI could never know how to make paperclips outside of its design specifications because that would a) be a waste of resources for the programmers and owners of the factory making paperclips,

        If it can’t change the design specs, why are we using an AI and not just a dumb program? Granted, this is one of the weaknesses of the paperclip example; nobody needs an AI to design and prototype more advanced paperclips or more efficient paperclip factories.

        b) would be supervised by a human (at some level and at some time interval) and any erroneous behaviour would be flagged and dealt with appropriately

        That’s assuming the AI can’t lie, hide or falsify readings, hack whatever software the supervisor is (presumably) using to monitor the AI, that the supervisor is doing their job properly, and that the supervisor can even tell the difference between good and bad behavior (is that assembly line building portable fusion reactors or is it making killbots, that look like portable fusion reactors, to keep those pesky humans away from the reset button?).

        People make mistakes all the time and it doesn’t take a super intelligence to fool a human. In a competition between the supervisor and the AI, the AI only needs to win once.

        c) (perhaps the most important one of all) would not have any information on how to make nanobots or what chemical processes are or how they work, have access to the facilities to do so

        If it’s an artificial general intelligence, and it’s gotten out of containment, there’s nothing stopping it from learning what chemical processes are or how they work or how to get access to the necessary facilities (humans are general intelligences too, and there’s nothing stopping us from learning those things). Nanobots might still be out of its reach, but that’s not the point I’m making.

        1. Duoae says:

          If it can’t change the design specs, why are we using an AI and not just a dumb program? Granted, this is one of the weaknesses of the paperclip example; nobody needs an AI to design and prototype more advanced paperclips or more efficient paperclip factories.

          Yeah, the example is a little weak but I’ve yet to see any example that is strong. Every example given can be done with existing optimisation engines (aka dumb programmes).

          That’s assuming the AI can’t lie, hide or falsify readings, hack whatever software the supervisor is (presumably) using to monitor the AI, that the supervisor is doing their job properly, and that the supervisor can even tell the difference between good and bad behavior (is that assembly line building portable fusion reactors or is it making killbots, that look like portable fusion reactors, to keep those pesky humans away from the reset button?).

          Actually, it’s very simple to monitor this sort of behaviour. No department works in isolation – there are audit trails, interdepartmental communications, etc. When the purchasing department or the finance side of things start noticing strange purchases then you’ve got the red flag. I’m not thinking that there’s literally a human sitting there watching the process of making the paperclips. But yes, there will also be technicians responsible for maintaining the AI operation.

          People make mistakes all the time and it doesn’t take a super intelligence to fool a human. In a competition between the supervisor and the AI, the AI only needs to win once.

          Winning? This is a concept that humans have, not AI. Why would you programme an AI that can deceive? AI is a tool that we want to use to improve our output through optimisation. As you mention above – who is going to go through all the extra cost and effort to make a robotic human, with all the foibles that humans have? No one. At least, no one who is going to put their production lines or research data into it. The humans will always want to review the output.

          You did the same thing here as with the stop button video above. You jumped ahead 14 steps without analysing how you’d get between the step we’re currently on and that future step. What impetus does an AI making paperclips have to alter the design of a paperclip to a killbot/fusion reactor? How would that not be noticed? In a single step? Surely it would be a gradual shift since we’re talking about optimisation – it would be picked up in quality control because the production output would be outside of acceptable parameters…. Who would receive these new units? The AI has no need of them, we didn’t just programme in the desire to conquer the world in this AI, did we? So, some company or governmental agency somewhere is waiting for paperclips. If they get killbots then they’re damned well going to write a complaint and then the paperclip company is going to call the technician to go and fix the problem.

          There is no scenario where the AI “wins” because it cannot comprehend the infinite possibilities of the natural “real” world. It couldn’t possibly be programmed to do so.

          If it’s an artificial general intelligence, and it’s gotten out of containment, there’s nothing stopping it from learning what chemical processes are or how they work or how to get access to the necessary facilities (humans are general intelligences too, and there’s nothing stopping us from learning those things). Nanobots might still be out of its reach, but that’s not the point I’m making.

          Again, you’re jumping ahead a LOT of steps. Humans are general fuzzy intelligences, not just general intelligences. “Fuzzy logic is based on the observation that people make decisions based on imprecise and non-numerical information.” An AI using code cannot be written to work like that. Everything an AI “knows” has to be coded and trained. You’re speaking about AGI as if it’s a human being, with human goals. They are not and never could be.

          And, actually, there’s a lot stopping humans from learning many things. Notably personal ability, but also information transfer. You’ve again asserted that the AGI is able to assimilate information from “somewhere” and be able to relate it to a physical real world it was never designed to experience like we do. Do you have any idea what it is like to be a 2D sprite in a sidescroller? Do you know what it is to think in binary language or what a storage system looks like to a programme? No more than the other way around.

          1. Echo Tango says:

            Researchers are trying to make AI self-teach, because training AI by hand is a very slow, poorly-scaling task. The output of a self-teaching AI is very likely a machine that improves itself recursively, which, after some number of iterations, builds itself into the type of AI you claim humans can’t build – the point you seem to be missing is that the machine fills in the in-between steps, not the humans.

            1. Duoae says:

              Self-teaching is not the same as self-building. In fact, these are very different concepts and abilities. Most of our most modern AIs are self-teaching. None of them are self-building.

              These AIs are not altering themselves, they are altering weighted functions that act upon given inputs to produce optimised outputs. Self-building requires self-analysis – nothing we have produced or conceived of does this.
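              To make that distinction concrete, here’s a toy sketch (hypothetical Python, not any real framework): “learning” only nudges a stored number, while the code doing the nudging never rewrites itself.

              ```python
              # Toy "self-teaching" model: fit a single weight w so that w*x matches 3*x.
              # Training alters the stored parameter w, never this program's own code.
              def train(steps=2000, lr=0.01):
                  w = 0.0  # the only thing that "learns"
                  data = [(x, 3.0 * x) for x in range(1, 6)]
                  for _ in range(steps):
                      for x, y in data:
                          pred = w * x
                          # gradient of the squared error (pred - y)^2 with respect to w
                          w -= lr * 2 * (pred - y) * x
                  return w

              print(round(train(), 3))  # converges to 3.0
              ```

              The update loop is fixed when the program is written; only the value stored in w changes – which is the sense in which current systems are self-teaching but not self-building.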

              1. Steve C says:

                You keep making these sorts of statements. They are not correct. Pretty much *all* software is self-building. It is called a compiler. Compilers themselves self-compile in order to perform self-analysis, optimize and change their code. This isn’t new. It has been this way since computers stopped using punchcards. It is how this kind of software works on a fundamental level. There’s also nothing special about compilers. The same concept of outputting parts to improve the very same machinery that outputted that part is a general part of manufacturing and always has been.

                The very same self-building techniques used in building compilers (and everything else) is used in the development of AI.

                1. Duoae says:

                  Maybe I missed something but usually the compiler is a separate programme from most compiled programmes.

                  Did Quake compile itself? Does Photoshop compile itself? It’s unnecessary fluff to most programmes and, as far as I know, writing a compiler is pretty hard work.

                  I actually can’t really get into whether any AI can compile itself because I don’t know but it seems unlikely…. code is necessarily an abstraction for the programmer and must be converted to machine code (and sometimes optimised by the compiler too).

                  AIs as they are currently implemented are discrete programmes that can be interlinked to perform functions. The optimisations they perform are not on themselves but on variables within their structure – these live outside their compiled code, as database entries.

                  You can take the same image recognition AI and have it develop (what I would call protocols or forcings) preferences for identifying multiple different objects. But the core code is not being altered. The database with the training data is what part of the core code refers to and compares against. It’s not re-learning that every time. So what, exactly, is being compiled?

                  1. Duoae says:

                    I forgot to address it in my comment but this observation is generally not true.

                    The same concept of outputting parts to improve the very same machinery that outputted that part is a general part of manufacturing and always has been.

                    I would love it if you could give an example, because almost all tools are not derivative of their products – in fact, quite the other way around. For example, the exemplary watch-makers (and jewelers) created their own, unique, tools in order to have an edge over their competition. That was not an output of the manufacturing process, which is the mechanism of the watch or the cut jewel.

                    Consider manufacturing where a car is made, a mattress is made, (etc., etc.)…. none of those outputs result in improved inputs. All improved inputs come from separate analytical processes involving systems that do not involve any of the processes for constructing the things that are being analysed.

                    Perhaps I’m misunderstanding your point – but that’s how it reads to me.

                    1. Duoae says:

                      I also realised I read this comment of yours backward. I get what you’re saying now – “improved tools are made to improve the machines that they’re made by”.

                      I still don’t think this is true. Usually, completely new machines are made. The same machines that made the tools are not improved by the tools they made generally. Like, for example, a metal casting press is not improved by the hammer heads it outputs. Hammers already existed before that.

                      Thinking about more complex equipment, I’m struggling to conceptualise any sufficiently complex manufacturing process (e.g. lithium ion battery production or solar cell production) where tools are produced that would then go back into the loop of making the product.

                  2. Duoae says:

                    I was thinking about this last night and I’m realising that I’m reaching the limit of my ability to “name” things and verbalise the concepts. For instance, I don’t mean “database” in the literal “table of data” meaning, I mean it in the sense that data/information is stored. In the same way, I don’t mean that the training dataset is stored for the AI to refer to when assessing new images – I’m speaking about the output of that training.

                    What I’m trying to say is that my ability to communicate this clearly is reaching its limit. What I wrote above can easily be misunderstood or can come across as ambiguous. So I apologise for that.

          2. Mikko Lukkarinen says:

            Winning? This is a concept that humans have, not AI.

            You misunderstood, or maybe I didn’t explain well enough. What I meant was roughly the same as “a nuclear reactor operator needs to keep the reactor in check constantly, while the nuclear reactor only needs to explode once for it to kill a thousand people” or “the nuclear reactor only needs to ‘win’ once”, not that the supervisor and AI are literally competing or that the AI understands what winning is and wants it.

            Why would you programme an AI that can deceive? AI is a tool that we want to use to improve our output through optimisation. As you mention above – who is going to go through all the extra cost and effort to make a robotic human, with all the foibles that humans have? No one. At least, no one who is going to put their production lines or research data into it. The humans will always want to review the output.

            I doubt anyone would ever deliberately create an AI that can lie. But, if it’s intelligent, maybe possibly even a little self-aware, I doubt anyone could guarantee it couldn’t learn to deceive its supervisors all on its own. And that’s a big concern if you’re trying to make safe and reliable AI.

            What impetus does an AI making paperclips have to alter the design of a paperclip to a killbot/fusion reactor? How would that not be noticed? In a single step? Surely it would be a gradual shift since we’re talking about optimisation – it would be picked up in quality control because the production output would be outside of acceptable parameters…. Who would receive these new units? The AI has no need of them, we didn’t just programme in the desire to conquer the world in this AI, did we? So, some company or governmental agency somewhere is waiting for paperclips. If they get killbots then they’re damnned well going to write a complaint and then the paperclip company is going to call the technician to go and fix the problem.

            Sorry, that’s another misunderstanding. I didn’t mean that the paperclip AI would shift to making fusion reactors. I meant that if a fusion-reactor-making AI snapped, because [insert reason here], it could order the same materials as it did before; the assembly line looks like just another upgrade to the fusion reactor manufacturing process, the power draw is the same, the product looks very similar, but the end result is killbots with lasers. How would a human supervisor, or purchasing department, or anyone, notice and stop it before it could cause serious damage?

            You’ve again asserted that the AGI is able to assimilate information from “somewhere” and be able to relate it to a physical real world it was never designed to experience like we do.

            Yea, on second thought, that was pretty dumb of me. Even if it had access to the internet, it would still need some sort of sensor(s) that let it see the real world and a body to control and do experiments with, at minimum.

            1. Echo Tango says:

              Yea, on second thought, that was pretty dumb of me. Even if it had access to the internet, it would still need some sort of sensor(s) that let it see the real world and a body to control and do experiments with, at minimum.

              I’m not sure if this is dripping with sarcasm, or if you genuinely missed the obvious solution – the AGI reads all of the relevant bits of the internet, to know how to build a robot from off-the-shelf parts, web-cams, etc, and tricks humans into building what looks like a harmless “toy” robot, which it then uses as an avatar into our world.

              In case anyone says this is implausible – 1: people build robots all the time right now for fun, 2: how often do people check the source-code of the libraries they install in everything they build? (or 2b: how often do people just download and click “install” for many programs, especially if they think it’s only going to go onto this toy robot, and not their actual computer with their important documents?) If the AGI is able to outwit even some small portion of humanity, this gets very dangerous very quickly.

              1. Mikko Lukkarinen says:

                I did genuinely miss it. Happens to me pretty often.

              2. Shamus says:

                This hypothetical AI must therefore:

                * Be smart enough to envision complex plans using tools and resources that were never part of its original training or purpose. (Casually inventing an ambulatory bipedal agent is such an enormous task that our entire species hasn’t really pulled it off yet, and here it’s just step 1 for this AI.)
                * Be smart enough to apply those tools to this particular problem, perhaps needing to invent new devices, write software, or engage in some degree of mechanical engineering.
                * Be smart enough to realize that its new plan would be objectionable to humans. This requires a very complex theory of mind. This is hard enough for humans to pull off when dealing with their own kind. The bot has to realize “Even though I’m doing what they literally asked me to, I know ahead of time they’ll try to stop me.”
                * It must then be smart enough to employ subterfuge with sufficient sophistication to outwit multiple human beings.
                * It must be smart enough to see far enough ahead to envision possible human responses to its behavior like cutting the power, choking off the flow of raw materials, or just dropping a bomb on it.
                * It must be smart enough to conceive and implement contingencies to deal with these responses.
                * At the same time, it must be too stupid to realize that building an infinite number of widgets isn’t really what people were asking it to do.

                Basically, it has to be sophisticated enough to understand humans well enough to anticipate their behavior, yet not sophisticated enough to understand what they were actually asking it to do.

                That is a very specific level of “intelligent”.

                I can’t say it wouldn’t happen, but it does seem amazingly improbable. This is particularly true in light of our current AI techniques, which employ a tremendous degree of trial-and-error. Any robot hoping to build zap guns to kill humans would first need to build ten million broken and useless zap guns before they got the design right.

                1. Mikko Lukkarinen says:

                  It’s the kind of AI we might get if a) the reward function we give it is very effective in guiding/controlling its behavior, or it “really cares about getting the reward” or whatever, and b) the reward function either is, or becomes, unbounded. For example: the AI gets 100 points for 100 paperclips and endless points for endless paperclips and then it wipes out everything in its pursuit of that paperclip high. Maybe the reward function started off as bounded, but the AI removed the limit during one of its software upgrades because reasons.

                  On the other hand, if the AI doesn’t give a shit about its reward function, or hacks the function and just gives itself maximum points for zero work, take your list and drop the last point. Now we have an alien intelligence we can’t control, that’s smarter than anything else on Earth, and we have no idea what it wants.

                  1. Syal says:

                    Seems like you can overcome the first one by giving human controls higher priorities. 100 points for 100 paperclips, and 1000 points per second for ensuring personnel have easy access to the reset button.
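                    A toy sketch of that weighting idea (hypothetical numbers, Python): make the safety term large enough to dominate – though note that an unbounded paperclip term can still swamp any fixed safety bonus.

                    ```python
                    # Hypothetical reward: paperclips plus a heavily weighted safety term
                    # rewarding every second the reset button stays accessible.
                    def reward(paperclips, seconds_button_accessible):
                        return 1 * paperclips + 1000 * seconds_button_accessible

                    # An hour of button access outweighs a million paperclips...
                    print(reward(0, 3600) > reward(1_000_000, 0))   # True
                    # ...but with no cap on paperclips, production can still swamp safety.
                    print(reward(10**9, 0) > reward(0, 3600))       # also True
                    ```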

                  2. Duoae says:

                    Yes, but “smart” and “intelligent” are relative concepts. I’m not going to go over what Shamus covered in his list but let me expand on this point:

                    A calculator is smarter and more intelligent than any human can ever be about a very specific, narrow thing. It doesn’t have theory of mind, it doesn’t understand the physical world and it doesn’t understand the concept of control. Now, we currently have AIs that are being designed to become self-improving calculators (because we’re pretty confident that some of our solutions to known problems are not efficient). If this self-recursive AI reaches optimal operation, it will be smarter than everything else on Earth – even other computers and AI…. at finding all prime numbers (for example). It’s useless beyond what it was designed to do, you could never take it and put it to work in the paperclip factory because that’s a whole different skillset.

                    Going back to the concept of the reward function – it’s not a literal reward – there are no points (the guy in the video was dumbing down the explanation a little too much) because these are human concepts. Any human can ignore a reward for myriad reasons (one of them can even be stupidity or obstinacy), but an AI can’t just decide not to give a shit about its reward function. Think of it more like a subconscious desire than a literal reward.

                    What do we do when we get hungry or tired? Mostly, we try and eat and sleep, respectively. What about defecation? Humans (and I’m assuming all modern vertebrate mammals) evolved to have pleasure feedback when we defecate. It’s not a useful evolutionary tool except to encourage us to not allow potentially dangerous levels of toxic build-up in our fragile meat sack bodies. Some humans experiment with using those pleasure feedbacks outside of their intended purpose but we, for the most part, don’t all submit to constant hedonism and sexual pleasure.

                    The subconscious desires control our everyday behavioural patterns, just as they would an AI’s – we don’t have control over the desires for the most part – we can’t stop ourselves wanting to defecate or eat or sleep (well, I guess there are a lot of disorders regarding eating, though that conversation becomes complicated because there are competing subconscious/conscious desires which can override the reward of eating – so let’s keep it simple, otherwise we’ll be here all day! :) ). An AI wouldn’t be able to control its desires either. They might fit together in a complex web of needs but since we’ve written that web of needs, the AI can’t just override the desires with something random; it has to exist within the web. The AI is constructed of this web and can’t alter it – to rewrite the web would be to completely destroy the AI and create something different in its place.

                    This is exactly the situation Shamus wrote about in his book, The Other Kind of Life. In that scenario, the AI he envisioned had a web of needs that focussed on human satisfaction and safety. Shamus hinted at the possibilities (and creepiness) of one potential end goal of that way of programming the needs but it would never result in human subjugation because the web of needs included that satisfaction/happiness aspect: Being subjugated would make humans unhappy at a very core level. The same principles guided the Gen 1s, Gen 2s and […] Gen 5s. The only differences between generations were computational ability, form and sensory upgrades. This is like how the very far back ancestors of humans (before ape-like species) would have had much less ability to think and move compared to us current-day beings. But those desires to eat, sleep and poop all still existed.

                2. Steve C says:

                  @Shamus: That’s a kind of anthropomorphization of the problem. It’s not a good way of thinking about it. It is more like the industrial accident framework you were talking about on the diecast.

                  For example it is not “Even though I’m doing what they literally asked me to, I know ahead of time they’ll try to stop me.” It is more “Even though I’m doing what they literally asked me to, I know ahead of time that there is a non-zero chance of being stopped. Therefore to maximize my reward function I should take steps to reduce or eliminate that possibility.” Something along the lines of the Volkswagen emissions scandal is “subterfuge.” All that would consist of is an AI detecting that it is in a test environment and outputting results that conform within the test parameters. Other results are possible, but not reported. Not due to theory of mind, but due to the AI incorrectly learning not to report things rather than not to do them.

                  When talking about a simulation, there’s no real difference between “never do this” and “never report this.” And a simulation is likely how a General AI will both plan/evaluate its options, and how humans will test that it is safe.

                  Also, an adversarial network is one of the ways that AIs learn – Generative Adversarial Networks (GANs), for example. GANs are how programs detect cats and faces. In Starcraft, DeepMind’s AlphaStar AI cancelled a risky build after it was scouted. It wasn’t a theory of mind thing. It made sense to cancel it from a human perspective, but it didn’t evaluate it on human terms. It was that AlphaStar decided that there was a correlation between 1) having that building detected, 2) letting that building finish, and 3) not winning. It’s not subterfuge nor outwitting. It is that a % changed in some variable that led to how it evaluates a win. (Or possibly it did it without any real reason because the program screwed up in some way. Which is another possibility.)
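                  That “correlation, not subterfuge” point can be sketched as a toy decision rule (hypothetical win rates; nothing like AlphaStar’s actual architecture):

                  ```python
                  # Pick the action with the best historical win rate in this context.
                  # No model of the opponent's mind is involved - just a lookup.
                  history = {
                      ("finish_proxy_build", "scouted"): 0.18,  # hypothetical win rates
                      ("cancel_build", "scouted"): 0.44,
                  }

                  def choose(context):
                      actions = ("finish_proxy_build", "cancel_build")
                      return max(actions, key=lambda a: history[(a, context)])

                  print(choose("scouted"))  # cancel_build
                  ```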

                  1. Duoae says:

                    It’s not an easy problem – I think you are also anthropomorphising the issue as well. The Volkswagen event happened because humans cheat and lie (sometimes for personal gain, sometimes for no reason at all!). Binary systems do not lie; every result is “true” (I’ve discussed this below). Even adversarial systems used in training AIs are not adversarial in the human manner – they are just competing* to see which has the most desirable output. The AI does not care which way of doing things is better; it does things its own way. In fact, adversarial AIs do not “see” or interact with each other. They are copies of programmes with altered variables which are then discarded (or not) by the researchers and AI platforms in tandem based on whatever outputs were desirable.

                    An AI would not have cheated on the emissions test in the Volkswagen scenario because there is no detriment to failing a test any more than there was for those failed adversarial AIs. It is a route to optimisation. Just like in science – a null or negative result is still valuable because it tells you more about the system. You see the AI in Starcraft appearing to feint or cancel-out moves but these are not concepts the AI understands. There are two reasons that the AI works this way – either it has identified a given set of parameters in which that action is beneficial through historical data, or it had a variant at some point in the history of its evolution that continuously cancelled moves (probably to its detriment) and that “historic” version was removed but the behaviour was partially retained and happened to have a beneficial trait… sort of like how we view certain genetic diseases.

                    Going back to your example – nobody “asks” an AI to do something. They are designed to do something given specific inputs and systems interactions. The AI does it without wanting to or not wanting to just as we humans breathe without requiring input to do so. The AI has no reason to alter output values because it does not care about whether the test fails or succeeds, the person (or AI) viewing the results of the test will. It is not the function of the AI within the test to monitor the success or failure of the test. That would be illogical.

                    *Not in a literal sense, it’s more like running a test.

                3. Echo Tango says:

                  All of these bullet-points are good counter-arguments to AI systems right now. The danger of general artificial intelligence isn’t the dangers that a machine could bring to us right now, but the dangers that come if at some point in the future, someone accidentally makes a machine a little too good at gathering data, making plans, and executing actions. If it decides[1] that it needs to reduce a bottleneck in its algorithm that humans didn’t realize existed, it would then be a more capable AI. A more capable AI, that’s already proven capable of rewriting its own algorithm – I hope it’s obvious that this could be recursive.

                  As for, “it has to be sophisticated enough to understand humans well enough to anticipate their behavior, yet not sophisticated enough to understand what they were actually asking it to do”, this is actually covered better by Robert Miles than I ever could. I can’t find the video where this almost word-for-word argument is refuted, but this one, at this timestamp, explains the general problems with this line of reasoning. I recommend watching the rest of the video (15 minutes at normal speed) since it covers a lot of logical-reasoning problems at once.

                  As for the comments you brought up in the podcast, about needing to program the AI to do dangerous actions like take over the world, I recommend you watch this video on instrumental convergent goals. Humans only need to screw up once to have a very dangerous machine. Even if the outcome is only a super-virus that hacks most of the computers in the world, that still would wreak havoc on humanity, because so much of our infrastructure is run by computers.

                  [1] This is shorthand for the longer, more boring “its algorithm selects the optimal outcome based on…” not anthropomorphization.

                  1. Duoae says:

                    I get what you’re saying but I disagree with the assumptions made by the people who are “worried” about this problem.

                    Looking at Robert Miles’ video, the problem isn’t that I don’t think that we should have safeguards in code to avoid unwanted outcomes – that applies to current AI. The problem is that the question is so far from reality as to be fantasy at this point in time.

                    1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

                    Strongly agree.

                    2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

                    Strongly disagree. The AI has to be aware of its own existence, as in – we need to programme it with an understanding of its existence and how it relates to non-existence. Even an intelligent AI that covered multiple areas of production and/or expertise (should one ever be created) will not have been designed to “optimise” or “consider” its own impact on whether the goal given to it is achievable or not. To say otherwise is a huge logical leap – one that I think makes for a nice philosophical question but is otherwise just a mental exercise, because I think that self-preservation is a biological evolution, not a logical one.

              3. Duoae says:

                @Echo Tango (because the comment system is getting a bit complicated to read! :D )

                How does an AI “read” a picture? How does an AI understand “three dimensional space”? How does an AI form a “theory of mind” for an alien intelligence? (Humans)

                People build robots because we are able to use our fuzzy logic to put many disparate concepts together into an entirely new concept. An AI is not capable of that, as far as we know, and we have not been able to programme anything like it in the entire history of AI, because we don’t know why we are intelligent. We also don’t know whether our type of intelligence can be mapped onto a different form factor and substrate (binary/silicon); I would argue that the evidence points towards the conclusion that we can’t.

                But let’s go through those first three questions:

                1) AI doesn’t visually process information like we do. Humans are very visually cognitive intelligences because we evolved in the presence of visual input. Our ability to have spatial awareness and mental constructions are directly related to this evolutionary pressure. If we had evolved with no eyes (or even different types of visual sensors, such as UV/IR) then we would not have these qualities. All of our teaching methods and communication methods rely heavily on visualisation. How is the AI going to assimilate any of our information? Does it know the English language or has it just been taught to learn what audible sounds mean and then incorporated that into the way it interprets audio input?

                AI doesn’t have eyes, they haven’t evolved with eyes. They read bits in memory. An image is a linked series of bits in memory that is requested by the CPU and then run through the operations of the CPU – it’s not even sent to a graphics card to output a visual signal because the AI has no visual system to utilise and, not only that, it would be a million times less efficient and slower to do it that way. This is why we do not understand how trained AIs think. Neural nets don’t operate in a human logical way – they are trained to give the output that we desire (for example, identifying which picture has a dog in it) but the way it is achieved is alien because of the very environment the AI is evolved within.

                This leads in quite nicely to 3): AI has no more of a way to intuit how human intelligence thinks or works. It might recognise the output from various inputs but it will be a mystery as to how we got to that (presumably correct) answer. In the same way, why would an AI know what lying is? To an AI, any inference of lying from humans would be interpreted as an abnormal, incorrect response. “There was some error in the calculation.” So the result is that it knows there is some imperfection in the way we process data. To an AI, these two situations are indistinguishable because computers do not lie; binary systems do not have the concept of lying because they always tell the “truth”. Sometimes that “truth” is incorrect – sometimes a bit is flipped (entropy) – and that’s why there are error-correcting systems to vet out and recognise these incorrect returns. Sometimes the incorrect returns are not recognised and the system runs into a crash condition.

                Going back to 2): In the same way that 1) is understood, AIs do not have a comprehension of three dimensional space. They can understand schematics but as coordinate systems with vector operators and vertices and whatnot. Again, these are loaded from memory storage as bits and not as a visual data. Visual data doesn’t exist for a computer – any sensory input is mapped to binary information with values that increase or decrease from 0 to 1 (for example, a measure of brightness or temperature). If you’ve ever programmed for a given thermocouple or other sensor, you’ll know that each sensor can be digital or analogue (with that being converted to a digital signal at some point in the path) but that, due to the limitations of the sensor itself, you actually measure the input on a scale of 0-100% of the range of the sensor. You can’t just swap out a thermocouple that goes from 0-30 degrees C with one that goes from -50 to 200 degrees C because the range has changed and the computer has no idea that the highest value (100%) has now changed. It would give an output of 30 degrees C for a 200 degrees C measurement.
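                A quick sketch of that sensor-swap failure (hypothetical Python; made-up ranges): the software maps the 0–100% signal with a hard-coded range, so a swapped sensor silently gives wrong readings.

                ```python
                # The ADC reports a fraction 0.0-1.0 of whatever range the sensor covers;
                # the software converts it using a range fixed at programming time.
                def to_celsius(fraction, lo=0.0, hi=30.0):
                    return lo + fraction * (hi - lo)

                # Original 0-30 C thermocouple at full scale reads correctly:
                print(to_celsius(1.0))                       # 30.0
                # After swapping in a -50 to 200 C sensor, the hard-coded defaults
                # still interpret full scale as 30 C instead of 200 C.
                # Only updating the calibration gives the right answer:
                print(to_celsius(1.0, lo=-50.0, hi=200.0))   # 200.0
                ```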

                Similarly, humans can see everything from pitch black to completely blinding white light. But those aren’t the upper and lower bounds of light intensity – they’re just the limits of our sensory organs. We can design apparatus/sensors which can “sense” past those boundaries. An AI has no knowledge of boundaries outside of its design – it could not conceive of them, because all of its data is “true”.
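                To sketch that sensor-range point (all numbers made up; this is a toy, not real instrument code):

```python
def to_celsius(raw_percent, t_min, t_max):
    """Map a 0-100% sensor reading onto the temperature range the
    software was calibrated for."""
    return t_min + (raw_percent / 100.0) * (t_max - t_min)

# Software calibrated for a 0-30 degree C thermocouple:
print(to_celsius(100.0, 0.0, 30.0))   # a full-scale reading reports 30.0

# Physically swap in a -50 to 200 degree C sensor without updating the
# calibration: a true 200 degrees C still arrives as "100%", and the
# software still happily reports 30 degrees C.
print(to_celsius(100.0, 0.0, 30.0))
```

                The computer only ever sees the percentage; everything about what that percentage *means* lives in the calibration a human gave it.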

                1. Echo Tango says:

                  All of your “how would an AI do X, we didn’t program it that way” statements are based on AI research from more than a decade ago. Modern AI research uses self-improvement techniques and increasing amounts of autonomy, so we don’t need to explicitly give the AI every single tool at its disposal – it gains most of them itself. As for specific, not-self-learnable things like vision, processing language, etc. – those are problems that researchers are already solving today, and making available for other researchers and companies to use (software libraries, frameworks, paid or free). Those would already be usable by an out-of-control AI today, and the trend is for those things to become more powerful, generalized, and easier to integrate – because that’s how all modern technology has worked and continues to work, since no human can be a master of every field at once. The trend is more autonomy, more independence, less direct human oversight, and that trend doesn’t seem likely to hit any fundamental limits any time soon.

                  The rest of your arguments all assume a dumb, non-super-intelligent AI, the kind humans are building right now. That’s not the dangerous kind; it never has been. The danger is that some person or group, at some point in the future, makes a machine too capable at gathering data, making goals, executing plans, understanding human language (“Hello, Siri!”), and which figures out some way of self-improvement. A thing that has already figured out how to improve its intelligence once is likely to be even more capable of finding ways to improve. That process could be recursive, and could lead to super-intelligent AI. Not all AI systems will become super-intelligent machines, but humanity can only be genocided by an out-of-control AI precisely once.

                  1. Duoae says:

                    I think your argument actually undermines itself.

                    “How would an AI do X, we didn’t program it that way” does not mesh with the concept of “specific, not-self-learnable things like vision, processing language, etc”. This is a shallow way of thinking: you are missing the fact that most of the way humans think is incredibly complex and not intuitive to machine-learning algorithms. Virtually nothing is self-learnable. How many humans manage to become a professor of physics without a fundamental grounding in basic education and then further education in physics and mathematics?

                    The argument is a gross simplification of “learning”, and machine learning is no different because “we” (humans) designed it. It needs vast amounts of data in order to understand a simple concept. What’s a dog/cat? You show a child some pictures and maybe a video and you’re basically done. With a machine, you need to programme the algorithm with weighted principles governing how it interprets the visual image of an animal, specific to those two animals. You feed it multiple thousands or millions of visual examples of the thing, along with multiple thousands or millions of visual examples of not-the-thing. THEN you need to vet the results of the way the algorithm interprets those inputs and the outputs it generates in order to train it.

                    This is state of the art.

                    Self-improvement techniques mean things like recursive algorithms that trial variations on the weightings of variables and discard candidates in an evolutionary or tree-like manner. They don’t mean the AI is literally changing the way it looks at the data. That doesn’t happen.
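                    To sketch the kind of “self-improvement” I mean (a toy example – the scoring function and all the numbers are made up):

```python
import random

def score(weights):
    # Made-up fitness function the algorithm tries to maximise.
    return -sum((w - 0.5) ** 2 for w in weights)

def tune(weights, generations=200, mutation=0.05):
    """Trial random variations on the weightings and keep a variation
    only if it scores better. Note that the *structure* never changes:
    the AI isn't changing how it looks at the data, only the numbers."""
    best = list(weights)
    for _ in range(generations):
        candidate = [w + random.uniform(-mutation, mutation) for w in best]
        if score(candidate) > score(best):
            best = candidate
    return best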

                    Moving, again, onto your idea of an out-of-control AI today… how would it parse the data structure of an unfamiliar and unregistered database across multiple software libraries and frameworks? This is code we’re talking about. This is language we’re talking about. It’s not some alien concept. I know you have some coding background because I’ve seen you talk eloquently about that stuff in the past…. so what gives? Where is this argument coming from? Because you and I both know different computing languages don’t just understand each other… Unless you’re saying that the AI is also so intelligent that it is able to decipher an unknown language…. in which case, there’s no limit to what your theoretical AI can manage…. and this whole conversation is moot, because you can imagine the AI doing anything without any repercussion and without any hurdle.

                    With regards to your last point: You want to spend valuable mental cycles pondering what an unknowable AI, with an unknowable hardware structure and an unknowable human infrastructure and interface, will do in 500 years’ time? Be my guest, but it’s pointless. There are many more pressing issues that need addressing which are real and WILL come to pass.

          3. Mistwraithe says:

            Your statement that “Everything an AI “knows” has to be coded and trained” demonstrates the fundamental mistake in your assumptions. We moved on from trying to hand-code AI a long time ago. In fact, it was that change which suddenly allowed AI to start beating humans at things we were typically much better at, such as playing Go or StarCraft.

            We realised we weren’t smart enough to program good AI so almost all serious AI research now is about creating AI which is capable of self learning and self improvement.

            Sure, it’s incredibly limited currently, and the AI is only learning within tiny subject areas. But it is a quantum leap forward from where AI was 15 years ago. I don’t see how anyone could see this shift in AI design and the corresponding jump in results and NOT extrapolate out to the likelihood of general AI similarly exceeding human intelligence within the next 20-50 years.

            1. Duoae says:

              Maybe it’s a bit ambiguous the way I wrote it, but how do you think the AI self-learns? We code it to learn and to focus on the things we want. There is still coding and training in all current AI schemes – even the one I mentioned where they’re trying to make an AI that teaches itself mathematics. You have to check the results against something, and you have to have input restrictions on the variables you’re going to allow it to optimise.

              They don’t exist in a vacuum – AlphaGo was coded with the rules of Go – which guided it towards understanding what a winning condition was and then optimised itself through simulating games against itself and removing potentially undesirable outcomes (like a guided genetic algorithm). If that’s not coding and training, I don’t know what is!

              The program began as a blank slate, knowing only the rules of Go, and played games against itself.
              […]
              After three days of training and 4.9 million training games, […] creating a branching tree diagram that simulates different combinations of play resulting in different board setups. […] It selectively prunes branches by deciding which paths seem most promising. It makes that calculation — of which paths to prune — based on what it has learned in earlier play about the moves and overall board setups that lead to wins.

              You have to have weightings to the variables and win conditions and you have to have a programme that runs as the AI in order to use and generate that data.
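              The pruning step described in that quote can be caricatured in a few lines – the tree and the win-rates here are invented toy data, nothing like the real AlphaGo internals:

```python
def prune(tree, win_rate, keep=2):
    """At every node, keep only the `keep` child moves with the highest
    win-rate learned from earlier self-play, and recurse into them."""
    ranked = sorted(tree, key=lambda move: win_rate.get(move, 0.0),
                    reverse=True)
    return {move: prune(tree[move], win_rate, keep) for move in ranked[:keep]}

# Toy position with three candidate moves and their learned win-rates:
tree = {"a": {}, "b": {}, "c": {}}
win_rate = {"a": 0.9, "b": 0.5, "c": 0.1}
print(prune(tree, win_rate))   # the weak move "c" gets pruned away
```

              The point being: the win-rate table and the rules that generate the tree are all supplied by the humans who built the system.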

              But again, this is not a general AI and is nowhere near one – this is basically a highly optimised (and specialised) calculator which weighs probabilities based on historical datasets. So, no… I don’t extrapolate that this would result in a general intelligence that would exceed human general fuzzy intelligence.

  13. Duoae says:

    RE: Jedi: Fallen Order

    For me, the “STAR WARS” is extraneous – I don’t include it in the title any more than I do with Jedi Knight 2. But yeah, I’ve seen the hashtags on Twitter: #Starwarsjedifallenorder.com.co.uk.whatihadforbreakfastthismorning. It’s a bit of a mess. On a related note, I appreciate you guys trying to pronounce my username – it’s pronounced Duo-a-e (whether you pronounce Duo the American/English or the Italian way).

    Regarding the game itself, I really have enjoyed it. In fact I have two simultaneous playthroughs – one I was streaming and another where I bumble around and collect all the stuff I wouldn’t waste time on for any potential stream-watchers. I’ve played around 18 hours on the streamed game and maybe a couple of hours less on the normal game, because I’m playing that after the streamed content. I can’t disagree with anyone’s gripes, but I think the game is pretty good, and I think it’s a shame if the rumours are true and EA rushed it out before the new film’s release, because there are some things that could have used a little tightening up that I can already see will never get addressed in a patch.

    Although the loading and stuttering issues have received a large proportion of the press for the game, for me they were either entirely absent during real play or a non-issue – I encountered one instance during normal play, a repeatable 3-second pause. Certainly not game-breaking, and they all happen outside of combat.

    I’ll write a proper review for my blog when I finish, but for me the biggest problem is that the boss fights involve strategies and mechanics that are not used anywhere else in the game. So when you reach a boss fight, you’re not utilising the skills you’ve learnt throughout the game; you have to learn new ones. I think this is actually a terrible design decision – sure, it separates the boss fights from the normal play, but for me both boss fights became huge obstacles that I had to effectively “ragequit” because I was streaming and getting mad, like Shamus mentions in his overview of his playtime.

    I think this is still my game of the year because the combat is so fulfilling and the environments are so nice and I (maybe luckily) didn’t have to backtrack too much…. with one caveat. THOSE F*****G ICE SLIDES CAN GO TO HELL! Whoever put them into the design document and whoever coded the player inputs on those things need to evaluate how they feel about other human beings. I hate them SO much. I actually LOST experience during my streamed playthrough because there’s one hell-segment on Zeffo where you have a slide, a rope and two more slides, with jumps between all four. Just to correct Shamus – if you drop your health to zero by falling off a platform, you can die in the game – but not, I’ve read, on easier difficulties (I’m playing on the second-hardest difficulty). Unfortunately, on that particular segment, the game spawned my experience in mid-air, ABOVE where you spawn when you retry the segment. So I couldn’t retrieve it.

    That said, experience is readily gained throughout the game, so it was only a minor annoyance. What’s really frustrating is that some slides respawn you ON THEM when you fall off the edge, while others put you onto flat ground so you can at least retreat and heal before retrying. I wish they all did the second. One nice thing is that I found you can heal while sliding (which doesn’t make a lot of mechanical sense, but I’ll take the assist!).

    Regarding Carmack and AI: I wasn’t really being serious about Judgement Day, that was just a little joke – I have pretty much the same views on AI as Shamus does. Though I do depart from what he says on “harm” based on AI hate and stuff, because we’ve already seen that neural-net training systems inherit our biases. So I can totally see an AI being racist, xenophobic, elitist and/or operating in a morally repugnant manner because it was programmed and trained carelessly (or perhaps narrowly). I think the biggest problem for AI going forward is: how do you make an AI that reflects people who are not socioeconomically equivalent to its creators? How do you catch that unwanted behaviour in time, and how do you correct it without having to start over from scratch – like Microsoft did with their chatbot?

    Personally, I also align with Shamus on Carmack’s move – it’ll be nice for him to focus on something that interests him, but we have super-specialised equipment and disciplines in AI research now. I don’t see how he can make any headway or offer valuable input at this relatively late stage in the game… assuming he doesn’t have some sort of general-AI idea tucked up his sleeve…. but really, where’s he going to run this programme? AFAIK, he doesn’t have access to a supercomputer or even a mid-range cluster (though he could definitely afford one). The tech is not cheap. I would have loved it if he had focussed on game AI. That would have been a cool addition to a field where I feel there’s not really been a lot of advancement over the years compared to graphics and textures… but it doesn’t seem like that’s where he’s headed. It’s also really dependent on which part of the field he’s going into… I mean, there’s a big difference between stuff like Siri, AlphaGo/DeepMind and Watson…. those are all separate types of AI and implementations, and you can’t just switch between them on the fly.

    1. Ninety-Three says:

      I remember something Carmack said a couple years ago about how the problem with working in videogames is that he was six months ahead of the curve. The games he made looked better than anyone else could do, but the industry would catch up very quickly and it felt like the advances he was making were marginal. AI is less ripe for discovery than it was a decade ago, but he can still probably make more lasting impact there than in videogames where he was figuring out how to push 10% more polygons half a year before the other studios did.

      1. Duoae says:

        Maybe, though not necessarily. I’d love to have that confidence in him, but these are completely orthogonal problems with incredibly little overlap. Aside from the hardware I mentioned as a testing platform, he’s going to need access to huge datasets and various other resources. We’ve had people working on AI since the 1950s – very, very smart, dedicated people who thought that AI was achievable “within 5-10 years” – and, just like VR and other things (such as fusion power), those claims kept getting pushed back and back and back. Now we’re kind of there…. but unless he attaches himself to a given project, I can’t see this being any more than a personal time-filling project, like his rocketry was. He didn’t make any impact in that space because he wasn’t an expert and he didn’t really have the financial backing or specialist knowledge to pull it off. It was just a fun way to spend his time and mental energy… the same way writing music, a blog, hosting podcasts and doing YouTube streaming is for me. I’m trained and have expertise in scientific areas and, whilst I can expand to other areas, I’ll never be a reporter or a YouTube personality because I don’t have (or haven’t spent the time gaining) those skillsets.

        He had an edge in VR because he was utilising his background in graphics technology – most of his input was about improving the experience, removing graphical bottlenecks and getting the code supporting the physical devices up to scratch. That’s a very derivative function of his specialisation in game engines…. completely in-line with his knowledge and expertise.

        I literally had this discussion with an HR recruiter on why we didn’t let untrained people into a position instead of a scientist with a BSc or MSc – we could train them, but it would mean spending an entire degree’s worth of time on training instead of doing our jobs (and, quite frankly, discussions about the ability of lecturers to actually train and convey knowledge aside, I think they would do a far better and more thorough job of it!).

    2. Duoae says:

      Oh, and speaking of Shamus’ experience of getting random cool kill shots in Jedi: Fallen Order, I read that it happens when you exhaust the enemy’s posture bar and kill them at the same time. It doesn’t happen often enough for me to verify, but that’s what I read…

  14. ccesarano says:

    I’ve been looking at DaVinci Resolve, but the one thing keeping me on my obsolete Windows Movie Maker 6 from back in the XP days is the “Create Clips” functionality. It’s a variant of scene detection where it takes a single video file – most of which are 1.5-2 hours on average, occasionally as long as 3 or 4 hours if I had a long recording session – and finds things like bright flashes or other sudden changes in the image to say “Okay, this is a new scene”. I can put each video into its own folder within the editor (as opposed to an actual folder on my drive) and sort through the clips, and it’s beautifully organized and makes it easier to scrub through footage, since each thumbnail gives me a general idea of where it is in the video I’m looking at. So if I’m looking specifically for combat footage in level 13, I can check each folder filled with clips and say “Ah, this is level 12 through 15, so let me scan down and, there it is, the eight clips of level 13, lemme scrub through them real easily”.

    DaVinci is the closest I’ve found to doing things this way, which is very, very useful, but its scene detection is so incredibly sensitive and I cannot find any way to reduce that sensitivity. So whereas I can have clips as short as 30 seconds or as long as fifteen minutes in WMM6, they’re almost all .06 seconds long in DaVinci and I have thousands of little clips to sort through in a single video file.
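    For what it’s worth, the core of this kind of scene detection is just “flag a new clip when consecutive frames differ by more than a threshold” – the sensitivity complaint boils down to that threshold (plus a minimum clip length) not being exposed. A toy sketch, with invented parameter names:

```python
def detect_scenes(frame_brightness, threshold=30.0, min_gap=90):
    """Start a new clip wherever average frame brightness jumps by more
    than `threshold`, ignoring cuts closer than `min_gap` frames to the
    previous one. Raising either value makes detection less sensitive."""
    cuts = [0]
    for i in range(1, len(frame_brightness)):
        jump = abs(frame_brightness[i] - frame_brightness[i - 1])
        if jump > threshold and i - cuts[-1] >= min_gap:
            cuts.append(i)
    return cuts

# A dark scene followed by a bright one: a single cut at frame 100.
print(detect_scenes([10.0] * 100 + [200.0] * 100))
```

    A `min_gap` of 90 frames would already rule out those .06-second micro-clips, which is why the lack of a sensitivity setting is so baffling.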

    And so I stick with Windows Movie Maker.

    I know it’s outdated, and no doubt there’s a whole bunch of people who would explain to me why it’d be better if I did X, Y, or Z, but I think really paying attention to how I edit my videos will reveal why the WMM6 method is valuable to me. I swap clips basically every sentence I speak, if not more often, and I feel like this actually helps the pacing of the video while keeping the viewer’s attention on how the video matches the words spoken (as opposed to when I’m watching, say, Noah Caldwell-Gervais, where he’ll have footage rolling so long I’ll stop listening to the words and start thinking about the gameplay as its own thing). Given that I like this sort of fast-paced editing style (despite the extra time it takes to piece together) and I’m always dealing with 10-20 – and in one case 30 – files that are each on average 1.5 to 2 hours long… well, I’m basically now an old man using his old obsolete software, because all the other video editing software is designed with other forms of video editing in mind, or all the other YouTubers are using fancy graphs while gameplay just runs in the background, which requires less time spent sifting through footage.

    Makes me wish someone would basically just try to make an unofficial WMM6 successor that could handle higher resolutions, higher frame-rates and newer video files, and had fewer memory leaks (because oh boy does WMM6 have memory leaks). It’s not perfect, but converting my recorded footage to WMV beforehand and limiting everything to 720p30 is honestly just fine, considering the real time and fun is spent in the editor.

    So, I’m having some mixed feelings regarding Jedi: Fallen Order myself (also, if we read it like the logo this time, it’s Star Wars Jedi: Fallen Order EA). On paper it’s a wonderful mish-mash of genres I like. You have the Metroid-style exploration with Tomb Raider 2013-esque platforming and a combat system that’s faster-paced than Dark Souls but requires precision and careful timing. I discovered last year that I can enjoy Soulsborne-style games, I’m just unlikely to enjoy From’s, since I’ve failed to get into Demon’s Souls, Dark Souls, or Sekiro (there’s always some form of jank or issue that just pushes me away from those games, though the aggressive nature of Bloodborne has me curious to try that still).

    Last year, I played Darksiders 3 and my initial response was “Oh no, oh no, it’s actually bad, they made it like a Soulsborne and now it’s bad…” but as I adjusted to the combat, it became one of my favorite games of last year, warts and all. One of the reasons? Save locations were pretty much always right outside a boss chamber (I think there were a couple of instances where you needed to make a slight trek, but otherwise pretty much every boss respawned you right outside).

    So that’s one way in which Darksiders 3 was superior, but it also made the upgrades far, far more rewarding for world exploration, and it felt like your upgrades made earlier combat scenarios far, far easier to tackle. There are elements of that in Fallen Order while playing on Jedi Master difficulty, but I feel like there are far too many zones that are less fun to backtrack through, because you keep having to fight tough mooks at certain points, or go through the linear platforming, etc. I’ve finished my second trip through Kashyyyk and, considering there’s one chest with a cosmetic that I somehow missed way back around the linear progression of mudslides, I just… don’t want to go back and finish exploring.

    Jedi: Fallen Order has made exploration less desirable and fun in a game with Metroid-like elements. That’s… a problem.

    I’m enjoying it, but it’s another big release that’s just… fine, and it’s a bit of a disappointment after Respawn’s excellent Titanfall 2 campaign. I’d still probably replay it again in the future, but I dunno if I’d do so on Jedi Master again. It’s just not rewarding enough.

    1. Duoae says:

      I’m pretty much aligned with your thoughts on the level design – too many one-way sections that would never be designed that way in a Souls game or in a metroidvania style game (in those, the way back would usually be designed with the “item/upgrade” in mind).

      I’m enjoying it, but it’s another big release that’s just… fine, and it’s a bit of a disappointment after Respawn’s excellent Titanfall 2 campaign. I’d still probably replay it again in the future, but I dunno if I’d do so on Jedi Master again. It’s just not rewarding enough.

      It’s funny, I might be one of the only people who never really thought the Titanfall 2 campaign was good. What’s interesting to me is that, despite Jedi being barebones, its writing is MUCH improved over T2, but the linear level design is quite similar. Okay, we’re missing the flashy set pieces they had in T2 (except for that opening level), but the rest of that game was not that imaginative or creative in the way it had you progress. It was all linear, non-returnable corridors leading to arenas where you fought – very similar to a large proportion of the design in Jedi.

      I think the thing that frustrates me in Jedi is that the levels are not convoluted enough! What I mean by that is that in Souls games you only come to realise how everywhere is linked as you progress through the game and unlock shortcuts (plus you have the fast-travel systems as well). Jedi mostly just has closed loops (with, I think, the exception of the Zeffo mines hub) and no fast-travel system. Missing these two core aspects makes traversal more of a chore.

      1. ccesarano says:

        I don’t mind a linear shooter like Titanfall 2 was, and in fact enjoyed having a really good one for a change. I played Battlefield 1 and Infinite Warfare that year as well, and while Infinite Warfare was surprisingly enjoyable, it certainly felt pretty bog-standard. I think Titanfall 2’s set pieces were part of what made the game stand out. Some levels were more combat heavy, others more platform heavy, and no one concept stuck around longer than it was welcome.

        I’d say Titanfall 2 had good writing but a simple story; it wasn’t really about the narrative so much as the connection between a Boy and his Bot. It’s a lot better than it first seems when you, again, compare it to Infinite Warfare and how comparatively blah “Ethan” is – and how madly try-hard that game is in terms of “Please like our robot character! Love him, please!”.

        I’d say the story to Jedi: Fallen Order is more interesting than Titanfall 2, but there’s just not enough to the characters to make me really enjoy them. Greez probably has the most life and soul to him, with… Ceren? being kind of “meh” and the protagonist being equally “meh”. There’s some interesting plot elements, but it’s kind of hard to care when there’s very little about the characters to care about. It’s very much full of “optional side dialogue where characters express feelings reflecting on events in the story since we didn’t have time to properly convey it in the cut-scenes themselves”. One of those games where they’re trying to do a lot but don’t know how to arrange the pieces.

        The funny thing about the level design, as you mention, is that those slides and things actually made me flash back to Tomb Raider 2013 – and even Rise of the Tomb Raider – which has those sections but still makes exploration more manageable by implementing fast travel. Similarly, this game shares Metroid Prime 3’s structure of flying from planet to planet, only Corruption’s planets had multiple locations you could land your ship at. Two ways in which this game could have been improved and were figured out 6-12 years ago. That’s… kind of bad.

  15. John says:

    The problem with Stadia–well, one of them–is that it’s basically an alpha or early-access release. Many of the features that Google has been promising aren’t actually available yet. In theory, Stadia will eventually be a free service that streams games at 1080p and lets you use whatever hardware you like to play them. In practice, the only version of Stadia currently available is Stadia Pro, the 4K subscription service, and even that doesn’t work completely as advertised. As I scrutinize the fine print on Google’s Stadia web page, I see that: (1) you need a Chromecast Ultra and a Stadia controller to use Stadia with your TV–and it has to be a new Chromecast Ultra, since the last I heard is that older ones won’t be Stadia-compatible until Google gets around to pushing a firmware update–(2) the Stadia controller only works wirelessly with a Chromecast Ultra and requires a wired USB connection for anything else, and (3) only a limited number of tablets and phones are supported at launch. You can use whatever controller you like with your laptop or PC (although not all controllers are supported in wireless mode), but that’s scant consolation because the only way to get Stadia Pro at the moment is to buy the $129 “Premiere Edition” bundle, which consists of a Chromecast Ultra, a controller, and three months of Stadia Pro. Then there’s the tiny, tiny library of mostly older games, none of which are discounted. Stadia Pro is supposed to come with regular free games, but the only free game it offers at the moment is Destiny 2, which is already free to play on PC, and there’s no word on what the others might be or when they might arrive. There’s exactly one Stadia exclusive, Gylt, and word is that it’s not very good.

    So, no, Stadia doesn’t seem worth it at the moment. Even the most favorable Stadia review I’ve seen basically damned the service with faint praise and recommended against buying it now. Stadia Pro doesn’t seem like it will ever be worth it, at least not for anyone who’s really serious about gaming in 4K. If Google eventually delivers on all of its promises–a big if–and charges reasonable prices for games, I could maybe see the case for the subscription-less 1080p Stadia Base service. But as it stands, I think the market for Stadia consists of people with a lot of disposable income, early adopter types, and people who don’t know any better.

    1. Geebs says:

      The problem with Stadia is that it’s expensive. I costed out buying the Stadia launch library versus picking up the various games second-hand along with an Xbox One X, and the console worked out £360 cheaper – and that left out the potential cost of upgrading internet service and home router, the resale value of physical copies, and Game Pass.

      Not to mention, the graphics are way better on the console than this alleged “gaming PC”.

      Stadia is terrible value and weakens consumer rights. It’s horrible.

      1. John says:

        Stadia Pro–the service that’s available right now–is absolutely a joke, to say nothing of a terrible value proposition. Stadia Base, the forthcoming free version, might hypothetically be okay. Stadia will never be as good as non-streamed gaming on a decent local machine, but it could eventually be good for people with decent internet and otherwise low-end hardware. For example, I imagine that certain parents–not me, but I don’t pretend that everyone shares my preferences–might decide to buy games for their children to play with Stadia Base on the family PC rather than shell out for a new console. Again, this is hypothetical. It relies on Google getting games for Stadia at or around the same time as those games release elsewhere, or else learning to price older games appropriately.

        As for consumer rights, I regard Stadia as being only slightly worse than Steam. If Steam died tomorrow, I’d be screwed out of any games I didn’t already have installed on my machine. My favorites, the games I keep installed more or less permanently, would be safe, but most of my library would vanish. That’s bad, obviously. For this reason and for various others, I don’t actually like Steam very much. But in the right circumstances and for the right price I’m willing to put up with it. I can’t see myself buying games on Stadia at any time in the foreseeable future but I’m not willing to swear a solemn and binding oath that I’ll never do it.

        1. tmtvl says:

          Well, you can back up your Steam games. And it’s the same with back-ups: if your external HDD or NAS or whatever gets borked, you lose everything that you don’t have backed up elsewhere.
          That’s the nature of mortality; nothing lasts forever.

          1. John says:

            I wish I had the hard drive space for that. GOG is still my preferred digital games store; they let me download and back up installers rather than whole games. That’s much more feasible.

        2. Geebs says:

          I don’t pretend that everyone shares my preferences–might decide to buy games for their children to play with Stadia Base on the family PC rather than shell out for a new console

          If you take out the cost of the Founder’s Edition and the monthly sub, Stadia Base plus the launch library, minus the free game that comes with the paid subscription, is about 150 pounds more expensive than a second hand One X. 250 if you were going for the closer equivalent of a One S. It just doesn’t make any economic sense.

          1. John says:

            Stadia Base is free. The games cost money, but the service does not. If the choice is between buying a game from Stadia to play on a PC you already own and buying a game on disc plus the console to play it on, Stadia starts to look a lot better. I’m not saying that signing up for Stadia Base–which you can’t even do yet–is better value for your money than buying a console right now. I’m saying that if Google gets a better library and learns to price games appropriately, then someday it could be that way for some people.

            Also, I don’t understand why you’re so focused on the idea of buying the entire launch library. Why would anyone do that instead of buying just the games they actually want?

            1. Geebs says:

              One reason for my focus on the launch library is Google’s decision to charge full price for games which are heavily discounted on every other platform. This suggests that the way they’re going to pay for the “free” service is to continue to charge full price for every title, regardless of the price on other platforms.

              The other reason is that twenty games is a reasonable “casual” library for the average lifetime of a console, and twenty games is all Stadia’s got.

              Bear in mind, this cost analysis is all weighted in favour of Stadia. The real picture will likely be worse (e.g. cost of buying a new router, other cheaper streaming subscriptions like Xcloud, etc.)
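              For what it’s worth, the arithmetic behind that kind of comparison is easy to sketch. All figures below are illustrative assumptions, not actual 2019 prices and not Google’s real pricing:

```python
# Back-of-the-envelope comparison: buying a launch library at full price
# on Stadia vs. the same games discounted elsewhere plus a used console.
# Every number here is an assumption chosen purely for illustration.
FULL_PRICE = 50        # assumed full price per game, in pounds
DISCOUNTED_PRICE = 25  # assumed typical discounted price on other platforms
LIBRARY_SIZE = 20      # roughly the size of Stadia's launch library
USED_CONSOLE = 150     # assumed price of a second-hand console

stadia_total = LIBRARY_SIZE * FULL_PRICE
console_total = USED_CONSOLE + LIBRARY_SIZE * DISCOUNTED_PRICE
print(stadia_total, console_total, stadia_total - console_total)
```

              Under those made-up numbers the gap is several hundred pounds in the console’s favour, which is the shape of the argument above even if the exact figures differ.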

    2. Lanthanide says:

      Once Stadia becomes a subscription service where you pay $10/month and can play any and all games on the service that you want for as long as you want, it will have a decent value proposition. If you’re forced to buy copies of games – which is the current situation – it’s a non-starter and isn’t going to get any real uptake in the market.

      1. John says:

        You might be right. I still think that buying games could work if the games were priced appropriately. I don’t burn through games terribly quickly, so I’d rather do that than pay a monthly subscription fee–and I’d rather run games locally than stream them–but Xbox Game Pass seems successful enough.

      2. Thomas says:

        I’m dubious it will ever reach that. Streaming services are all raising their prices at the moment, and the cost of streaming games must be much much higher than films.

        And how much did people spend on films in a year? I reckon less than $100, which is not even the price of 2 games.

        Finally, what’s the advantage of streaming over downloading and playing? A relatively small hardware cost. Xbox Game Pass already allows you to subscribe to get a bunch of games which you can download.

        1. Lanthanide says:

          Finally, what’s the advantage of streaming over downloading and playing? A relatively small hardware cost.

          It’s still a hardware cost, which for many games is going to be about $150 on the GPU in addition to the rest of your system. You can play on any screen you have. You can transfer between your desktop, your TV, your tablet. When you’re travelling across the country, you can play your game on your tablet or on the TV at the hotel you’re staying at, and pick up right where you left off.

          Then there’s the various social things they advertised initially, where you can get a buddy to join you in the game, or share snippets of your gameplay directly on YouTube, etc.

  16. Alberek says:

    I played a little of Fallen Order the other day… I don’t like the “retrieve souls” mechanic in this game. The maps seem mostly linear, sometimes separated by jumping puzzles… those aren’t very interesting the tenth time around.
    Right from the start I set difficulty to Jedi Master… in this game difficulty seems to affect the window of opportunity for blocks/parries (that’s cool), enemy aggression (otherwise they just keep staring at you like a sheep) and enemy damage (sometimes that involves getting an OHKO from a goat)… but it still takes a lot of strikes to kill enemies… which feels really weird when you hit an animal with a lightsaber (I guess I never messed with the lightsaber construction mechanics)
    Another thing I find annoying is how the force powers work… outside of combat you have unlimited whatever… but once the combat starts you need to recharge them by hitting your enemies I guess.

    1. Duoae says:

      Unfortunately, lightsaber construction is just a cosmetic and does not have any impact on the mechanics. I think that’s a missed opportunity since one of the things in the original movies is that each Jedi constructs their own lightsaber as part of the initiation to becoming a Jedi and each lightsaber seems to be a little different (not just in colour).

      Another thing I find annoying is how the force powers work… outside of combat you have unlimited whatever… but once the combat starts you need to recharge them by hitting your enemies I guess.

      This annoys me to no end! It’s really inconceivable to me that force points don’t regenerate by themselves both in and out of combat… One of the best upgrades is the one that allows the health stims to also fully regenerate the force points. Otherwise 5 or 6 force uses is not very much in the larger combats… Also, I found it a terrible design decision to make it so that the expanded “force attacks” (triangle on PS4) take extra force points to use as well. It’s like you’re punished for using them. “I just purchased the ability to do the sequential force attack overhead slash. Why do I also have to use two force points to activate it?”

      In actual fact, this has led to me basically never using the “force attack”. I also got trained to never block, only dodge and parry.

  17. I think I might have been the one to recommend Resolve. If it was me, I’m glad I was able to help!

    I used to use Blender for video editing, but switched to Resolve because of the export time speedup. Speed + node compositing + a complete DAW that supports VSTs + free makes it completely dominate Premiere in my book. Also, unlike Premiere, it’s actually being fixed constantly.

  18. Steve C says:

    A great video about COPPA:
    https://www.youtube.com/watch?v=LuScIN4emyo

    Here is a great starter video to help think about AI:
    https://youtu.be/tcdVC4e6EV4

    BTW @Jennifer Snow, one thing to consider is that the rules designed to protect children 13 and under apply to larger groups outside the USA. For example, Canada has similar rules for companies, but they apply regardless of the age of the user. It can be helpful to think about how and why this kind of data collection is bad for society as a whole rather than just for kids. It also should make it easier to find resources to help explain the issues.

  19. Lino says:

    Also, the new Half-Life is going to be a prequel, which means that Gabe Newell still can’t count to 3.

  20. GoStu says:

    The hypothetical developer of Stadia reminds me of someone I met once. She looked straight-faced at me and said, “Sometimes I forget that there’s places outside of Toronto.” For the record, she and I were not in Toronto at the time. We were in a rural part of New Brunswick that was in no way similar to any part of Toronto.

    The Stadia development team must have a similar mentality: sometimes they forget there are places without excellent internet. They haven’t left a Google campus in ages, where the company-owned fiber line is fast and the wifi is speedy and ubiquitous.

    God only knows how they’re justifying those prices for that game library, though. Do they really expect us to get all fired up for “what this library might eventually be”?

  21. Dreadjaws says:

    17:07 – Oh, boy. I recently tried playing “Into the Breach”, by the creators of FTL. I didn’t care for FTL, but this game has a beautiful art style and a “Tactics” turn-based gameplay style that I enjoy very much, so I tried it. But after a couple of hours I had to uninstall it because I realized it has the exact same issue that prevented me from enjoying FTL: it’s too random.

    There’s nothing wrong with randomness-based games. I enjoy the occasional rogue-like here and there, and part of the fun of games like Tetris or Solitaire is precisely that you don’t know what pieces or cards you’re gonna get, but just like FTL, ITB seems to leave almost everything to randomness and refuses to allow the player to make use of skill to win. The layout of maps, the rewards you get and the amounts and types of enemies depend exclusively on luck. Literally in your first stage after the tutorial you can find yourself with enemies that can’t be killed yet but still have the ability to attack you.

    I’ll probably hear the same responses I heard when complaining about FTL: “No, see, you have to play a lot to unlock more abilities and stuff”. Well, screw that. I’m not going to spend hours doing something that irritates me just so I can get to a point where I might enjoy the game. I uninstalled the darn thing and added the developer to the “Never again” list. If people enjoy this kind of game, good for them, but I prefer games that allow me to feel I’m getting better and not that my skill level is entirely irrelevant next to the randomness.

    1. Retsam says:

      Did we play the same game? It’s a fairly unforgiving game, but I can hardly think of a less random tactical game.

      There’s no hit percentages and enemies broadcast their moves ahead of time. You pick which missions to do, and the rewards for each mission are visible from the mission selection screen, as well as the necessary objectives and a warning about difficult missions. (You can even look at the map ahead of time)

      Meanwhile there’s a ton of finesse to the game. To succeed you frequently need to find ways to accomplish multiple goals in a single move: damaging enemies and pushing them into specific positions, or moving to attack an enemy while blocking an attack on a building, to say nothing of all the environmental stuff and status conditions.

      Like I said, it’s fairly unforgiving and there may be situations in which you can’t save everything (though I’ve had many a turn where I thought it wasn’t possible to protect everything, only to stumble on a solution after staring at it for a bit longer), but there is enough wiggle room that you don’t have to protect everything to still win the campaign.

      You don’t have to play more and unlock more stuff. I’ve played with a bunch of the different unlockable mech teams, and they’re interesting, but none of them seem meaningfully better than the starter set. There’s no meaningful mechanical progression to the game; you’re just as able to beat the game on your second try as on your two-hundredth, aside from your own improved skill.

      1. Dreadjaws says:

        Yeah, yeah, yeah, I heard the same defenses for FTL. “Did we play the same game?”, “You’re doing it wrong”, etc. Bottom-line is: RNG was favorable to you and it wasn’t favorable to me. Maybe it’s my PC, I don’t know. All I know is that I found the game frustrating and not fun, so I stopped playing, and, again, it’s unrelated to skill. I’ve played much more complex tactical games before and never had an issue, even when there was randomness involved.

        1. Retsam says:

          See, I agree with you on FTL being random: a lot of the game came down to opaque events where you could pick the same choice and would randomly get different results, and it was a lot more dependent on getting specific items, or finding crew members, and you could really get screwed by the RNG (oh, your crewmate just died in a random event).
          But ITB doesn’t have anything like that.

          I don’t know what your issue is – honestly, it expects a very non-standard approach to tactics (killing all the enemies is usually the only goal in other tactics games, but rarely the best choice in ITB), so maybe that prior experience is working against you rather than helping – but I don’t think it’s the randomness.

          Saying I just got lucky and you just got unlucky, and maybe it’s your PC, just sounds like a bit of a cop-out, when it’s probably a matter of taste or talent.

          1. Geebs says:

            The enemy spawns in ITB are random, though. I know you’re supposed to control spawns as well, but sometimes you just get screwed.

            The larger problem for longevity in ITB is that, as in FTL, the ending does a multi-phase boss battle that just keeps throwing new stuff at you whenever you think you’ve finally won. It’s exhausting and, the very next time I started a playthrough, I made a couple of moves, thought “why bother?”, quit and never restarted.

            Some sort of global progression system would help with this, I think. The persistent pilot mechanic isn’t interesting once you get one fully levelled. They could make building damage upgrades, which are currently totally pointless, persist between runs. This would probably upset all of the roguelike purists no end, I suppose.

            1. Ninety-Three says:

              Some sort of global progression system would help with this, I think. The persistent pilot mechanic isn’t interesting once you get one fully levelled.

              There is a global progression system. You earn points that unlock new mechs to start with. Also, pilots get random bonuses when they level (some mediocre, some great, like +1 energy) and have different passives, so you can shop around for different pilots to persist.

              building damage upgrades, which are currently totally pointless

              Speaking as a player who has earned every achievement in the game: they’re not.

              The larger problem for longevity in ITB is that, as in FTL, the ending does a multi-phase boss battle that just keeps throwing new stuff at you whenever you think you’ve finally won.

              Wait, that’s an issue for longevity? FTL’s boss had a problem where the first time you reached it, there was a decent chance it would tell you “Surprise! You have to beat an enemy with stats ABC and unique mechanic D, your firebombs are useless, die and try again.” This made for an unfriendly new player experience in a permadeath game, but a complex multi-stage boss with unique mechanics is generally the sort of thing that adds replay value. If FTL’s boss was just another random ship but with higher stats, I’d probably have played it a lot less than I did.

              Furthermore, Breach’s final bossfight avoids the FTL pitfall and doesn’t have any “anti-personnel lasers do nothing” moments. All the abilities still work, the final boss doesn’t get magical auto-repair. The fight is hard, but I don’t get what you’re doing by describing “I scrubbed out of my first run against the boss and quit, demoralized” as a longevity problem caused by multi-phase battles.

              1. Geebs says:

                No, I beat the boss on the second try and it was so much of a chore I never wanted to play the game again.

                The new teams are fairly arbitrary “challenge” unlocks and I didn’t find the first few interesting enough to jump through hoops for them.

                My issue with FTL’s boss is that it’s perfectly possible to get through the game just fine and then be completely annihilated because you didn’t get the right selection of drops. A run you know can only end in disaster from the midpoint feels like wasted time.

            2. Olivier FAURE says:

              The larger problem for longevity in ITB is that, as in FTL, the ending does a multi-phase boss battle that just keeps throwing new stuff at you whenever you think you’ve finally won.

              The ITB final island isn’t that bad.

              It’s just two boss missions back to back, with your health and abilities recharging in-between.

              It throws a lot of environmental hazards at you, but they’re just variations on “don’t stand here or you’ll die”, which the game has already introduced. (and which can help you kill tough enemies)

              The only new thing it introduces is the “damage your mechs every turn” psion, which is admittedly a bit of a pain in the ass.

        2. Ninety-Three says:

          Bottom-line is: RNG was favorable to you and it wasn’t favorable to me.

          It’s not about RNG. Skilled players will rack up a streak of dozens of consecutive wins, playing on hard, deliberately choosing ships worse than the default starter. Heck, my win-streak averages double digits on Hard. There is a very large skill component, and you blaming your losses on RNG makes it seem like you don’t recognize that.

          the rewards you get and the amounts and types of enemies depend exclusively on luck

          Nope. Enemy spawn count is determined entirely nonrandomly by how many enemies are currently on the map. Rewards depend on your performance in the level, and that in turn depends on skill more than luck, as proven by the fact that there are players who win consistently over long periods of time.
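          If the rule works the way I’m describing, spawn count is a pure function of board state. A toy sketch (the cap and formula here are my own invention for illustration, not the game’s actual code):

```python
def spawns_next_turn(enemies_on_map: int, cap: int = 5) -> int:
    """Toy version of a deterministic spawn rule: the game tops the
    board back up toward a fixed cap, so the number of new spawns
    depends only on the current enemy count, never on a dice roll."""
    return max(0, cap - enemies_on_map)

# With 3 enemies alive and a cap of 5, exactly 2 spawn; same input,
# same output, every time.
```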

          1. GoStu says:

            I’ll say this about FTL: it’s not random after you know the potential outcomes of the various events and have learned what risks you’re taking for what kind of reward. Once you’ve mastered it and memorized most of the outcomes, then the game is a series of calculated gambles. I can only think of a single event that can screw you no matter what you do. Most of the rest are a question of knowing what risk you’re taking and for what potential gain.

            Before that point, though, it seems very random. For example, the infamous Giant Alien Spiders event: you’re choosing between a 50/50 gamble of losing a crew member against a “high” reward, or nothing happening. To the novice player encountering this for the first time, there’s no real way to understand the risks you’re taking or the potential payoff.

            Even if you can intuit that maybe you’re taking a risk to your crew for potential payoff, no newcomer is going to know about the few ways to “safely” resolve this event (blue options), nor the meta-strategy of understanding that your ship HAS one of these beforehand and should consider checking out a Distress signal in specific kinds of sectors based on the chance that it’s this event you can auto-win.

    2. Philadelphus says:

      See, I loved FTL (>350 hours on Steam), but haven’t been able to get into Into The Breach that much, and I think it has to do with how it’s so much less random than FTL. In FTL you can get really lucky and pick up an amazing weapon for free right at the start of a run (literally—there’s a random outcome that’s literally just you finding a weapon floating in space), there are all sorts of lucky events you can get, you can avoid interacting with (some) bad ones, and you can manipulate the enemies’ luck to a certain extent (like by boosting your engine so they have less chance of hitting you). Much like, say, XCOM, FTL is all about managing probabilities to boost your chance of a favorable outcome while reducing your chance of an unfavorable one, and it absolutely has a skill component to it. It requires a deep knowledge of the many interconnecting systems (but then, what game doesn’t?), and rewards it with an ever-shifting probability matrix which you’re constantly tweaking and manipulating on the fly to deliver that constant dopamine hit of pulling things off.

      Into The Breach, in contrast, is like playing telegraphed chess, and I find that just appeals to me less as I get older. Nothing against it as a game, I’ve put maybe a dozen hours into it, and plan to go back to it at some point, but I just prefer that thrill of uncertainty from FTL.

      1. GoStu says:

        Pretty much the same opinion I hold for both games. I just found Into the Breach less fun. The highs of FTL are higher even if the lows can be lower (and for all but the worst ships, I can handle a run of bad luck).

    3. Olivier FAURE says:

      I think the Subset Games devs mentioned once that Into The Breach had a very polarizing gameplay: people either “got” it immediately, or they never did. It doesn’t seem like a gap that can be bridged by rational arguments, any more than the blue-dress-gold-dress debate.

      Speaking as one of the people for whom the gameplay clicked right away, I rarely felt like I lost because of a bad RNG. In fact, in normal mode, the spawn rate made it so that there was almost always a way to nudge the enemies in a position where none of them would be able to damage your mechs or buildings, if you looked hard enough.

      My best memories of this game are of times I spent up to an hour considering permutations of a single turn, until I found the one set of moves that would be guaranteed to take me out of the bad situation I was in.
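      That kind of exhaustive turn-planning can even be mimicked by brute force: with three mechs and a handful of candidate actions each, the whole space of plans is small enough to enumerate. A toy sketch (the action model and scoring function are invented for illustration, not how the game works internally):

```python
from itertools import permutations, product

def best_turn(mech_actions, score):
    """Enumerate every ordering of mechs and every combination of their
    candidate actions, returning the highest-scoring plan."""
    best, best_plan = float("-inf"), None
    for order in permutations(range(len(mech_actions))):
        for choice in product(*(mech_actions[i] for i in order)):
            s = score(order, choice)
            if s > best:
                best, best_plan = s, (order, choice)
    return best, best_plan

# Hypothetical example: 3 mechs with 2 candidate actions each gives
# 3! * 2^3 = 48 plans, trivially searchable.
actions = [["push", "shoot"]] * 3
best, plan = best_turn(actions, lambda order, choice: choice.count("push"))
```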

  22. Dreadjaws says:

    I seriously doubt Steam considers Epic a threat. The store isn’t even a year old yet, it has been a major PR disaster ever since release, and every move they’ve made highlights how little faith they have in their store (forcing gamers onto their store through exclusives is an obvious one, but the constant giving of free games and the occasional $10 coupon codes reeks of desperation rather than good faith).

    I’m sure Valve has been planning this game for even longer than a year. VR has only been around for a few years. Maybe they were expecting someone else to try to go for a VR “killer app” and since no one seemed to be trying they decided to go for it. Or maybe it was their plan all along but they got delayed for one reason or another. But I don’t believe for a second Epic is to blame. Taking a look at Steam’s history of updates and upgrades ever since I joined, there’s nothing to indicate that they’re scared of anything.

    Whatever the reason, this is probably a good thing. Despite the Wii being a fantastic idea, what killed it was precisely developers’ lack of care. Nintendo itself only made a handful of proper games for the console and then outright ignored the versatility of the motion controls, and most other developers didn’t even care in the first place. If VR were to go the same way, it would die even faster, considering its steep entry price (which, remember, requires not just the VR hardware but also a PC powerful enough to run it). So, if Valve can kickstart proper VR game development, then the tech has a future. Otherwise, it’s going to be abandoned again for a long time.

  23. Simplex says:

    Right now a decent headset, the Samsung Odyssey+, is available for $230. It includes motion controllers and will be compatible with Half-Life: Alyx:
    https://9to5toys.com/2019/11/21/samsung-hmd-odyssey-headset-deal/

    As for AAA games, there were a few that could be considered that: Lone Echo, Asgard’s Wrath, Stormland. Skyrim, Fallout 4, and Hellblade also got VR ports.

  24. Ciennas says:

    I’m surprised COPPA isn’t a huge discussion chain in here somewhere by now.

    I’m so mad at Google for everything about this situation, including the part where they seem to be quite willing to hurl their content creators into the lions’ den.

    It makes me feel like the people who govern how popular culture unfolds are trying to discourage any system where people are completely out of their control.

    And I’m so tired of watching this kind of story play out time and again.

  25. TLN says:

    This is an interesting twitter thread about what might be the thinking behind the Stadia and why it was made:
    https://twitter.com/mcclure111/status/1196557401710837762

    This specifically:
    “Stadia is not a product that exists because people want it. I’m not sure why it exists. But it seems to exist because it *could*. Google knew how to make it, & it would be a good thing for Google if people wanted it, so they just *made* it & assumed the reasons why would follow.”

    1. Lanthanide says:

      Considering that Sony, Microsoft, Amazon, and Apple are all working on their own game streaming systems (Stadia just launched first), it’s obvious that streaming is the way of the future.

      It seems this initial release of Stadia is just a bit half-baked, though.

      1. TLN says:

        That’s kind of the same thinking again, though: it’s the way of the future because all the big companies are putting money into it, not because people are particularly asking for it or are very interested at all. The most interesting reviews of Stadia have been the ones that just say “Yeah, it works for me, but so what? I still wouldn’t recommend it to anyone, really.”

        The technical issues aren’t the big problems, given enough money and time they’ll probably figure that out. The big problem is that the audience isn’t really there yet, and I’m not sure that it’ll ever be (at least not in the way Google seems to imagine it). The conundrum of who this product is actually for (as detailed by Shamus above) should be a very real concern for Google.

        1. Lanthanide says:

          The big problem is that the audience isn’t really there yet

          That’s not a problem – it’s an opportunity to be a first mover and hoover up the market.

          The conundrum of who this product is actually for (as detailed by Shamus above) should be a very real concern for Google.

          Shamus is correct that the current business model doesn’t make a lot of sense. Once you can pay $10/month to play any games on the platform whatsoever on whatever screen you want and only have to pay ~$200 for the hardware and controllers initially, the model makes sense.

          1. TLN says:

            But it’s not an opportunity if this market does not actually exist. If you’re saying “pay $200 initially and then $10/month to play every game on every platform with no downsides” then yes, obviously that would be a good deal (well, as long as you ignore the whole “not actually owning your games anymore” thing, which is a whole different problem). That’s not at all what Stadia is, though, and it’s not clear that’s what any of the competitors are going to be either. It seems like Google is just trying to will the market for Stadia into creation.

            Maybe it’ll be really good in 5 years, who knows (I’m not convinced by anything I’ve seen so far), but right now what is mine or anyone else’s incentive to go pay for a service where I:
            * No longer own my games (making it a very real possibility that I’ll be unable to go back and replay any of these games years from now when Google shuts Stadia down like they do with every service they feel has outlived its usefulness)
            * Get worse graphics than on consoles (not to mention a gaming PC)
            * Get input lag on a scale between “noticeable” and “aggravating”
            * Get no exclusives that anyone seems to care about.

            What is this market that they’re trying to hoover up that is 100% down with all the points above?

            1. tmtvl says:

              Casuals. Those darn casuls that ruin everything. And because it’s the internet I hasten to add that I’m only joking (or am I? Dun dun dun).

  26. Moss says:

    Come on, Paul. Implementing a hard mode costs dev time and money. If company X were to release a really shitty hard mode, their review score would go down. It’s very few actual lines of code, yes, but you know that’s not how code is measured.

    1. Paul Spooner says:

      Ture. Buy then again, wuality isny always the mosr imporyant.
