Crunch is Proof that Video Game Executives Don’t Understand Video Games

By Shamus Posted Wednesday May 29, 2019

Filed under: Column

My column this week will be a bit of a re-run for those of you who have been around long enough to remember the last time I brought up crunch mode in video game development. On one hand, it feels silly to cover the same topic again and again. On the other hand, it’s even more silly that this is still a problem after all these years. The news doesn’t stop covering a natural disaster just because it’s day 3 and everyone knows about it. They cover it until the destruction ends and the mess is cleaned up[1].

Some people take the hardline stance that “crunch should never happen”, and I just can’t get behind that idea. It’s built on the premise that since all mistakes are avoidable, you can avoid all mistakes. This is clearly not the case. As an engineer, I’m much more comfortable with designing systems with a bit of fault tolerance than designing systems that are perfect as long as nothing ever goes wrong.

This is a world of finite resources, bedeviled by selfish jackasses, subject to entropy, and filled with unpredictability. Jerks will cause problems, misfortune will strike, people will make mistakes, and equipment will fail. You can insist that all game budgets should be mapped out perfectly and then sufficiently padded, but this ignores the way that creative projects will expand to consume available resources. If feature creep is a problem when schedules are tight, then how much worse will it be when everyone has lots of “extra” time?

Saying teams should never crunch is like saying that airbags are a waste of money because accidents shouldn’t happen in the first place. The pursuit of an unattainable ideal will prevent you from building a robust system.

So how would I handle this?

Since You Asked…


If I were the president of videogames, I’d put a hard cap on crunch of 6 weeks. Sure, that number is fairly arbitrary, but I think that’s right around the point where people start to burn out and you’re doing more harm than good. If a game needs more than 6 weeks of crunch to hit its ship date, then you can’t save the ship date with crunch because the team will burn out before they get there.

Someone has just entered my lavish executive office at Ginormous Videogame Funtime Superconglomerate Inc. and interrupted my nap with news that Shoot Guy 7[2] is behind schedule. For now let’s assume that the delay isn’t caused by misfortune. The team didn’t lose any equipment, data, or key leaders. The publisher (me) didn’t throw them any last-minute curve-balls like “Oh, we also want this game to run on the Wii U and support multiplayer and also have mobile integration, and no I can’t give you more money for that.” They weren’t forced to make any last-minute changes to the design to adapt to changes in the market[3]. I didn’t cut their budget or steal a chunk of their team for a pet project. We’re going to assume that this game had a more or less normal development cycle but it’s behind schedule anyway.

Are we just hearing about this delay at the last minute? Hopefully not. If a AAA game is behind schedule, then you should know about it many months ahead of time. At that point I’d have to make a call to either move the ship date and allocate more money, or leave everything the same and tell the project manager to cut features and content until the scope fits the budget. The latter isn’t nice, but if this is a cookie-cutter game in a crowded genre, then throwing good money after bad is not a good idea. I have other studios to worry about.

Once the game has shipped, the important thing is to identify what went wrong with the process. This is fundamentally a management problem, and that’s where you need to start looking for clues.

Why was this game late? Was the project manager way too optimistic with their scheduling? Were they allowing too much feature creep? Was the PM bad at keeping the team on-task? Did they keep changing the design?

Assuming this isn’t my fault, then we have a PM who has messed up. Maybe this is a good PM who made a few mistakes, and if I give them another chance they could mature into a talented leader. Or maybe this is someone who was really good at game design / programming and pushed for a promotion they couldn’t handle. Maybe they need to step down to their previous job and I should give the leadership position to someone else. Changing leaders is disruptive to a team, but leaving an incompetent leader in place is actively destructive.

Right here, at this moment, is where someone in my position needs an intimate understanding of the industry in general and software development in particular. This is a hard call to make. It requires both technical knowledge and people skills to figure out what needs to be done. Maybe the problem isn’t the leader, but the process the team is using to schedule the job and hit milestones. I need to either replace the project manager, or replace the process. To do the latter, I’d have to use the decades of management experience I ought to have and the lessons I’ve learned from our other more successful studios. “Oh, I see your problem. You didn’t know how to budget for all of those character models because you’d never modeled and animated centaurs before. Over at GopherGames, those folks create a few test assets in pre-production, and use that experience to inform their estimates rather than just taking a wild-ass guess.”

That’s how I’d handle it, anyway. I wouldn’t avoid prolonged crunch just because I’m a nice guy, I’d avoid it because it’s ultimately destructive and won’t give me what I want, which is a profitable game released on time and a healthy studio ready to make another one.

Why Not Just Pay People?

It ends with you, too.

Rather than adopt an inflexible zero-tolerance policy towards crunch, why not just reward people for their sacrifice? In a 2-year project, paying people double their standard pay for 6 weeks of crunch is only about a 5% increase in your labor budget.

According to conventional wisdom, the following things are true:

  • Labor is usually your biggest expense in a tech company.
  • On top of that expense, you have costs like equipment, facilities, bandwidth, legal fees, middleware licenses, etc. This number is significant, but typically smaller than the first.
  • In videogames, AAA projects frequently spend as much on marketing as they do on everything else, which means this figure is as large as the other two combined.

You can take those facts and plug your own numbers into them, but it’s clear that paying people an extra 6 weeks’ worth of money can’t be more than 1% or 2% of the total budget for the game. Assuming you’re trying to retain talented professionals[4], then a “thanks for the crunch time” bonus is a no-brainer. You need to give those people a reason to stick around. Mistreating your creative talent over stupid rounding-error sized chunks of money is nuts.
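Since this is just arithmetic, you can sanity-check it in a few lines. In the sketch below every ratio is invented for illustration (labor normalized to 1, overhead at 0.6 of labor, marketing equal to the other two combined, per the list above); only the 6-weeks-out-of-104 figure comes from the column.

```python
# Back-of-envelope check of the "rounding error" claim.
# All budget ratios here are made up for illustration; only the
# 6 crunch weeks in a 104-week (2-year) project come from the column.
PROJECT_WEEKS = 104
CRUNCH_WEEKS = 6

labor = 1.0                   # normalize the labor budget to 1
overhead = 0.6                # equipment, facilities, licenses, etc.
marketing = labor + overhead  # "as large as the other two combined"
total = labor + overhead + marketing

# Double pay for 6 weeks means 6 extra weeks of wages on top of the
# 104 weeks of normal pay.
bonus = labor * (CRUNCH_WEEKS / PROJECT_WEEKS)

print(f"Bonus as share of labor budget: {bonus / labor:.1%}")
print(f"Bonus as share of total budget: {bonus / total:.1%}")
```

With these numbers the bonus works out to roughly 5.8% of the labor budget and under 2% of the total. Swapping in a different overhead ratio barely moves the second number, which is the point: it’s rounding-error money.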

Just pay them. It’s not even a big deal.

Barring that, give people a few weeks of paid time off once the game ships[5]. The longer the crunch, the longer the break. Actually, this last one is probably healthier. After 6 weeks of crunch, everyone needs time to recharge.

Or you could split the difference and give the team a little of each. Again, this isn’t that big of an expense in the grand scheme of things.

I don’t claim that Andrew Wilson’s job is easy. Taking responsibility for tens of thousands of people is difficult, and you’re often faced with tough questions with no easy answers. But I can’t believe he’s trying to do that job with so little (apparent) knowledge of how the process works.


I forgot to add the most important point, which is that it would be up to the executive (in our hypothetical example, that would be me) to create the desired culture. You have to let people know it’s okay to go home on time. If left to their own devices, they will crunch on their own. The publisher / studio dynamic is inherently alienating because it’s hard to maintain trust at a distance like that. No matter how nice I am as an executive, I can’t have a casual rapport with all of my dozens of studios. If the project gets behind schedule, the project manager might get nervous about their job, and start pushing people to work longer hours. If crunch isn’t actively discouraged, a team is fully capable of destroying themselves and the project with overwork.

The only way to prevent that is to make it clear that this is not what you want. Like I keep saying: Tone at the top. If I make it clear that I want good morale and a healthy work environment, then the studio heads and project managers below me will try to do that. If I insist that they hit the ship date at any cost, then their leadership style will follow that mindset.

Go home. See your family. Then come back tomorrow and give us your best work. Everybody wins.



[1] Actually, they probably stop covering it when people stop watching, but you know what I mean.

[2] It took me a long time to persuade them to not name it “Shoo7 Guy”.

[3] Like maybe the Battle Royale fad is over, or everyone realized their protagonist just happens to look like a hated character in a terrible movie that just came out.

[4] Because finding replacements is costly and training new hires is costly AND time-consuming.

[5] Obviously you need a few people to stick around for bug fixing, support, etc. But the bulk of the creative staff can go home.


131 thoughts on “Crunch is Proof that Video Game Executives Don’t Understand Video Games”

  1. Delachruz says:

    One thing, coming from somebody who once worked in a soulless corporate abyss where crunching for projects was regularly the norm: one of the scariest parts is that workers will sometimes maneuver themselves into the worst situations. PSA: This in no way means I’m blaming the workforce for enforced crunch time, developers are being abused, it’s not okay. This is just personal experience.

    When we worked on “fictional project 3, Revolutions” we had a deadline to reach. The need for crunch was obvious, but was never spoken of aloud. You saw the date, you saw the garbled mess that, Bill Gates willing, would one day be a sellable product and you knew people would need to work a load of overtime to get this stuff done. But it was rarely management that enforced the additional work hours. It was the other guys. Manager came in, announced the desired date, and with that it was established it had to get done. And the company culture was such that the veterans would practically enforce the horrid conditions by themselves.

    It was not okay to go home. If you went home early, you were betraying your fellow developers. Your hours would need to be caught up by somebody else. The guys who had worked for said company for decades could not even entertain the idea of “Let’s just admit we can’t do this, and get management to change the damn date”. They would give people who didn’t pull enough overtime the evil eye, you would get ignored if you “didn’t pull your weight enough”. I’m talking full on bullying here. I was in the comfortable position of not having to give a toss about the job, as it had been practically forced on me. And it became a personal little game to very deliberately walk out exactly on time. This went on for a bit, until my lunch started vanishing, my jacket would be dumped in the trash container, and at one point I was pretty certain somebody was trying to spike my coffee.

    After the thing was shipped, I lodged a complaint with HR and was more or less quietly let go. Part of me felt horrendous, because some people in there needed those jobs, and it felt like walking out of prison when I finally escaped. Don’t get me wrong, the company was practically Mordor Inc. But the fact that there were actual WORKERS in there who supported the regime of “Robot, not Man” disgusts me to this day.

    1. Decius says:

      That’s a toxic culture that management can’t change by small measures. Turn it off and turn it back on again. By which I mean “Cancel all features that aren’t already complete, ship the game IFF one exists, and then make hard decisions about which people to put onto other projects and which ones to fire.”

      1. Kylroy says:

        It’s also a toxic culture that management at worst encourages, at best deliberately avoids learning about. If management can squeeze free work out of people without doing anything beyond announcing a date, why wouldn’t they?

        I left my (exempt) job of over a decade because they were explicitly telling me to work more hours – not complete certain projects, not resolve X work tickets, just spend more time in the office. It had nothing like the crunch of the video game industry, but it was very much built around peer pressure to have more of what I referred to as “ass in chair time”.

        1. Decius says:

          If your explicit job duties include warming a chair with your butt for a specified number of hours, you are not an exempt employee.

          If you’re planning on leaving your current job soon, it might even be worth it to get documentation that your actual job duties include seatwarming, so that you can file a DoL complaint on your way out.

          1. Guest says:

            Dude, “exempt” here means he’s leaving out his employer’s name to avoid doxxing himself. Context.

            And no, seatwarming isn’t anyone’s duty, it’s a failure of management. If someone is being paid to do a job, and they can complete all the duties in less time than you expect them in the office, then either a) That employee is heavily underutilised, and could use a position with more responsibility and work or b) That position is drastically undertasked, and needs more work assigned.

            They’re also absolutely right. Culture comes from the top, and management are the ones who create this environment, often explicitly for the purpose of using crunch to get free labour.

        2. Sleeping Dragon says:

          This really reminded me of something I saw posted on tumblr a while ago, but after finding and re-reading it I suspect the entirety of it would break the no-politics rule (plus it’s rather on the long side), so I’m going to repost two paragraphs here that I think should be okay (if not I hope Shamus will kill this comment promptly):

          At this point the management no longer needs to influence anyone directly to work through lunch break, simply by keeping up the sense of constantly being a little late for the project they have ensured the lunch-grinders will apply pressure to their peers who aren’t working through breaks.

          As workplace hostility increases towards the “unproductive” members who are expressing their formal right to a break – they will be replaced with new individuals who may not even realize they have the right to a lunch break because working through the hour has become normalized by their peers.

          The thing is, this probably wouldn’t stick with me if not for some things that have been happening at my workplace lately.

          1. Scourge says:

            Something similar happened to me at work. I am working in a call center, which means we have SLAs about how many calls we are allowed to miss, how many people are allowed to hang up, etc.

            When my shift neared the end and I’d get a call, it often would take me some time to fix the issue, depending on the severity of it, or document it properly. Which meant that I’d stay 10 to 20 minutes past my time and going into Overtime.
            Each and every time I had to tell my boss ‘hey, I had some OT here’ and they’d have to sign it off. Then one day, they didn’t.
            I asked why and they told me ‘Since no one else is asking for OT compensation it would be unfair for you to get it’.
            “Alright. If that is the decision.” I said and since that day I have always quit on time, no matter what.

            I am too old/too young to play those stupid games and I am not being paid enough for it either.

      2. Guest says:

        Well, they can. It’s management that pits workers against each other. Someone in a leadership position needs to take the responsibility and let upper management know of the delay, and that upper management needs to be receptive of news like that, so that people don’t hide things for fear of losing their jobs.

        It’s the employer that sets the deadline and creates periods of crunch, and that’s why people turn on each other in those times: you know there’s a deadline, things have to be done, and anyone who isn’t pulling their weight is creating a situation where their colleagues will be forced to.

        I’m not saying the bullying is ok, it’s absolutely not, and it’s awful, and it’s a disgrace that the solution there is to let the victim go instead of fixing things. I’m saying it’s an environment created by the expectations set at an executive level. They absolutely can fix that environment, by following the sort of advice Shamus laid out here, and not literally planning to use crunch time to save on labour, as some publishers currently do.

    2. Hal says:

      That you were let go seems like a great illustration of the toxicity at work here.

      “I can’t work with these people!”
      “Boy, have we got a solution for you, then.”

    3. Mr Compassionate says:

      Man these dev stories are really interesting. It reminds me of how people talk about dictatorships.
      Historians make the point that in the Nazi regime it wasn’t just Adolf Hitler bullying a country into doing these things, or even his direct subordinates. For massive sweeping change like that to happen it always needs to prevail on every level of the society, from the lowliest worker up to the highest official. Most people need to believe in the cause or at least directly contribute to it. In the end everybody is complicit on some level and all the leadership has to do is reinforce the ideal.

      I guess a workplace can be the same way.

      1. Zak McKracken says:

        I think herd instinct plays a huge role in this

      2. Mousazz says:

        It’s like how in concentration camps the SS officers would assign some of the workers to the role of “kapo” to oversee their peers and bully them into submission. A very fine measure to cut down on the number of necessary guards and to foster distrust and resentment among the prisoners, hindering efforts to organize a mass escape. Labor efficiency was shot to hell, but for a camp whose main purpose was to just keep all the undesirables away from the general populace, this method was clearly acceptable.

        However, if you’re in charge of a capitalist company whose goals depend on the efficiency of its workers, having the office be run like a concentration camp is… questionable, even when looking at the situation from a purely economical perspective.

    4. Agammamon says:

      As a manager in the military, one of the hardest lessons I needed to learn was when to tell my seniors ‘no’. If you keep saying ‘can do’ they’ll keep piling on tasks. At some point you have to put your foot down and say ‘I can’t do all this with the resources I have at hand, this needs to be assigned to someone else or you need to decide what slips’.

      1. Christopher Wolf says:

        I wasn’t in the military, but I was a restaurant manager, and when my GM gave me a bunch of stuff to do I told him I would put it in my queue. He was like, what? I have to line up what you want done and it will get done eventually. You want it done faster? Give some to someone else!

      2. Decius says:

        One of the things the military does poorly is teach that to middle management. All too often a task just gets passed all the way down the line.

        I read an example once of an order that an XO gave to a division officer to have a space cleaned by the morning; DivO passed it to the chief, who passed it to the LPO, who passed it to the grunt, who stayed up all night doing it, then fell asleep on watch the next day. LPO found him, told the chief, who told the DivO, up to the XO, who referred the sailor for NJP for sleeping on watch.
        At the actual Captain’s Mast, the captain, for the first time in the process, asked the sailor why he fell asleep. The entire dysfunctional chain of command was in the room with them at the time, apparently either unaware of what the repercussion of their orders would be, or too afraid to ‘talk back’ up the chain of command, or both.

        I think it was from a book titled It’s Your Ship written by a retired Navy Captain turned management consultant.

      3. Boobah says:

        The rule is: The reward for a job well done is more work. That’s as true in civilian life as it is in the military. Though in civilian life you can often put a price tag on your time to help filter out which part of that workload need doing next.

  2. Infinitron says:

    The executives didn’t invent crunch, of course. In the old, wild days of game development before the big corporations took over, working hours that would nowadays be defined as “crunch” were probably more widespread.

    The Looking Glass guys who made System Shock back in the early 90s? You bet your ass those dudes were working weekends, sleeping on the floors and eating ramen.

    1. Matt` says:

      That’s a problem that goes wider than the games industry and into the rest of tech – taking the cultural norms and behaviours of the scrappy little startup where everyone is very personally invested, lionising those attitudes, then expecting people to operate the same way within a corporate behemoth.

      1. kincajou says:

        This is ridiculously common in all fields of scientific research at academic and non academic level…it’s really unhealthy how modern science relies on burning through human lives

        1. Hal says:

          Half the reason I opted to leave grad school with an MS instead of continuing on to get my PhD. No matter how much anyone tells you that work life balance is important, there are people around you willing to work 80+ hour weeks, and the powers that be notice the enthusiasm gap. And there’s always more of those people waiting in line.

          Worse still is that this insane workload isn’t even a guarantee of success in the scientific world. It’s your entrance fee just to get a chance to try.

          No thank you.

    2. Decius says:

      And they also choose when to crunch and when to delay release or drop features.

      That’s key- the people deciding to crunch need to be the ones who crunch.

  3. Lars says:

    Saying teams should never crunch is like saying that airbags are a waste of money because accidents shouldn’t happen in the first place.

    Ahh! Terrible car analogy. Welcome back!

    1. Kylroy says:

      Eh? I think it works fine.

  4. anorak says:

    Did Randy Waterhouse write part of this article, or does everyone feel the same about the hazards of Captain Crunch?

    1. Nixorbo says:

      No, but I do have a sudden intense craving for Cap’n Crunch.

    2. Canthros says:

      Wow. That’s a deep cut.

      Deeper than the Captain Crunch-induced lacerations on the roof of one’s mouth.

  5. MadTinkerer says:

    Crunch is Proof that Video Game Executives Don’t Understand Video Games

    And they would know it if they didn’t keep firing everyone after each project shipped.

    1. DerJungerLudendorff says:

      The system works then!
      Plausible ignorance and giant sacks of money, what more could an executive desire?

    2. Sleeping Dragon says:

      To celebrate the release we’re having an out of office pizza party* and getting bonuses**!

      *as your desks are being cleaned
      **for those of you who will remain employed and assuming the game scores ridiculously well on metacritic

      1. Guest says:

        *** even then, we might fire you to reduce our labour overheads for the financial year, because it makes the profit margin look more dramatic for our shareholders.

  6. Hector says:

    I’ve long wanted to point out a comparison with Hollywood. Game companies for years tried to copy Hollywood film-making (and to a degree, still do), but they never as far as I know copied Hollywood’s systems. There’s all kinds of problems with the studios – but they don’t generally end up needing a desperate struggle to finish a movie barely in time to release. They usually break filming into multiple phases with a *very* strong emphasis on planning, and have backups ready if things don’t go as planned.

    1. Kylroy says:

      I feel like the difference is we’ve been making big-budget movies for about a century, and we’ve only been making comparably expensive games for ~15 years. There’s a rich history of moviemaking that can suggest almost all the problems you’re likely to encounter – there’s far less AAA videogame production history to refer to.

      Granted, as Shamus points out, video game management is so insulated from development that nobody is even *trying* to learn from the history we have…

      1. Hector says:

        That’s why the industry leaders should be pro-actively trying to learn from other businesses’ best practices. We’ve had 20 years to develop those skills, but the executives at places like Activision or EA are doubling down on employee and customer abuse instead. Every publisher is stupidly scrabbling for pennies in the dirt rather than developing the social organization to succeed.

        1. Kylroy says:

          And EA/Activision/etc. are not doing it because, well, they don’t have to. They’re still, on balance, making buckets of money. Optimizing this process will take a *lot* of work, and the benefits will probably not go to the first people to do it – they’ll be trying out new methods, and many of them will end up being worse than the status quo. What they learn will benefit the *next* team, however, and their efforts will benefit the team after that, etc.

          But all of that could take a half-dozen years to pay off, and if you’re a CEO expecting to cash out next January, why put that drag on your earnings?

        2. Guest says:

          Also, films are heavily unionised, which makes it very difficult to get away with exploitation.

    2. Nessus says:

      Kylroy is right about the historical time scale. If you go back to the early “studio system” era of Hollywood, you start seeing a lot more similarities to modern game dev/publishing. The actual strategies are different ’cause the logistics of the media are different, but the culture, corporate attitudes, and the gaping lack of legal regulation/protections because the law hasn’t fully noticed this new media yet are VERY familiar.

      1. Nimas says:

        It also helps I imagine that there are actual unions in Hollywood. I think there may be problems associated with having them, but it does allow some pushback against higher level management that just doesn’t exist in the games industry.

        1. Nessus says:

          Unionization is part of what dismantled the old Hollywood studio system. Like I say, the situation is a REALLY obvious case of history repeating.

          It even had the same dynamic people cite further down, where unionization had a hard time gaining steam because there were always heaps of desperate, starry-eyed newbies eager to work in the industry, whom the studios could use to replace “uppity” workers.

  7. Thomas says:

    The UK Government enforces mandatory over-budgeting and over-scheduling of projects, based on statistics of other projects running over.

    This is because underestimating resources is a cognitive bias that even very experienced people can’t avoid.

    1. Decius says:

      There’s even a name for it: Planning Fallacy.

      The mechanism is mostly that people only schedule the tasks and complications that they know about, and then form an estimated completion time equal to the sum of the expected times of things that they know about.

      Changing that to a model where you estimate how long it will take you, personally, to do something by looking at how long it generally takes people like you to do it is more accurate, but requires humility: Most people think they are above median.

      1. onodera says:

        > Changing that to a model where you estimate how long it will take you, personally, to do something by looking at how long it generally takes people like you to do it is more accurate, but requires humility: Most people think they are above median.

        Wait, what? If I am arrogant, I will think my estimates are too *small*, because other people are worse than me and will require more time to do the same task.

        If I am humble, I will think other people will do the same task faster than me.

  8. Joe Informatico says:

    Hopefully not. If a AAA game is behind schedule, then you should know about it many months ahead of time. At that point I’d have to make a call to either move the ship date and allocate more money, or leave everything the same and tell the project manager to cut features and content until the scope fits the budget.

    So a few things I’ve read have suggested that AAA games in development are usually a mess of different levels and modules and features being worked on by multiple teams, some of which aren’t even in the same time zone, and it’s only in the last 6-12 months that the producers work out a structure of what they have to produce a functional product. It sounds like the games industry equivalent of “finding the film in the editing bay”, i.e. taking all this principal photography from a haphazard shoot of a troubled production or one with a constantly changing script, and trying to make a workable narrative out of it.

    I don’t know if this is the way it has to be for AAA game development, or if it’s part of the larger issues about project management and the development pipeline that we know are widespread in the industry.

    1. DerJungerLudendorff says:

      A lack of clear direction does seem to be a common theme in a lot of the major stories we’ve gotten in the recent period. And it would certainly cause a lot of scheduling problems and last-minute changes.

    2. Gethsemani says:

      The difference between a movie and a game is that a movie doesn’t have to re-invent the camera for every movie. The interactivity of games, and the fact that gameplay is the big draw of a game, means that a lot of a game’s production is about coding all those cool features you want so they’ll actually show up. The Matrix revolutionized action movies, but from a technical standpoint all it really did was take some cues from Chinese action movies about the use of wires in action sequences and combine them with green screens and sequenced rapid-cycling cameras. The effect was stunning, but for the most part The Matrix is a traditional action movie, and the big hurdle is getting the choreography down and the rights to shoot the action scenes in some cool places. A movie can just follow the script and revise the scenes that are too expensive, too technical, or require shooting rights for places you can’t get (e.g. the interior of Notre Dame).

      Compare that to something like Borderlands, which melded the action-RPG and FPS genres and required tons of coding to get even the basics of the leveling system down, not to mention the loot system or the shooting system. All of these had to be done from scratch. The cel-shaded graphics had to be coded from scratch. Look at Anthem and the flying mechanic for another great example. They wanted it in, but the technical issues of the engine and the gameplay problems of unrestricted aerial movement made it hard to incorporate. A game is more like a puzzle than a movie, in that you need to make a lot of smaller pieces fit, and until fairly late in the development you can’t know if the “flying” piece will actually work, or if it will even fit with the melee-oriented combat system, or if the inventory system (which hogs memory due to all the modifications a player can do to all items) can be combined with the open world you wanted without murdering the RAM of a PS4. Hence games are often made “in editing”, because your cool story absolutely required the protagonist to fly, but the limitations on RAM meant that flying had to go because the levels were too small and it was too easy to glitch out of the game world. So now you’re scrambling to re-write the story to omit all the flying references and the sequences that required flying, replacing them with your back-up sliding mechanic.

      Considering the challenges facing a modern AAA developer, I’m slightly amazed that we get good games to play at all.

      1. Kylroy says:

        I’d say this also explains why videogame stories tend to be so weak. When you’re struggling to make an enjoyable game out of the features you’ve managed to cobble together, ensuring thematic consistency and avoiding ludonarrative dissonance is priority ten or lower.

        Obligatory link:

      2. Cubic says:

        Not to be a downer but that sounds to me like the Wachowski brothers (at the time) were a lot smarter about how to spend their time and energy.

        It’s a bit surprising that the megacorps haven’t got canned versions of everything already. Conceptually, you can start planning and researching Madden 2025 like right now. Likewise, if you sit somewhere near the exec suite at EA you hopefully already know that Frostbite will have to support the following list of sequels with roughly known properties. Most games conform to some known template with known solutions, so planning should really be straightforward, like building a new house. And still we end up with crunching and burnouts and arbitrary deadlines. Sometimes it seems like game development is drama seeking by culture.

        1. Gethsemani says:

          The Maddens and the FIFAs and the NHLs do come out on time every year though, no? That’s because they are games where you don’t need to do tons of coding year to year (similar cases exist, with CoD being the eminent FPS example); you just make iterative improvements on what you already have. You might shave some RAM load here and there so you can incorporate that cool new mish-mash mapping (probably not a real thing…) on players, and make sure to do a bunch of new art assets to ensure the right players are in the right clubs, etc.

          But if you want to make a sequel to a AAA game, you need to do a lot of work. All new art assets (because Shoot Guy 7 is set on Mars, not Luna like Shoot Guy 6), you want to really get down with that slide/stomp/roll mechanic and someone has a real cool idea about on the fly weapon modding. That’s on top of completely overhauling the graphics coding, because you want mish-mash mapping and spectral voluminosity shaders (probably not a thing either) and since the maps are going to be twice as big, you need to do all that while finding a way to cut RAM load by at least a third…

          Once again, game making is quite unlike most other forms of creation, because you are trying to figure out how to build the walls even as you are supposed to be putting them up. All parts of a game are pretty much made simultaneously, which is part of the problem: in a real house the windows are delivered after the walls are put up, but in a game you make the art assets, sounds and scripts while the maps are still in pre-alpha. It is like putting in windows and running electrical wiring while the walls are going up.

          The reason is that it’d be even more expensive to stagger development (the programmers do all their work, then the level designers, then the art teams, and then everyone gets back together to polish), and it’d take much, much longer to make games.

          1. Cubic says:

            It seems to me a lot of the cool new radical technology should be done, or at the very least properly planned, before development starts. That’s how you derisk your project and schedule. Building the plane while falling towards the ground is exciting, but a recipe for failure. Most of what you mention above (like cool new moves or fitting levels in a certain amount of RAM) can be investigated in advance. You probably should do a lo-fi prototype if possible. This should really be part of megacorp R&D, IMO.

            Even with that sorted, you usually still have plenty of issues to manage, like story, constructing 3D assets (something like 90+% of the team works on this, right?), music, testing, voice acting and whatnot. Here too the managers should have a very good idea about how many cut scenes and maps will be needed and what they involve, the tempo of producing assets, when design plans need to be available and stuff needs to be delivered, and so on. Ideally you should already know the end product is a solid game, otherwise you get sent to iteration hell. (You sometimes have to reshoot a movie but you really would prefer not to.) I’d probably try to set up various toll gates to try to avoid redoing a lot of work at a later stage.

            Anyway, the big advantage would be if all managers now have a reasonable idea about what is involved in making this game and what are the possible future issues. This one is twenty maps, six of them detailed industrial, etc. These are the enemies: 1, 2, 3 … That way you can also discuss and scope the project properly to start with, as well as ongoing. Perhaps less exciting but that’s the point, isn’t it?

            1. Nessus says:

              This was kinda my reaction too. What Gethsemani is describing sounds like a runaway business model failure to me. I see a lot of people within the games industry insist that it has to be this way, but too often their reasoning sounds like a case of “when all you have is a hammer, every problem looks like a nail”.

              What he’s describing is just classic chaotic, inefficient, self-sabotaging non-planning due to impatience, hilariously spun as a deliberate strategy. But it’s a “strategy” that a whole industry has allowed itself to become deeply dependent on by bending its entire business model around it. So they’re wrong that “it has to be this way” in the sense of this being the only viable strategy (much less the best), but they’re right in that the entire industry being entrenched in it makes adopting anything else so non-competitive in the short term that any company trying it would go under before it had the ability to change anything.

              It’s a dysfunctional business model that’s bad for everyone, but it’s so deeply and broadly integrated that change can only come from the outside. I.e. either a crash (all but impossible: the market is too broad, businesses can crash, but not the industry), or legal regulation (as was the case with Hollywood back in the day).

              1. Gethsemani says:

                The best solution I’ve read, because I agree that modern AAA game development is dysfunctional, is to effectively model it after Hollywood. The Hollywood way to make a film today is not to grab “Awesome Film Studio”, which has the complete package of filming units, cameramen, audio staff, stage hands etc., and have them do a film. Instead you’ll pitch a film, and the studio that picks it up will get in contact either with the unions or with trusted partners to get a crew together. That way you can hire filming units piecemeal, you can get the exact number of cameramen, audio guys, make-up artists and whatnot that you need, and you can do it for a limited period of time. So you film for two months, after which all the filming crews and actors go on to new projects, and the editing and CGI groups you hired get to work for whatever amount of time. This way, you get a lot of really good people onto the project, but for just the exact amount of time they need to be on it.

                This also solves the problem of scoping the project, because you might not be able to keep everyone around if you get time overruns on filming, for example. Everyone is headed to a different project and needs to keep the time table, which means you either need to be liberal with the overtime comp (unions being strong and all) or know which parts can be cut first if you start running out of time.

                Ubisoft has sort of begun to work on a model like this, with their “overrun office” where excess staff are pooled until they are assigned to another project. What needs to happen is that we stop seeing game developers as these monolithic artistic teams and start thinking of them as a bunch of disparate professions that all come together for various amounts of time to deliver a piece of media. I also think that player/gamer attitudes to games need to change, because we’ve adopted the attitude that killed movie sequels in the ’90s: that the sequel should always be more of everything, while adding something new and cool. It is a recipe for disaster when doing something as complex as computer games.

            2. Boobah says:

              I’d suggest that this became the mode for game development back when you had to design your game to use the newest bleeding edge tech because by the time the game was finished that would be the gamers’ entry level rig, and if your game couldn’t use that it’d look dated at release.

              Plus, if you have this as your model, nothing is set in stone until your game ships. If then. Sometimes this means it’s easier to take advantage of something new, from inside or outside your development team. A lot of the time it becomes an excuse to dither instead of making a decision. Of course, while dithering your team continues to spend time and money building stuff that won’t ever be used in the final game.

              AAA games should be the poster children for a more defined game creation process because they aren’t the ones where you’re experimenting with stuff that may or may not work; that’s for indies and other less expensive games.

              1. Sleeping Dragon says:

                Meaning we can basically trace at least some of it to the fixation on photorealism and the absolute bestest graphical fidelity, because if Nvidia released a new chipset that does skin moisture ever so slightly crispier three months before our game is out, we better redesign everything to take advantage of that…

            3. Guest says:

              But those things are also development, and they’ll also be important in project management for any projects using that technology, because those projects are dependent on it.

              You’d end up with the same problem with extra steps.

              A dedicated team working on the tech has to finish the tech before games can use it, which means if that falls behind, you then have to delay all of those games, which means you’re facing the same dilemma. Also, the developers who develop the engines and those who develop the games are often the same people, and features created in the games are sometimes rolled back into the engine. And very often you end up with the issue where you need new tech in the game that isn’t part of the engine, and you learn this by developing the game: see everything that uses Frostbite. No third-person camera? Have to develop it. Not designed for an RPG? Have to develop it. Getting the physics of flying an Iron Man suit to feel nice? Have to develop it. Those are all parts of developing the game.

              It’s not an issue of how you discretize your project management, because dependent milestones are dependent milestones.

  9. Joshua says:

    “Rather than adopt an inflexible zero-tolerance policy towards crunch, why not just reward people for their sacrifice? In a 2 year project, paying people double their standard pay for 6 weeks of crunch is only a 5% increase in your labor budget.”

    To play Devil’s advocate, I’m guessing that you’re incentivizing people to fall just a little bit behind so they can make a ton of extra money at the tail end of the project. Considering how many ship dates are for the holidays, you’re also tying this extra money into being a form of Christmas bonus. As you said, people will be self-destructive and destroy their own family relationships to get some of this extra money. I had a friend who worked 70 hours a week in November and December so the kids “could have a really NICE Christmas”. He’s now divorced. Even mature workers with families who would rather not work this overtime might get stuck there if some of the single 20-somethings subtly cause delays.

    I’m guessing you’d have to work in Project Management incentives to counter this. So, they would get some overtime like the rest, but would get an even bigger bonus if it didn’t get to that point.
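    The 5% figure in the quoted passage is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, using only the column's own hypothetical numbers (6 weeks of double pay in a 2-year, i.e. 104-week, project):

```python
# Sanity check of the quoted "5% labor budget" claim: pay double rate
# for 6 weeks of crunch in a 2-year (104-week) project. All numbers
# come from the column's hypothetical, not any real studio budget.

def crunch_cost_increase(project_weeks: float, crunch_weeks: float,
                         crunch_multiplier: float = 2.0) -> float:
    """Fractional increase in labor cost from paying `crunch_multiplier`
    times base rate during crunch, versus base rate throughout."""
    base = project_weeks                       # base pay: 1 unit per week
    extra = crunch_weeks * (crunch_multiplier - 1.0)
    return extra / base

print(f"{crunch_cost_increase(104, 6):.1%}")  # prints 5.8%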

    1. Decius says:

      Align the incentives: pay people more for crunch, but pay them even more for shipping on time without cutting features.

      1. Cubic says:

        How about simply paying overtime? For example, the Fair Labor Standards Act seems to merely require 1.5x pay for hours beyond 40 per week, including weekends, nights, etc. I would guess that’s an incentive to instead get things done in an orderly manner.
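        A minimal sketch of the time-and-a-half rule described above (the 40-hour threshold and 1.5x multiplier are the FLSA defaults; the hours and hourly rate are made up for illustration):

```python
# Time-and-a-half overtime: hours beyond 40 per week are paid at 1.5x.
# The rate and hours below are invented for the example.

def weekly_pay(hours: float, rate: float, ot_threshold: float = 40.0,
               ot_multiplier: float = 1.5) -> float:
    regular = min(hours, ot_threshold)
    overtime = max(hours - ot_threshold, 0.0)
    return regular * rate + overtime * rate * ot_multiplier

print(weekly_pay(40, 30.0))  # 1200.0 -- a normal week
print(weekly_pay(60, 30.0))  # 2100.0 -- 20 overtime hours at $45/h
```

        Note that a 60-hour crunch week costs the employer 75% more than a 40-hour week, which is exactly the incentive to get things done in an orderly manner.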

        Also, project managers should be responsible for knowing when to stop faffing around. Like, just count backwards to see when you need the various parts to be done to release two weeks before Christmas.
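        The "count backwards" idea is just date arithmetic; a sketch, where the ship date and stage lead times are invented for the example:

```python
# Back-scheduling from a ship date: each stage must start early enough
# for everything after it to finish. Stages are listed latest-first;
# all dates and durations are hypothetical.
from datetime import date, timedelta

ship = date(2019, 12, 11)   # "two weeks before Christmas"
stages_latest_first = [     # (stage, weeks it takes)
    ("certification & manufacturing", 4),
    ("final QA pass", 3),
    ("content lock", 6),
]

deadline = ship
for stage, weeks in stages_latest_first:
    deadline -= timedelta(weeks=weeks)
    print(f"{stage}: must start by {deadline}")
```

        Which immediately tells the project manager that, under these made-up lead times, content has to lock in mid-September for a mid-December release.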

        1. Decius says:

          If time and a half is enough to make people happy working extra hours, then that is sufficient.

          If paying them straight time for extra hours is enough to make them happy, that is sufficient.

          One problem with exempt employee crunch is that exempt employees paid a flat salary don’t get paid more when they work more hours.

    2. Duoae says:

      I’ve never seen a whole group of people ‘cheat’ a crunch system like this.

      First off, people are individuals, and most wouldn’t willingly do badly at their jobs or enforce crunch time on themselves, their friends or family. I worked on (actually led) a project which entered into self-imposed long-term crunch*, and management were constantly telling us to pull back and rest.

      Further to all that – performance reviews are a thing. So people purposely falling behind on work will most likely be caught by that system if the management are even halfway competent.

      *For those interested, the project was going well but a bit late after 2 years development at small scale. We’d had problems but had been able to overcome them. The issue was when we came to scale up – all of a sudden the process we had developed no longer worked.
      We (mostly I) spent a couple of months figuring out how to fix the problems (which essentially meant redesigning the process from scratch). Then when we scaled up again we found a new problem – the new impurities introduced to the process no longer purged to the required level. I spent a long time trying to figure that out…
      At the same time, results came back from a study indicating we had trace amounts of an undesired structure in our material that wasn’t being detected by the contract company we had used to perform the analysis (most embarrassingly, it was our client that found it). This meant that we now had problems on two fronts and not enough resources or time remaining to finalise the project before submission date.

      The project was (rightly) cancelled at that point. I personally really suffered from not only the failure but also the continuous long hours and lost weekends. I think it was easily one of the worst periods of my life. I wouldn’t wish it on anyone or wish to do it again for a bit more money (remember, you’re only getting overtime for the extra hours… it doesn’t actually add up to that much of your total take-home salary).

      1. Guest says:

        Being deliberately bad at your job so it takes longer is already possible without crunch anyway, as you’ve pointed out. We already have systems in place for that: deadlines, progress reports, project managers, multiple layers of accountability forced on workers. Why is everyone so keen to play the “devil’s advocate” for what is already taken as the way things are done by the literal devils who run these companies?

        Plus, if you’re deliberately bad at your job to get more hours out of it, then anyone who isn’t doing that will be more valuable than you, which means you won’t advance, and also your manager will come by to ask why you’re not as fast as them, particularly if you’ve been there longer. This never works, anywhere, from the lowest minimum-wage position to the highest. Even the monsters who run these companies aren’t doing badly out of spite; they’re amoral. They’re using crunch because it’s cheap, cheap is good to shareholders and the board, and they’re firing people afterwards for the same reason. The mismanagement, the cruelty, is the point; it’s intentional. I don’t think it’s proof that executives don’t understand game dev (though I agree with all of Shamus’ points bar the headline), I think it’s proof that they do understand a system which systematically disadvantages workers.

    3. Daerian says:

      Overtime requires special, increased pay by law in most civilized countries.

    4. Guest says:

      The devil doesn’t need an advocate.

      The exact opposite problem is already happening: the current system incentivises underbudgeting on time and labour, because you can crunch at the end for free (so long as you don’t consider burning out your workers and ruining their health a cost, which bosses do not), to the extent that crunch periods are planned for, and extremely long crunch periods are booked to meet release dates months and months off.

      As Shamus already pointed out, it’s at most a 5% cost in labour, on something where you’re already planning on doubling the cost of production with marketing. If you can afford the entire budget of the game or more in marketing, you can afford to pay the workers who made it, especially when the amount you’d be paying out is less than the CEO’s bonus.

  10. Decius says:

    I think it’s critically important to allow individual professionals to work late on their own. When someone has a pet feature that won’t fit in the schedule, let them work on it outside of regular working hours.

    Once you start doing more than just letting people work late on stuff they want, do whatever it takes to make them happy to work those longer hours. Additional pay can go a long way, as can paid vacation time. Job satisfaction can go pretty far- but all of those things have a natural limit, beyond which people are not happy working. At that point, the people who are not happy with the crunch cut back to regular hours or a lower amount of overtime, and the release date slips because the project manager failed to correctly estimate the time required to complete the project.

    If that is because the PM failed to predict how much the team would tolerate crunch time, that’s on them. If they failed to predict how much time the project would take, that’s on them. If there were too many unpredictable setbacks to make up with the slack that the PM left, that’s on them. (Note that ‘we had these several setbacks that are even retrospectively unlikely, and not enough slack to deal with all of them’ is a complete exoneration; the only way to have zero risk of missing a deadline is to be excessively conservative… and also many of those kind of setbacks result in the team being more tolerant of crunch time, and thus use less slack than they otherwise might)

    1. Thomas says:

      It’s very easy to let that build into a crunch culture though. It happens in every work environment that it turns into a competition to gain favour by working late.

      Doubly true for videogames where people are making massive works of art that can be consumed by millions.

      I disagree with Shamus’ title, because the most developer-led studios are often just as bad at crunch. CD Projekt are massive crunchers. Indie developers often work absurd hours that they’re not going to make their money back on. Knowing how games work doesn’t stop people crunching.

      1. Decius says:

        Don’t grant favor for working late that is disproportionate to the benefit that you get from it. If people are happy to work for favor, and you are happy with the amount of favor you offer for working late, that’s fair.

        1. Higher Peanut says:

          But it never ends up fair. Work politics dictates competition. When time comes to let people go, will the company let go the ones working extra time with enthusiasm or the others working standard hours to fit with a family? The fact that extra time exists drives people to compete for it when promotions/jobs are limited and an internal crunch culture develops. It’s not money or compensation you’re working for, your whole job or career can be on the line.

          1. Guest says:

            Exactly, as someone pointed out above, they have the same issue with academia, where crunch isn’t enforced, but if you’re up against someone who can, for whatever reason, do 80 hour weeks or whatever, you do automatically look like a worse candidate for limited positions, despite the fact that for most people, that sort of workload is damaging to both mental and physical health.

            For people’s own good, you need to design a system that looks after them.

            As a rule, I don’t like bosses, but the best one I’ve ever had, someone who I’d happily call a mate, is one who saw me limping on the job, because I’d been so overworked that the sweat running down my legs had turned into chafing so bloody I could hardly walk, and damn was I glad that the uniform pants were black. I didn’t want to go home early or ask for relief, because I was new there, and I didn’t want to be seen as less useful, or lazy, or weak. He came over and spoke to me about it, and said that there was no point in me injuring myself further, and that I should go home, and even offered to sort out someone to cover my shift the next day.

            That’s the thing, some people are always going to work themselves past the point where they should, because they’re stubborn, or because that’s their personal work ethic. The responsible thing to do is to not let them do that, and show them that it’s a fair and decent workplace, and they have nothing to prove.

    2. Kyle Haight says:

      I agree based on personal experience. A couple of decades ago I had a tech job that had an extended honeymoon period, of the “I can’t believe they’re paying me to do this” variety. During that time I would regularly put in a full day’s work, come home, eat dinner and then work another four to six hours, not because I had to or was even encouraged to, but simply because there was literally nothing else I wanted to do more with the time.

      It didn’t last, of course; such things never do. But while it did it was, and remains, one of the best work experiences of my life. I wouldn’t want to have missed out on that because of an arbitrary limit on my working hours. I wouldn’t want anyone else with a chance to do a job they love that much to miss the opportunity. Jobs like that are often once in a lifetime; if you are lucky enough to find one, make the most of it.

      1. Kylroy says:

        I think there are a large number of people perpetually trickling into the programming workforce who have experiences like yours, and this allows companies to keep doing crunch. A job like that may be once in a lifetime, but there are a lot of lives out there.

  11. Dreadjaws says:

    … everyone realized their protagonist just happens to look like a hated character in a terrible movie that just came out.

    I’m curious, has this ever happened? I’m trying to think of an example, but I really can’t come up with one.

    1. shoeboxjeddy says:

      This might not be fully true (more of a rumor than a fact), but early previews for The Last of Us had Ellie looking very similar to Ellen Page. Which was awkward, as she was being paid to work on a different game (Beyond: Two Souls) and had not agreed to the use of her likeness. Naughty Dog both denied that that’s what they were going for and subtly altered Ellie’s look to appear less like Ellen Page before the game came out.

      1. Duoae says:

        Yeah, but that was because Ellie was a ‘youngified’ version of the actress she was based on. The actress looked not too dissimilar to Page (if you squint a bit). To be honest, I see this as an indictment of a culture that values similar features as beautiful.

        Here’s a decent summary. I remember seeing better pictures though but I can’t find them anywhere…

      2. Dreadjaws says:

        That’s not really the same situation, though. Videogame characters looking similar to real-life actors is not a new thing: What I mean is more or less literal: the protagonist (or at least a major presence) in a game looking like a character from a movie that recently came out and everyone hated. Like, imagine if one of the characters you control in Final Fantasy VIII (released September 1999 in the US) looked like Jar Jar Binks (TPM was released in May 1999 in the US). Something like that.

    2. Kyle Haight says:

      The closest example I can think of is the resemblance of JC Denton from Deus Ex to Neo from The Matrix, but Neo wasn’t exactly a ‘hated’ character.

    3. Syal says:

      There was the time they were making DuckTales but didn’t want to remind people of Howard the Duck, so they reworked Scrooge McDuck to look more like Morgan Freeman.

  12. N/A says:

    “Saying teams should never crunch is like saying that airbags are a waste of money because accidents shouldn’t happen in the first place. The pursuit of an unattainable ideal will prevent you from building a robust system.” Bad comparison. Airbags are actually helpful; crunch is not. There’s a finite amount of useful labour a human being can perform every week, and going beyond that does not get you ’emergency power’, it gets you ‘I am so fatigued that I am making mistakes which will take more hours to fix than the extra hours I am working.’

    Crunch should never happen, and if you find yourself tempted to reach for it in response to mistakes, you need to find another solution, one that actually works.

    1. Dreadjaws says:

      Maybe read the article first, where Shamus explains this whole thing in detail.

      1. N/A says:

        I did. I think it’s founded on a faulty premise.

    2. Kylroy says:

      Crunch *does* work, in limited amounts.

    3. Leeward says:

      Here’s a completely made up hypothetical situation for you:

      Let’s say that a consumer electronics company makes a bet on a product. It believes that the product will do well in the market and make oodles of money. It will hire more workers to build the product, and maybe even expand the department to make more similar products. On the other hand, if the product doesn’t do well, it will have to lay off some of the workers who designed it, the ones who are building it, some people in marketing, and it might even have to sell off assets like factories to cover the losses.

      Now, 6 weeks before the product is supposed to ship (it needs to be in distribution centers by late October in order to be on stores by Thanksgiving) someone notices a major problem that will require a redesign of a significant component. It’s a safety issue with a central feature, so it’s not something they can skimp on now and maybe fix with a patch after the ship date. The decision is made to spend a few million dollars on air freight instead of using boats to distribute these things around the world, but that doesn’t buy enough extra time.

      Management is now faced with a decision: kill the whole product (a year-long delay is not in the budget) or ask engineers to work overtime to fix the problem.

      Are you suggesting that on hour 41 of the first week, the engineers in question will suddenly cease to be effective? How about hour 42? They can pull in more people from other product development teams, but those people will take time to come up to speed. Does that coming-up-to-speed time fatigue them at the same rate as working-problems time?

      1. Duoae says:

        Since I have some experience in this sort of thing my answer would be: “Maybe, yes. At hour 41, an engineer might become ineffective.”

        The why of that answer carries quite a lot of nuance and is obviously dependent on the particular situation. Hence the maybe.

        Let’s make a few generic examples so people can equate.

        – How long does it take to teach someone to be a master at something?
        – How long does it take to code a database with a network accessible front end?
        – How long does it take to merge legacy data/operations with new ones?
        – How long does it take to design a new pharmaceutical drug?
        – How long does it take to prove a mathematical theorem?
        – How long does it take to think up a new societal paradigm?

        I would think that the answer to all of those is “it depends”. Shamus is fond of saying something like (paraphrasing here): “you spend a lot of time thinking about how to solve a problem instead of actually coding it and the time to achieve the answer cannot be anticipated.” That works for a lot of these sorts of problems that crunch just will not solve. Crunch is good for problems that can be worked through physically because you’re always making headway, even if you’re a little bit slower as time goes on.

        Remember, for those people already working on a given project that experiences a problem near the end, they will already be tired. They will already have fought lots of battles with design choices and unanticipated problems. They will already have thought of everything that they can plausibly think of about said project multiple times and probably discussed it as well.

        If a problem crops up at the end of a project like you describe then it will either be trivial to solve (and really won’t involve a lot of effort to fix) or be monumentally difficult to address.

        If you want two public instances of this, you only have to look at the Galaxy Fold and the Note 7 for the two extremes.

        The Note 7 had a silly problem that was easy to solve and fix (though bad publicity meant they didn’t bother) whereas the Fold has design flaws which are probably insurmountable without a huge redesign.

        The second part of your question about bringing new people onboard to help the problem also suffers from the same issues: bringing on more hands to help physically (as long as they are conversant with the methodology) is great and easy. Bringing on people to troubleshoot a problem is more difficult, time consuming and, yes, tiring for those already working on the problem.

        Those people have to stop working on the problem to do things like catch the newcomers up on the whole project, write ground-up summaries, and go over and repeat previous discussions and ideas. Often, a person new to a project will have the same ideas and thoughts that the pre-existing team has had, because humans are not the font of original ideas we often believe we are.

        I have a favourite anecdote I like to regale people with:

        We were hiring for a new position and were struggling to find suitable candidates. We needed someone with a degree in chemistry and were only getting college graduates applying (college in our context is 17-18 yr olds).

        Our HR rep turned to us after an interview with a candidate who had a degree but who we felt wasn’t suitable, and asked why we didn’t just take one of the college students and train them on the job.

        My answer was that it would take too much time and too much effort to do that. She didn’t really understand the difficulty, so I changed tack. I pointed out that I could go up to the admin team and sit next to our accountant, and she could teach me accountancy. However, it would take her at least 3-4 years to teach me all the things she had learned in the 3-4 years of her degree. Most likely her teaching would be incomplete, because the subject matter is narrow and she’s not a trained lecturer/teacher with experience passing on information. As soon as a situation cropped up that I wasn’t familiar with, I would likely be out of my depth.

        Similarly, I can’t just teach someone all the underpinnings of chemistry whilst also doing my own job and theirs (because I’d have to monitor them to ensure they were doing things correctly).

        Sorry for the long, rambling comment!

        1. Leeward says:

          So you may have guessed that my hypothetical wasn’t actually totally made up. They did decide to pull in people from other groups. I was one of them, and since I happened to be working on a product with an extremely similar codebase, I got to do a decent amount of the work. It was definitely not a 40-hour/week kind of project either. I remember leaving work at 3:00 in the morning once (but only once).

          While worker effectiveness may have been diminished in hour 41, it didn’t suddenly go negative.

          Epilogue: I’m looking at the product in question on Amazon. It’s been discontinued (I was the software lead on its replacement), but they’re still selling, and for $30 more than the MSRP at launch. It’s got 4.3/5 stars with over 3,000 customer reviews. Before building this product, the company was used to selling tens or hundreds of thousands of units over a product’s lifetime. This product sold millions in its first year of production. Internally, a lot of people considered it a failure since the end was such a fiasco. However, it was the best selling product they had ever made at the time.

          The problem was in the battery management software, which was mostly rewritten from scratch and tested exhaustively by most of the company’s testers (pulled from other projects). I’m glad you mentioned the Note 7. My personal nightmare was that a product I worked on went all fireball on someone. It never happened. A significant chunk of the company’s software developers went into crunch mode and that was the outcome.

          1. Duoae says:

            Sounds like it worked out in the end, then! In my particular situation, it didn’t because the period of getting up to speed was relatively long in comparison with actually working on the project.

            At the time, there were just two of us, and everyone brought in to help was based at another site and was only giving advice via telepresence. I spent all my time bringing them up to speed instead of working on the problem.

            I think the fact they weren’t physically present was also a negative.

    4. Nessus says:

      Crunch is not something you should start a project intending to do, but that’s not the same as it being something you should never do.

      The old saying about how “no plan survives contact with the enemy” is true for basically every type of project, not just warfare. No matter how good your management is, there are always going to be unexpected failures, complications, oversights, acts of God, etc. that force you to adapt or extemporize and play catch-up mid-stream. Even the best planners are not gods. If the amount of catch-up incurred exceeds the schedule, some degree of crunch is the only alternative to just giving up on the entire project. Good management can vastly reduce the probability and/or probable severity of this, but never 100% eliminate it.

      If, as is currently the case in game dev, crunch is treated as an actual part of the plan rather than something you keep in your back pocket for when the plan breaks, THAT is incontrovertibly a sign of incompetent management and bad business models. That’s a culture of exploitation masquerading as a culture of failure, and absolutely should be called out and shouldn’t be tolerated. But demanding that crunch should never happen PERIOD and insisting that ALL examples are signs of incompetence is like saying buildings should never have fire extinguishers because fires are ALWAYS signs of incompetent construction.

    5. Decius says:

      Crunch works for some things. In the canonical example, if everything is going according to schedule and a programmer gets in a car crash and has to take a few weeks off, and QA finds a showstopping bug in existing code, the rest of the team might have to work overtime to fit both delays into the time allotted for delays.

    6. Guest says:

      tbh, no.

      Crunch does work; it’s just how it works that is the problem. Doing 6 months of crunch is absolutely abusive and has massively diminished returns, but it “works” for the company, because a diminished return is still a return. If you crack the whip hard enough, you can force extra progress out of people, disgusting as it is.

      And there are going to be some situations where things don’t go as scheduled and someone needs to stay late to get things done. The trick is to prevent that from getting out of hand: prevent abusive, months-long cycles of crunch, and prevent cultures of self-sacrifice and bullying from taking hold. Crunch should never be planned for, but there will be occasions where people working longer for a week or two means hitting a deadline. You’ve got to be really, really damn careful though, because even two weeks of crunch, if the hours are long enough, can absolutely have negative health effects.

      And people should be compensated for the extra time they put in, at a rate above standard, because this is a failure of management; that’s what overtime rates are for.

  13. Arne Gibson says:

    I think you don’t understand the “Crunch should never happen” argument. It’s not about not making mistakes, it’s that it’s simply not the dev team’s responsibility to pay for your mistakes. Don’t make them take the fall, but take the fall yourself.

    1. Moridin says:

      What do you think will happen to the devs if the product fails because of the project managers? Regardless of how the PM “takes the fall” or whatever he does, I’m guessing the devs will suffer consequences from that. Even in the best case “I spent two years working on a product, and it failed because of the project manager” won’t look good on the resume. More realistically, I’m guessing a lot of those devs are going to get laid off. It isn’t as if the project manager is going to be able to give everyone a huge cash bonus as a consolation because the project failed.

      1. Cubic says:

        “Even in the best case “I spent two years working on a product, and it failed because of the project manager” won’t look good on the resume.”

        At one point in the general software industry, it seemed like someone could work a whole career just with huge multiyear projects that eventually were cancelled before release. I think I know a couple such projects (government) that have been thrashing about for at least a decade by now.

        From another viewpoint, it’s the modest contractor’s dream.

        1. Decius says:

          Companies that cancel lots of huge projects after years are sunk into them typically aren’t competitive in the long term.

          1. Cubic says:

            Hey, in the long term, we’re all retired. More seriously, if everyone fails a couple of times in, say, implementing SAP, then you’re no worse off than the competition. And enterprises can drop serious time and money both on internal projects that fail and on external projects that never make it to product. For the latter, I recall one company that — instead of buying the existing leading product, sold on what seemed quite reasonable terms — set up a team of 40-50 people to build their own clone (it failed after a few years of struggle). Poignant.

            If you’re on the inside, then you can see plenty of examples of the same syndrome. Perhaps the EA decree of ‘Frostbite everywhere’ is kind of an example of how this can happen.

    2. Decius says:

      When the choices are “cancel every project and lay off every developer” and “Work crunch time to finish this project, which will fund the next project”, it’s pretty obvious that the choice that is not making the programmers pay is the one that continues to pay the programmers.

      1. Sleeping Dragon says:

        I don’t think it’s as simple as a choice between “cancel all the games” and “make people crunch”. A big part of the argument here is that the publishers have started treating crunch (even extensive) as an obvious thing that is just part of gamedev and as such the entire game development process (particularly in triple A) has been shaped accordingly.

        Here’s one example: barely (if that) competent writing and bland worldbuilding aside, would ME:A have been an immediate failure if it had been delayed until it could get the polish it received in later patches, without crunching? It could be. But why? Is it because releasing a ME game a year later would miss some unique conjunction of the spheres where the game could succeed, or because the marketing has spun the hype up, convincing people that this is the single most important game ever that will change their lives and they need it now?

        And sure, I’ll readily agree that it’s impossible to delay a project indefinitely and keep pumping money into it, but if maintaining your business is based on the calculation of “quality, profitability, not running people out of the industry straight into an early grave: pick two”, then maybe there is something wrong with the basic assumption (remember when AC:Unity sacrificed quality and some people heralded the end of the franchise?). I will point out that many people, Shamus among them, have argued that AAA as it is nowadays may not be viable in the long term, and that maybe the majority of gamedev should scale back financially by an order of magnitude.

  14. Leeward says:

    I’ve worked on a few projects where late problems threatened immovable ship dates. When there are people in a factory who won’t be paid to come in and build stuff if your code doesn’t get fixed, well, let’s just say it’s motivating.

    I don’t know if you had left the industry by the time the agile thing was really ascendant, Shamus, but from my perspective 80% of what Agile/Scrum/Whatever brings to the table is predictability in the schedule. Once a team has been working on a thing for a while, it makes it possible to get reasonably accurate time estimates for work. It also makes it more obvious sooner when something turns out to be harder than expected. You find out that a feature is behind schedule within 2 weeks instead of 2 months.

    Anyway, in my experience with consumer electronics, scope is the first casualty when the amount of work to do something is underestimated. Missing a fall ship date is basically the same thing as completely cancelling the project. Crunchy weekends and late nights are there for specific milestones, but we never had 6 weeks of continuous crunching.

    Unrelatedly, I think your optimism with press coverage and unionization is misplaced. Video games are and will continue to be desirable to work on. Smart developers will read glassdoor reviews and leave dysfunctional companies, but there’s always a fresh supply of (heavily indebted) new college grads eager to spend 8 months of 16 hour days working on a AAA title. The loot box fiasco was a “think of the children” meets “gambling” uproar. Workers being exploited? That’s just Tuesday. At least these workers are making more than the median US income. (Note: This is as close to politics as I’m going to get. I hope it doesn’t cross the line.)

    1. Kylroy says:

      The other bar to programmer unionization is that, as Shamus points out, labor is the biggest expense in a tech company. The classic unionized jobs of yesteryear were mostly industrial, and in those fields labor is a comparatively small cost next to all the machinery and transport and processing. Given how hard it was for those fields to unionize, I am not optimistic about unionization efforts in tech.

      1. Decius says:

        Labor is a big expense, but with unions they could negotiate lower labor costs.

        Few people are going to trust a company that offers unconditional severance pay and reemployment assistance to actually follow through on it without a union contract and union backing.

    2. Cubic says:

      In my experience, scrum/agile is what you do when your customer doesn’t know what they want. But I’ve only used it on smallish groups working on web stuff where you can redeploy on demand and basically stop after any sprint, not with 500 people inexorably rolling towards a deadline.

      “there’s always a fresh supply of (heavily indebted) new college grads eager to spend 8 months of 16 hour days working on a AAA title.”

      Yeah, what the game industry has traditionally relied on. Hey, I’m working in show business, right?

      1. Leeward says:

        Scrum has been bent to fit into all kinds of places it was never designed for. What I really meant though was the set of process fads that descend from Toyota and kanban. There’s also the whole lean thing. Bosses want to know when it will be done, and underlings are bad at estimating that, so a whole industry has built up around being better at estimating how long it will take.

        1. Cubic says:

          “There’s also the whole lean thing. Bosses want to know when it will be done, and underlings are bad at estimating that, so a whole industry has built up around being better at estimating how long it will take.”

          YES. Aargh, been on both sides of this. Very frustrating. Even when you have or are a technical boss, it’s difficult. Partly because it’s not done systematically, I think. Sometimes you just get put on the spot to give ‘an estimate’. It’s usually not even clear whether it’s ready for test or ready for production … To return to agile, I never liked planning poker either myself. Perhaps these modern thingamajigs will do better. (Huh, was I raised by software wolves or something?)

          The non-technical bosses mostly seem to want you to commit to a date (crunch!). Perhaps they gave up. That’s the charitable interpretation.

          Haven’t used Kanban seriously, so I can’t add anything there.

          1. Kathryn says:

            On the first project I ran myself, I had a brand new employee, and the thing that impressed me most about him was that when he estimated he’d get a task done by a certain date, he did it. Fresh out of school. I have senior employees with decades of experience who are totally unable to do that.

            1. Kyle Haight says:

              I think I was actually better at estimating tasks earlier in my career. My guess as to why is that when I was a junior engineer I would estimate how long it would take me to do something, and I would buckle down and do it. As a senior engineer I’m constantly dealing with interruptions, helping other people with their issues, getting yanked into firefighting, etc. In other words, I can estimate how much actual work time it will take me to get something done, but I can’t predict how much wall clock time it will take me to get that much work time.

              1. Decius says:

                That sounds like you’re doing two different jobs. Is the amount of time doing your other job per fortnight consistent?

                1. Guest says:

                  No, they’re doing one job, being a senior staff member, which means while you may not be the boss, you’re mentoring and advising junior staff, who you can’t just tell to buzz off, because you’re also relying on them getting their job done, and being available for consults like that is expected.

                  It’s common in just about every line of work. Hell, even as a junior member of staff, I’m sometimes conscripted to teach the new guy how to do the job, which takes longer and makes both of us less efficient: he doesn’t know enough to take initiative, so when he finishes a task he comes looking for me to ask “What next?”, and while I now have twice as many hands to do a task, only one of us actually knows what we’re doing, so I have to stop what I’m doing to teach him how we do things. Often these are trained, competent people with experience in the field; they just don’t know the specifics of this specific job, which is also a really common problem in engineering and software.

              2. Leeward says:

                Junior engineers also tend to get put on more straightforward tasks. It’s mostly just “implement this API someone else designed for you” type work. And the problems they work on tend to be pretty well understood.

                That said, when I was the lead on a team I probably spent 50% of my time making sure the other people on my team didn’t have to go to meetings.

                1. Kathryn says:

                  This particular junior engineer actually had a more complex task than the senior person I was thinking of (whom I eventually had to kick off my project).

    3. Daimbert says:

      I worked on a product for a while that kinda did Agile (a backlog of small features; when someone finished one, they’d pick another to work on). Since we still had releases, though, it had two major issues, at least for me, when compared to the old-style “Project Plan” approach:

      1) I was new to the team and was never sure if I should pick a feature or not since I didn’t know what I was doing. The last thing I wanted to do was take something that someone could do easily because they knew it and leave them something that they’d have to spend a lot of time learning, because that would be inefficient since both of us would be doing that.

      2) I found it hard to tell what shape we were in, so to be able to know when I needed to work harder or even crunch a little or when I could take it easy. If I take a little longer than expected, is that fine because everyone else should be able to handle everything else, or is it a disaster because no one will free up in time to finish the other important features?

      The nice thing about project plans was that it outlined what had to be done by when and what depended on other things so that if things started to slip you knew right away what impact that would have and how much it mattered. This is critical to reducing crunch time because designers and project managers can react before it becomes a problem or even highlight that it won’t make it. Agile allows for more rapid change since fewer things are committed and you won’t get to the end of the feature before discovering that what you did isn’t great, but it’s not so good for making solid plans. Which is to be expected, since its philosophy is explicitly to avoid doing that.

      1. Leeward says:

        I think the emphasis there should be on “kinda.” I’m not an Agile evangelist by a long stretch (I tend to work on things with firm deadlines where updates are difficult to push to customers) but that part you’re describing in #1 is probably the weakest part of Agile. I’ve even had teams where tickets get assigned during planning. Most teams I’ve seen run on sprints of 2-3 weeks, and reevaluate whether they’re on track or not in between.

        So to address #2 there, instead of having “Feature X will be done on project-day 82” marked on a Gantt chart, the team agrees to finish some part of feature X in the next 2 weeks. Then the 2 weeks run kinda like a little mini-waterfall. If the schedule slips, we find out in 2 weeks. If there’s room to do extra stuff, it gets pulled in from the backlog. The goal is to be able to predict how much you’ll get done, not to get the most possible stuff done. So if a story pushes the sprint over the edge into “more work than we can complete” territory, either it gets cut or something else does.
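        For what it’s worth, the velocity arithmetic behind that kind of prediction can be sketched in a few lines. This is my own illustration with made-up numbers, not anything from a specific Scrum tool:

```python
import math

def sprints_remaining(backlog_points, completed_per_sprint):
    """Project how many more sprints the backlog will take, based on the
    average number of story points the team finished in past sprints."""
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    # Round up: a partially-used sprint still occupies a whole sprint.
    return math.ceil(backlog_points / velocity)

# Hypothetical history: points finished in the last four sprints.
history = [21, 18, 24, 19]              # average velocity = 20.5
print(sprints_remaining(120, history))  # -> 6 sprints
```

        The point isn’t precision; it’s that after a few sprints the average stabilizes, so a slip shows up in the projection within one sprint instead of at the end of the project.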

        That said, I’ve worked at 3 different companies over the past 11 years that claimed to use agile, and none of them have done the same thing.

        1. Cubic says:

          Seems sensible. Even if I’m sort of waterfall in this particular case, it was easy to take it too far and generate false precision and disappointment (that’s why we got scrum/agile, after all). I hope we won’t return to those excesses either.

        2. Anachronist says:

          “I’ve worked at 3 different companies over the past 11 years that claimed to use agile, and none of them have done the same thing.” Of course not. Agile/scrum/kanban/whatever isn’t a prescriptive framework, it’s a toolbox set of principles and guidelines. Each team adapts the tools to do their best.

          In the case of a game, Shamus’s article describes a waterfall mindset, in which the game design and associated requirements must be defined up front and the development process has difficulty adapting to changing requirements. That’s the opposite of agile, which is characterized by few up-front requirements and embracing change.

          With agile, you may have an overall framework and concept defined, and user stories written about what the user wants to do in the game, and after a few sprints of development you decide it’s “good enough” to release with periodic incremental improvements released to customers on subsequent sprints.

          But games aren’t built that way. There’s this mindset that you must release a game complete with all its bells and whistles. I can imagine an untapped market of customers willing to buy a game that works functionally with the expectation that new features and improvements will happen at regular intervals for as long as they own the game or the company keeps developing it. The closest I can think that comes to this is Minecraft, which always has incremental improvements in their release pipeline.

          1. Philadelphus says:

            Paradox Interactive games since Crusader Kings II have also followed this mold. Sort of; they’re complete games at release, but CK II passed seven birthdays this year and is showing no signs of development stopping any time soon.

            1. Matthew Downie says:

              There are signs of it stopping some time soon: “We can’t add much more to the game as it is now. It’s crowded. The map is really big, there’s so much content in there. It wasn’t really built for all of the expansions we made. It’s getting heavy. We might need to take the etch-a-sketch, shake it a little bit, and start over.”

          2. Leeward says:

            Oxygen Not Included has been doing periodic releases of the pre-release game. Kerbal Space Program did something similar.

            1. Nessus says:

              Yeah, what he’s describing is literally exactly what “early access” is. Makes me wonder what he’s been playing and/or on what platform, that he apparently doesn’t know this is already a thing, and has been for over half a decade.

              And not just a thing: a problem. This approach to development and marketing has been a known stalking horse for “games as a service” for a while. And while there have been high profile successes like Minecraft or KSP, those are in the minority.

              TBH wishing for this sort of thing sounds like “my preferred managerial style isn’t right for this product… but I can’t bear to change, so what if we redefine the entire product instead so I don’t have to?” With echoes of that old Andrew Wilson “Used to be the game you bought was the game you got” spin shenanigans.

              1. Leeward says:

                It’s also the model for League of Legends, which is a serious e-sport. LoL isn’t early access, it just changes all the time. For games where single player doesn’t even make sense as a mode, regular updates are perfectly reasonable. I’d argue that successful games made by small studios are in the minority, but I don’t have any numbers on how many of those used the early access model.

                I don’t work on games, but I have worked on hardware projects, which have similar constraints to large game releases. Marketing campaigns have to be launched, and ship dates are completely immutable once set. Iterative development can still work in that environment. It’s just that someone builds a Gantt chart out of the backlog and uses velocity to decide if things are on schedule or not.

                1. Nessus says:

                  Yeah, games where online multiplayer is the core idea, like MOBAs, MMORPGs, arena shooters, and sports games, are a different animal, naturally. Those not only benefit from that sort of dev model, but actively require it as part of their very nature.

                  Single player games don’t, though. There’s been a lot of push from multiple angles over the past ten years or so to grind down and dismantle the user’s control over the games they “own”, and stuff like this is a non-trivial part of it. Anachronist’s last paragraph does read a lot like the creepy old “used to be the game you bought was the game you got” speech in how it frames user expectations as weird reasonless rando notions in an attempt to preempt arguments against the expansion of “games as a service” to games that would be more hurt than helped by it.

                  Yes, THEORETICALLY you absolutely could do that in a way that would only benefit the games in question. In RL practice however, we’ve already seen how companies readily and reliably dive right past that and into anti-customer behavior. Supply does not limit itself to just finding and fulfilling demand if controlling and farming it is possible instead.

              2. Anachronist says:

                Nessus commented: “Makes me wonder what he’s been playing…” Yup, you got me, I am NOT a gamer. Not everyone here is. I dabble in Minecraft and Roblox… and then only if I have time. Most of the games reviewed on this site don’t interest me. I am a technical program manager, though, with a 35-year career spanning military systems, consumer products, online medical journal publishing, and data storage. I know a bit about waterfall and agile, and the frustrations and joys of working neck-deep in both environments.

                The point I was trying to make, which apparently got missed, is that there’s a place for agile development in games with incremental evolving improvements and changes based on customer feedback about what they value.

  15. Thomas says:

    The banking sector and lawyers are two other professions where crunch is very much the norm.

    I wonder if there are any similarities you could draw between the 3?

    1. Kylroy says:

      Banking and law are both cases where you’re scrambling to deduce something, usually before other people do. The crunch is inevitable because you’re trying to make the gap between knowing what you’re looking for and actually finding it as small as possible; there’s a limit to how much you can do before you know what the new regulation or IPO or admissible evidence is, so you focus on working as fast as you can once you *do* know.

      Video games aren’t like that. They could, theoretically, be planned out well in advance, with the lessons of prior game development applied to give appropriate estimates and leeway. But nobody at the AAA level has tried to develop the systems for this, so instead they back themselves into the same corner as law and finance and focus on crunching.

      1. Decius says:

        You could, theoretically, plan for every contingency in finance or law. That would let you simply implement the plan that matches new information when you get it.

        In practice, that ends up being too much work, just like trying to predict which bugs you will write from a design document.

      2. Matthew Downie says:

        Things associated with crunch:
        Strict deadlines – “We have no choice! The trial begins in two days!”
        Good pay, or the hope of it – “Once you’ve proven yourself, you’ll be earning 200K, minimum! Would you mind coming in to work on Saturday?”
        Fun-sounding job – “Sure, you might have to work long hours. But, videogames!”
        Unpredictable schedules – “We need to fix this bug that causes the game to occasionally crash for no obvious reason. How long will that take?”

  16. Kathryn says:

    So, I have many years of experience in engineering project management (typically complex electronic boxes with software and occasionally also moving mechanical parts).

    First of all, the assumption that upper management knows any given project is in trouble is definitely not valid. For more on this, there is an article called “The Thermocline of Truth”. It’s specific to software development, but the general principle applies to hardware as well.

    Second, it’s definitely usually management’s fault when a project is behind. I’d say at least 75% of the time, it’s a combination of unreasonable customers (e.g., introducing new requirements several days AFTER the Critical Design Review) and spineless or unreasonable upper management (letting the customer do that without insisting on a cost assessment first, despite the PM’s objections).

    Third, in cases where it *is* the project manager’s fault, in my experience, it is almost always due to overestimating the Technology Readiness Level. This is a concept that attempts to objectively assess how close the technology in question is to going into production. It’s a scale from 1 to 9, where 1 means an observation that the laws of physics do not prohibit the concept and 9 means the concept is already in successful use. Getting from a 2 or 3 to a 9 is SIGNIFICANTLY more expensive than getting from a 4 or 5 to a 9, in terms of both dollars and hours. So if you think you’ve got a TRL 4 because you’ve got your breadboard working, but you’re really still a TRL 2/3 because you still don’t know enough about the concept (this is a real example. Not one of my projects, just one I watched and laughed at…until I was asked to help save it…), your estimate is going to be WILDLY off.

    I would write more, but it’s taken me all day to find the time just to type this up (each paragraph was written hours apart, so sorry if it’s choppy). Suffice to say, project management is a complex topic.

    1. Leeward says:

      At one company I worked at, we did something similar to your TRL thing, but inverted. Risk for a feature/component/product was the product of 2 things, rated on a scale from 1-4:

      1) How well do we understand it? (doing it now-not at all)
      2) How complicated is it? (very-trivial)

      Rate these things from 1 to 4 (4 is bad), then multiply them. As the project progressed, tolerance for things with higher ratings went down. By the time ship dates were closing in, anything over a 3 was a huge deal.
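      That scheme is simple enough to sketch in code. The names below are my own invention, but the arithmetic is as described: two 1-4 ratings multiplied into a single score, with a tolerance that tightens as the ship date approaches.

```python
def risk_score(understanding, complexity):
    """Both ratings run 1 (good) to 4 (bad); higher product = riskier."""
    return understanding * complexity

def flag_for_review(items, tolerance):
    """Return the names of items whose risk exceeds the current tolerance."""
    return [name for name, (u, c) in items.items()
            if risk_score(u, c) > tolerance]

# Hypothetical feature list: (how well we understand it, how complicated it is)
features = {
    "battery management": (3, 4),  # poorly understood AND very complicated
    "status LED driver": (1, 1),   # doing it now, trivial
}

# Early on, a high tolerance is fine; near ship, anything over 3 is a huge deal.
print(flag_for_review(features, tolerance=3))  # -> ['battery management']
```

      Multiplying rather than adding means an item that is both novel and complicated stands out sharply, which matches the intent of catching the scary stuff first.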

  17. ngthagg says:

    Shoo7 Guy isn’t so bad. It’s much better than the reboot that will, out of necessity, get labeled Shoot Guy (2019).

    1. Decius says:

      Sh00t Guy best franchise entry.

    2. Chad Miller says:

      But who could pass up the opportunity to release Sh?t Guy?

  18. Agammamon says:

    Regarding ‘crunch time bonuses’ – these people might be on salary, but they won’t be ‘exempt’, meaning they’re not exempt from overtime pay. Crunch then will, by law, require paying time-and-a-half (and double, or even triple time for exceeding hours-worked thresholds and on some holidays – and these pay multipliers are additive – in some states). So everyone except management personnel (who are generally ‘salaried but exempt’) is getting a bonus built in.

    It certainly wouldn’t hurt to add a little extra on top, but they’re not putting in 100 hours for the same pay as a 50 hour week – or even at the same *pay rate* as a 50 hour salaried week.
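    As an illustration only (the actual multipliers, thresholds, and exemption rules vary by state and by role, so the tiers below are made up), a tiered overtime calculation might look like:

```python
def weekly_pay(hours, base_rate):
    """Tiered overtime: hours 1-40 at 1x, 41-60 at 1.5x, 61+ at 2x.
    These tiers are hypothetical; real rules vary by jurisdiction."""
    tiers = [(40, 1.0), (20, 1.5), (float("inf"), 2.0)]
    pay = 0.0
    remaining = hours
    for tier_hours, multiplier in tiers:
        worked = min(remaining, tier_hours)
        pay += worked * base_rate * multiplier
        remaining -= worked
        if remaining <= 0:
            break
    return pay

base = 30.0  # dollars/hour, hypothetical
print(weekly_pay(50, base))   # -> 1650.0 (40*30 + 10*45)
print(weekly_pay(100, base))  # -> 4500.0, well over 2x the 50-hour week
```

    Under tiers like these, a non-exempt worker’s 100-hour week costs the company far more than twice a 50-hour week, which is exactly the built-in bonus being described.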

    1. Decius says:

      Computer programmers qualify for the professional exemption.

    2. Leeward says:

      Oh how I wish this were true.

  19. Zak McKracken says:

    Recently came across this gem of an article (about a proper research paper):

    The more passionate you are about your job, the more likely you are to be exploited, and the more likely others are to think that it’s okay to exploit you.

    Most game developers are quite passionate about their job, I think …

    1. Guest says:

      It’s the truth.

      Plus, if you’re new, and feel you have something to prove, well, that enthusiasm is also exploitable.

      1. Richard says:

        Hence why it’s a great management technique to fire your entire team after every release!

        Trebles all round!
