Not everybody agrees on what the first video game was. To some, it’s Bertie the Brain, a large machine built in 1950 that let visitors play tic-tac-toe against an artificial intelligence. Most point to Tennis for Two, created in 1958 as an entertaining display among the duller exhibits of a lab’s public exhibition and played on an oscilloscope. Others wouldn’t classify these as video games at all, but as mere curiosities, arguing that the history of video games really began when the first commercial products appeared.
These 1950s-era ‘games’ were not meant to be profitable, but arcade video games had to be. They borrowed the coin-operated mechanism of pinball machines, as well as the limited play time: Pong ends when a player reaches 11 points, and Pac-Man only lasts until the player loses their last life. The “Game Over” phrase itself seems to have been borrowed from pinball machines too, first appearing in video games in 1975’s Gun Fight, a multiplayer shooting game.
The game was designed by Tomohiro Nishikado, who would later create Space Invaders, which used the same “Game Over” screen at the end of a session. It’s likely that the success of Space Invaders played a role in spreading the now-iconic phrase to most arcade video games.
When the first home consoles arrived on the market, some of them naturally featured ports of popular arcade games. Most of them featured a Pong clone, for example, and Space Invaders made it to several systems. From bars and bowling alleys, “Game Over” made its way to living rooms.
On home consoles, games didn’t need the same time-limited gameplay loops as arcade machines, and there was room for new genres to emerge. Still, the need to make players start over regularly remained: customers were now paying far more than a quarter, and they needed to feel they were getting their money’s worth. It would have been unacceptable for a game to be beaten in one sitting, yet the cartridges of the time couldn’t hold enough content for unending entertainment. A steep difficulty curve was the solution. Intense trial and error was often necessary to beat a game, and defeat came with a punishing “Game Over” before the player was whisked back to their starting point.
Over the years, games evolved and started offering more complex experiences. Adventures began to last dozens of hours, with progress saved between sessions. “Game Over” screens remained, though they were often less punishing: a death in The Legend of Zelda doesn’t send you right back to the beginning, and in Super Mario Bros., you are taken back to the start of the current world, not to the first level.
Still, while they weren’t all exactly the same, they remained punishing for two reasons:
1. They interrupt play: the phrase takes over the screen, leaving the player unable to interact for a moment before they can choose to play again, continue, or give up.
2. They set you back: the player loses some of their progress.
After all these years, is “Game Over” still relevant? Is it really useful, or just a relic of the past? Would developers even include it if not for a tradition born of technological limitations that gaming has long since shrugged off?
In the indie platformer Fez, missing a jump or getting caught in an explosion doesn’t punish you. The game makes it obvious that you screwed up (Gomez, your avatar, wiggles with a look of terror and regret as he falls), but it quickly puts you back at your last safe spot. One argument in favor of game over screens is that they create tension – they are an event to fear. Nobody wants to stop playing (even for 10 seconds) because they screwed up, so they are motivated to avoid mistakes. Yet well-designed die-and-retry games show that it works better not to break the flow, and that balanced difficulty removes the need for an added threat. One element of a “Game Over” (the setback) can be kept, but the interruption of play is rarely needed.
Another argument in favor of “Game Over” is that it gives value to some collectibles and save points: there’s nothing like finding a 1-Up mushroom when Mario’s lives are running short, and reaching a checkpoint feels like a haven. But why punish players for their mistakes when you can reward them for their successes? Lives can be taken out of the game to be replaced by other prizes.
Any collectible relevant to the progression can play a similar role. In Fez, again, the scattered cube bits are essential to make it further into the world. Finding out that one is nearby is a satisfying moment for all players, not only those running short on lives. Getting one means you’re progressing, not that you’re less likely to be punished. And if a game designer really wants players to avoid deaths, there are ways to make it work. In Crash Bandicoot 4: It’s About Time, each level has a gem which can only be obtained by beating it with fewer than three deaths, as well as a relic for beating it with none. Many players choose to face these optional challenges, but the main experience remains smooth.
Games don’t need to force much difficulty into the player’s experience. In Celeste, those who seek a particularly unforgiving challenge can take on the golden strawberry, a collectible which has to be carried to the end of the level without a death in order to count. A failure carries the same weight as many “Game Over” screens, but it is optional. Players have always found ways to add stakes of their own accord, even when the challenge isn’t offered by the game. No-death runs can be a built-in option (like Shovel Knight’s optional checkpoints) or a self-imposed challenge.
“Game Over” can still be relevant. Roguelikes are designed around repetition from the start, and games like Dark Souls are supposed to feel hard – if you’re not dying, you’re not playing them properly. But even in those contexts, these screens tend to interrupt play unnecessarily. Do you really need to stare at a “You Died” screen – a “Game Over” screen by any other name – before resuming play?
Slowly but surely, nostalgia is being replaced by practicality. A cold, hard “Game Over” still makes perfect sense when it’s used as it was at its origin: arcade games still need coins to flow, and some free-to-play games like Candy Crush use similar tactics. But the industry is evolving, and the humble “Game Over” is disappearing. When good design is the goal, the portrayal of defeat is more carefully thought out. And when profit is the goal, new predatory mechanics are setting new standards. Loot boxes, battle passes, gachas, time-limited events, carefully crafted mechanics designed to keep you coming back… The industry’s new strategy is FOMO. Where we used to pay to continue playing, we now pay to complete a virtual collection, or to get a quick shot of dopamine.
The pursuit of profit has always played a major role in how games are conceived: “Game Over” interrupted play to make players pay, then to stretch shorter games into longer experiences. In the ’00s, “Game Over” started becoming less common as DLC appeared. Today, many major releases are ‘Games as a Service’, continually evolving to keep players spending money on them. How different could it all be without the need for money? Games are designed for constant, compulsive replayability, which is good for attracting customers. Could the technology have been used, from the early days, for more artistic, less gamified creations like those we see now? Maybe video games would tend to be shorter, more compact and powerful experiences. Maybe they would rely far less on the simple ‘just five more minutes’ loop. Maybe they wouldn’t be as good?