When Limited Hardware Led To Limitless Creativity

Nothing breeds creativity better than adversity. When you have to make something within a limited set of parameters, it forces you to get creative in order to make it work. Sometimes those limitations even inform the entire direction of a project. When people set out to make movies in the days before computer-assisted effects, they had to plan those films as much around what they couldn’t do as what they could. Not having the technology to portray a realistic-looking shark in Jaws is a large part of the reason you seldom see the creature, and having the shark be a mostly unseen threat is exactly why that movie works so well (and still holds up). To that point, most longtime movie fans agree that something has definitely been lost in the wholesale shift to computer graphics in movies. That isn’t to say that nothing compelling has ever been done with CG, or that every CG-heavy movie has been cold and lifeless. But having almost no limit to what can be put onscreen anymore has definitely sucked a lot of the creativity out of the average Hollywood blockbuster, and that’s a big part of why the term wasn’t automatically a reason to scoff back when it applied to movies like Indiana Jones, or even effects-driven movies like Back to the Future and Ghostbusters, all of which were created with relatively little help from computers (at least compared to how computers would be used if those movies were made today).

It definitely makes me wonder whether a huge part of the reason so many modern games seem to be lacking in true innovation and originality is that today’s technology is almost limitless in what it can accomplish in a video game. There is very little a game designer can dream up that he or she couldn’t achieve with enough time and money. Again, as with movies, that isn’t to say that access to all of that amazing tech is 100% a bad thing. It’s hard to deny what a great time we live in as gamers when games like Metal Gear Solid V, The Witcher 3, Batman: Arkham Knight, and Grand Theft Auto V can exist (even if only from a technical standpoint). But with no tangible limits on what a game can do or be, can imagination still thrive? What is left to push developers to think outside the box when the box is cozy and comfortable and doesn’t actually prevent them from doing anything?

The original Metal Gear was initially conceived as a standard military-based action shooter, and there is a very good chance it would’ve ended up being just that if it weren’t for the hardware it was being developed on. The MSX2 was a Japan-only computer whose hardware limited both the number of enemies and the number of bullets that could be onscreen at once, which made a fast-paced, gun-heavy action game very difficult. The computer also struggled with smooth scrolling, meaning that games typically had to take place one static screen at a time. With all of these limitations in mind, director Hideo Kojima decided that a slower-paced, stealth-based game would be a better fit: the scarcity of onscreen bullets would push players to be stealthy rather than rely on guns, and moving through the game slowly suited a single-screen design. It’s not a great leap to assume that Metal Gear would’ve been just another Contra-style game had those limitations not forced Kojima to get creative with the design, and it’s also not hard to imagine that it would’ve been lost in the shuffle of the many, many shooters of the era and never gone on to be such a unique and innovative series. How many other standard 2D action games of the late ’80s lasted much beyond one or two installments, let alone for decades after? Beyond that, think about the state of third-person shooters when Metal Gear Solid launched in 1998 – the genre definitely had a long way to go. Would Metal Gear Solid, the rough-around-the-edges third-person shooter, have been nearly as compelling or groundbreaking as Metal Gear Solid, the well-polished tactical espionage stealth game, ended up being? Would all of those brilliant boss fights have been so brilliant if they were just your typical “send a hail of gunfire at the boss until his health runs out” encounters? Would infiltrating Shadow Moses have been so exciting if Snake had just rushed in there in a literal blaze of glory?
I highly doubt it. And all of that might very well have been the type of game Metal Gear Solid ended up being had Kojima not basically been forced to make it a stealth-based series all those years ago.

Another time when developers were forced to get creative was in the early days of 3D. The consoles that were around at the start of 3D gaming – PlayStation, Saturn, and N64 – generally struggled to render larger 3D areas without sacrificing the visibility of the more distant parts of the world. This would often result in objects popping up out of nowhere as you got closer to them – not-so-affectionately referred to as “pop-up.” Many developers began adding a sort of “fog” that would cover the more distant areas of the world, so that as you approached those areas the world faded in rather than having things jarringly pop into existence. The team behind PlayStation horror game Silent Hill decided to embrace that fog and make it a living element of their game. Resident Evil relied on 2D pre-rendered backgrounds with fixed camera angles in order to give players a sense of unease, as they weren’t able to see what lurked around the next corner. That is much more difficult to do when the game takes place in a fully 3D world, as Silent Hill did. So in an effort both to mask the PS1 hardware’s limited draw distance and to elicit that uneasy feeling of not knowing what dangers lie ahead, the fictional town of Silent Hill was conceived as being blanketed by a thick fog that enveloped everything beyond the 20 or so feet in front of you. Instead of a game having a “fog” for no legitimate reason other than to hide pop-up, Silent Hill had a fog because the actual setting had a fog, and that fog became an iconic part of the series going forward, even after it moved to hardware that could handle large 3D environments. At night, the environments were bathed in pitch blackness, with only your character’s chest-mounted flashlight as a light source. This obviously provided an extremely creepy atmosphere (and a rather novel one for the time, as it was one of the first 3D games to do it), but again, it solved the problem of pop-up in a way the player never had to question. Had Silent Hill debuted on a platform that could’ve handled its large outdoor environments with ease, leaving the developers no technical limitations to mask, would they have come up with the concept of a foggy town at all? And how much less scary is a town where you can see clearly for blocks and blocks versus one where a deadly creature can be close enough for you to hear it snarling but hidden by the thick, ominous fog all around? If you ask me, the thing most modern horror games get so wrong – and why so few of them are as scary as those older ones – is that you can see everything in crystal-clear Unreal-powered detail. I don’t care how scary a monster you design, it’ll never be as scary as the monster you can’t see, or only catch glimpses of.
So the answer to that, sadly, has been turning most horror games into scary-ish action games, which completely misses the point. But that’s a different rant for a different day…
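For the curious, the masking trick those developers used can be sketched in a few lines. This is a minimal, illustrative version of classic linear distance fog (the function names and the start/end distances are my own, not anything from Silent Hill’s actual code): each object’s color is blended toward the fog color based on its distance from the camera, so anything past the far distance is completely hidden and nothing visibly pops in as the player approaches.

```python
def fog_factor(distance, fog_start=5.0, fog_end=20.0):
    """Linear fog: 1.0 = fully visible, 0.0 = fully swallowed by fog.

    fog_start/fog_end are illustrative values in world units, roughly
    matching the idea of seeing only "20 or so feet" ahead.
    """
    if distance <= fog_start:
        return 1.0
    if distance >= fog_end:
        return 0.0
    return (fog_end - distance) / (fog_end - fog_start)

def apply_fog(object_color, fog_color, distance):
    """Blend an object's RGB color toward the fog color by distance."""
    f = fog_factor(distance)
    return tuple(f * o + (1.0 - f) * g
                 for o, g in zip(object_color, fog_color))

# An enemy 30 units away blends entirely into the gray fog -- the
# renderer never has to draw it, and the player never sees it pop in.
print(apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 30.0))
```

The elegance of the trick is that the same blend both hides the hardware’s short draw distance and creates the atmosphere: the engine only has to render what’s inside the fog radius, and the player reads that radius as weather rather than as a technical limitation.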

These are just two of countless examples of how limitations shaped the evolution of a game and a franchise in really interesting and unique ways, and how those two series would’ve been much different – and probably much more generic and forgettable – if they hadn’t had to get creative in working around technical barriers. The fact that a general decline in the overall innovation of video games has coincided with a significant rise in the processing power of their hardware can’t be a coincidence. And the self-imposed “limitations” of scaled-down indie game development aren’t quite the same, since they are chosen rather than legitimate barriers developers are forced to work around. I try my hardest not to be a cynical person, and especially not a cynical gamer, but I worry that the greatest innovations in gaming are already behind us and that we aren’t going to see anything truly groundbreaking anymore outside of the expected evolution of graphics, sound, AI, and so on. It’s great that developers have limitless power at their fingertips, but at its best, creativity needs to be challenged in order to thrive. With very little left in the way of challenges for game makers, how creative are they going to get from now on?