
The Rise of POMG, Part 1: It Takes a Village…

No one on their deathbed ever said, “I wish I had spent more time alone with my computer!”

— Dani Bunten Berry

If you ever want to feel old, just talk to the younger generation.

A few years ago now, I met the kids of a good friend of mine for the very first time: four boys between the ages of four and twelve, all more or less crazy about videogames. As someone who spends a lot of his time and earns a lot of his income writing about games, I arrived at their house with high expectations attached.

Alas, I’m afraid I proved a bit of a disappointment to them. The distance between the musty old games that I knew and the shiny modern ones that they played was just too far to bridge; shared frames of reference were tough to come up with. This was more or less what I had anticipated, given how painfully limited I already knew my knowledge of modern gaming to be. But one thing did genuinely surprise me: it was tough for these youngsters to wrap their heads around the very notion of a game that you played to completion by yourself and then put on the shelf, much as you might a book. The games they knew, from Roblox to Fortnite, were all social affairs that you played online with friends or strangers, that ended only when you got sick of them or your peer group moved on to something else. Games that you played alone, without at the very least leader boards and achievements on-hand to measure yourself against others, were utterly alien to them. It was quite a reality check for me.

So, I immediately started to wonder how we had gotten to this point — a point not necessarily better or worse than the sort of gaming that I knew growing up and am still most comfortable with, just very different. This series of articles should serve as the beginning of an answer to that complicated question. Their primary focus is not so much how computer games went multiplayer, nor even how they first went online; those things are in some ways the easy, obvious parts of the equation. It’s rather how games did those things persistently — i.e., permanently, so that each session became part of a larger meta-game, if you will, embedded in a virtual community. Or perhaps the virtual community is embedded in the game. It all depends on how you look at it, and which precise game you happen to be talking about. Whichever way, it has left folks like me, whose natural tendency is still to read games like books with distinct beginnings, middles, and ends, anachronistic iconoclasts in the eyes of the youthful mainstream.

Which, I hasten to add, is perfectly okay; I’ve always found the ditch more fun than the middle of the road anyway. Still, sometimes it’s good to know how the other 90 percent lives, especially if you claim to be a gaming historian…



“Persistent online multiplayer gaming” (POMG, shall we say?) is a mouthful to be sure, but it will have to do for lack of a better descriptor of the phenomenon that has created such a divide between myself and my friend’s children.  It’s actually older than you might expect, having first come to be in the 1970s on PLATO, a non-profit computer network run out of the University of Illinois but encompassing several other American educational institutions as well. Much has been written about this pioneering network, which uncannily presaged in so many of its particulars what the Internet would become for the world writ large two decades later. (I recommend Brian Dear’s The Friendly Orange Glow for a book-length treatment.) It should suffice for our purposes today to say that PLATO became host to, among other online communities of interest, an extraordinarily vibrant gaming culture. Thanks to the fact that PLATO games lived on a multi-user network rather than standalone single-user personal computers, they could do stuff that most gamers who were not lucky enough to be affiliated with a PLATO-connected university would have to wait many more years to experience.

The first recognizable single-player CRPGs were born on PLATO in the mid-1970s, inspired by the revolutionary new tabletop game known as Dungeons & Dragons. They were followed by the first multiplayer ones in amazingly short order. Already in 1975’s Moria,[1] players met up with their peers online to chat, brag, and sell or trade loot to one another. When they were ready to venture forth to kill monsters, they could do so in groups of up to ten, pooling their resources and sharing the rewards. A slightly later PLATO game called Oubliette implemented the same basic concept in an even more sophisticated way. The degree of persistence of these games was limited by a lack of storage capacity — the only data that was saved between sessions were the statistics and inventory of each player’s character, with the rest of the environment being generated randomly each time out — but they were miles ahead of anything available for the early personal computers that were beginning to appear at the same time. Indeed, Wizardry, the game that cemented the CRPG’s status as a staple genre on personal computers in 1981, was in many ways simply a scaled-down version of Oubliette, with the multiplayer party replaced by a party of characters that were all controlled by the same player.

Chester Bolingbroke, better known online as The CRPG Addict, plays Moria. Note the “Group Members” field at bottom right. Chester is alone here, but he could be adventuring with up to nine others.

A more comprehensive sort of persistence arrived with the first Multi-User Dungeon (MUD), developed by Roy Trubshaw and Richard Bartle, two students at the University of Essex in Britain, and first deployed there in a nascent form in late 1978 or 1979. A MUD borrowed the text-only interface and presentation of Will Crowther and Don Woods’s seminal game of Adventure, but the world it presented was a shared, fully persistent one between its periodic resets to a virgin state, chockablock with other real humans to interact with and perhaps fight. “The Land,” as Bartle dubbed his game’s environs, expanded to more than 600 rooms by the early 1980s, even as its ideas and a good portion of its code were used to set up other, similar environments at many more universities.

In the meanwhile, the first commercial online services were starting up in the United States. By 1984, you could, for the price of a substantial hourly fee, dial into the big mainframes of services like CompuServe using your home computer. Once logged in there, you could socialize, shop, bank, make travel reservations, read newspapers, and do much else that most people wouldn’t begin to do online until more than a decade later — including gaming. For example, CompuServe offered MegaWars, a persistent grand-strategy game of galactic conquest whose campaigns took groups of up to 100 players four to six weeks to complete. (Woe betide the ones who couldn’t log in for some reason of an evening in the midst of that marathon!) You could also find various MUDs, as well as Island of Kesmai, a multiplayer CRPG boasting most of the same features as PLATO’s Oubliette in a genuinely persistent world rather than a perpetually regenerated one. CompuServe’s competitor GEnie had Air Warrior, a multiplayer flight simulator with bitmapped 3D graphics and sound effects to rival any of the contemporaneous single-player simulators on personal computers. For the price of $11 per hour, you could participate in grand Air Warrior campaigns that lasted three weeks each and involved hundreds of other subscribers, organizing and flying bombing raids and defending against the enemy’s attacks on their own lines. In 1991, America Online put up Neverwinter Nights,[2] which did for the “Gold Box” line of licensed Dungeons & Dragons CRPGs what MUD had done for Adventure and Air Warrior had done for flight simulators, transporting the single-player game into a persistent multiplayer space.

All of this stuff was more or less incredible in the context of the times. At the same time, though, we mustn’t forget that it was strictly the purview of a privileged elite, made up of those with login credentials for institutional-computing networks or money in their pockets to pay fairly exorbitant hourly fees to feed their gaming habits. So, I’d like to back up now and tell a different story of POMG — one with more of a populist thrust, focusing on what was actually attainable by the majority of people out there, the ones who neither had access to a university’s mainframe nor could afford to spend hundreds of dollars per month on a hobby. Rest assured that the two narratives will meet before all is said and done.



POMG came to everyday digital gaming in the reverse order of the words that make up the acronym: first games were multiplayer, then they went online, and then these online games became persistent. Let’s try to unpack how that happened.

From the very start, many digital games were multiplayer, optionally if not unavoidably so. Spacewar!, the program generally considered the first fully developed graphical videogame, was exclusively multiplayer from its inception in the early 1960s. Ditto Pong, the game that launched Atari a decade later, and with it a slow-building popular craze for electronic games, first in public arcades and later in living rooms. Multiplayer here was not so much down to design intention as technological affordances. Pong was an elaborate analog state machine rather than a full-blown digital computer, relying on decentralized resistors and potentiometers and the like to do its “thinking.” It was more than hard enough just to get a couple of paddles and a ball moving around on the screen of a gadget like this; a computerized opponent was a bridge too far.

Very quickly, however, programmable microprocessors entered the field, changing everyone’s cost-benefit analyses. Building dual controls into an arcade cabinet was expensive, and the end result tended to take up a lot of space. The designers of arcade classics like Asteroids and Galaxian soon realized that they could replace the complications of a human opponent with hordes of computer-controlled enemies, flying in rudimentary, partially randomized patterns. Bulky multiplayer machines thus became rarer and rarer in arcades, replaced by slimmer, more standardized single-player cabinets. After all, if you wanted to compete with your friends in such games, there was still a way to do so: you could each play a round against the computerized enemies and compare your scores afterward.

While all of this was taking shape, the Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — had ushered in the personal-computing era. The games these early microcomputers played were sometimes ports or clones of popular arcade hits, but just as often they were more cerebral, conceptually ambitious affairs where reflexes didn’t play as big — or any — role: flight simulations, adventure games, war and other strategy games. The last were often designed to be played optimally or even exclusively against another human, largely for the same reason Pong had been made that way: artificial intelligence was a hard thing to implement under any circumstances on an 8-bit computer with as little as 16 K of memory, and it only got harder when you were asking said artificial intelligence to formulate a strategy for Operation Barbarossa rather than to move a tennis racket around in front of a bouncing ball. Many strategy-game designers in these early days saw multiplayer options almost as a necessary evil, a stopgap until the computer could fully replace the human player, thus alleviating that eternal problem of the war-gaming hobby on the tabletop: the difficulty of finding other people in one’s neighborhood who were able and willing to play such weighty, complex games.

At least one designer, however, saw multiplayer as a positive advantage rather than a kludge — in fact, as the way the games of the future by all rights ought to be. “When I was a kid, the only times my family spent together that weren’t totally dysfunctional were when we were playing games,” remembered Dani Bunten Berry. From the beginning of her design career in 1979, when she made an auction game called Wheeler Dealers for the Apple II,[3] multiplayer was her priority. In fact, she was willing to go to extreme lengths to make it possible; in addition to a cassette tape containing the software, Wheeler Dealers shipped with a custom-made hardware add-on, the only method she could come up with to let four players bid at once. Such experiments culminated in M.U.L.E., one of the first four games ever published by Electronic Arts, a deeply, determinedly social game of economics and, yes, auctions for Atari and Commodore personal computers that many people, myself included, still consider her unimpeachable masterpiece.

A M.U.L.E. auction in progress.

And yet it was Seven Cities of Gold, her second game for Electronic Arts, that became a big hit. Ironically, it was also the first she had ever made with no multiplayer option whatsoever. She was learning to her chagrin that games meant to be played together on a single personal computer were a hard sell; such machines were typically found in offices and bedrooms, places where people went to isolate themselves, not in living rooms or other spaces where they went to be together. She decided to try another tack, thereby injecting the “online” part of POMG into our discussion.

In 1988, Electronic Arts published Berry’s Modem Wars, a game that seems almost eerily prescient in retrospect, anticipating the ludic zeitgeist of more than a decade later with remarkable accuracy. It was a strategy game played in real time (although not quite a real-time strategy of the resource-gathering and army-building stripe that would later be invented by Dune II and popularized by Warcraft and Command & Conquer). And it was intended to be played online against another human sitting at another computer, connected to yours by the gossamer thread of a peer-to-peer modem hookup over an ordinary telephone line. Like most of Berry’s games, it didn’t sell all that well, being a little too far out in front of the state of her nation’s telecommunications infrastructure.

Nevertheless, she continued to push her agenda of computer games as ways of being entertained together rather than alone over the years that followed. She never did achieve the breakout hit she craved, but she inspired countless other designers with her passion. She died far too young in 1998, just as the world was on the cusp of embracing her vision on a scale that even she could scarcely have imagined. “It is no exaggeration to characterize her as the world’s foremost authority on multiplayer computer games,” said Brian Moriarty when he presented Dani Bunten Berry with the first ever Game Developers Conference Lifetime Achievement Award two months before her death. “Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes [representing her eleven published games] the prototypes of the defining art form of the 21st century.” Let this article and the ones that will follow it, written well into said century, serve as partial proof of the truth of his words.

Danielle Bunten Berry, 1949-1998.

For by the time Moriarty spoke them, other designers had been following the trails she had blazed for quite some time, often with much more commercial success. A good early example is Populous, Peter Molyneux’s strategy game in real time (although, again, not quite a real-time strategy) that was for most of its development cycle strictly a peer-to-peer online multiplayer game, its offline single-player mode being added only during the last few months. An even better, slightly later one is DOOM, John Carmack and John Romero’s game of first-person 3D mayhem, whose star attraction, even more so than its sadistic single-player levels, was the “deathmatch” over a local-area network. Granted, these testosterone-fueled, relentlessly zero-sum contests weren’t quite the same as what Berry was envisioning for gaming’s multiplayer future near the end of her life; she wished passionately for games with a “people orientation,” directed toward “the more mainstream, casual players who are currently coming into the PC market.” Still, as the saying goes, you have to start somewhere.

But there is once more a caveat to state here about access, or rather the lack thereof. Being built for local networks only — i.e., networks that lived entirely within a single building or at most a small complex of them — DOOM deathmatches were out of reach on a day-to-day basis for those who didn’t happen to be students or employees at institutions with well-developed data-processing departments and permissive or oblivious authority figures. Outside of those ivory towers, this was the era of the “LAN party,” when groups of gamers would all lug their computers over to someone’s house, wire them together, and go at it over the course of a day or a weekend. These occasions went on to become treasured memories for many of their participants, but they achieved that status precisely because they were so sporadic and therefore special.

And yet DOOM‘s rise corresponded with the transformation of the Internet from an esoteric tool for the technological elite to the most flexible medium of communication ever placed at the disposal of the great unwashed, thanks to a little invention out of Switzerland called the World Wide Web. What if there was a way to move DOOM and other games like it from a local network onto this one, the mother of all wide-area networks? Instead of deathmatching only with your buddy in the next cubicle, you would be able to play against somebody on another continent if you liked. Now wouldn’t that be cool?

The problem was that local-area networks ran over a protocol known as IPX, while the Internet ran on a completely different one called TCP/IP. Whoever could bridge that gap in a reasonably reliable, user-friendly way stood to become a hero to gamers all over the world.
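For the technically curious, the core trick can be sketched in a few lines of C. To be clear, this is not TCPSetup’s or Kali’s actual code or packet format, neither of which is reproduced here; the structure layout, names, and numbers below are purely my own illustration of the general idea of carrying a LAN game’s IPX traffic inside Internet-friendly UDP datagrams.

```c
/* A minimal, hypothetical sketch of the "bridging" idea: an IPX frame
   captured on the LAN side is simply carried as the payload of a UDP
   datagram addressed to the remote player's Internet host. This is not
   Kali's or TCPSetup's actual code or packet format, only the concept. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified IPX header; the field names follow the real protocol, but the
   exact layout here is illustrative rather than wire-accurate. */
typedef struct {
    uint16_t checksum;
    uint16_t length;
    uint8_t  transport_control;
    uint8_t  packet_type;
    uint8_t  dest_network[4], dest_node[6];
    uint16_t dest_socket;
    uint8_t  src_network[4], src_node[6];
    uint16_t src_socket;
} IPXHeader;

/* Wrap an IPX frame (header plus game data) into a buffer that would be
   handed to sendto() as a single UDP datagram bound for the peer's IP. */
static size_t encapsulate(const IPXHeader *hdr, const uint8_t *data,
                          size_t data_len, uint8_t *out, size_t out_cap)
{
    size_t total = sizeof *hdr + data_len;
    if (total > out_cap) return 0;
    memcpy(out, hdr, sizeof *hdr);             /* IPX header travels verbatim */
    memcpy(out + sizeof *hdr, data, data_len); /* followed by the game's data */
    return total;
}

int main(void)
{
    IPXHeader hdr = {0};
    uint8_t payload[] = "player position update";
    uint8_t datagram[1500];                    /* one Ethernet-sized MTU      */

    size_t n = encapsulate(&hdr, payload, sizeof payload,
                           datagram, sizeof datagram);
    printf("UDP payload of %zu bytes ready to cross the Internet\n", n);
    return 0;
}
```

The real work, of course, lay in doing this translation transparently and reliably in both directions while a twitch game hammered the network, which is exactly where TCPSetup tended to fall down.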



Jay Cotton discovered DOOM in the same way as many another data-processing professional: when it brought down his network. He was employed at the University of Georgia at the time, and was assigned to figure out why the university’s network kept buckling under unprecedented amounts of spurious traffic. He tracked the cause down to DOOM, the game that half the students on campus seemed to be playing more than half the time. More specifically, the problem was caused by a bug, which was patched out of existence by John Carmack as soon as he was informed. Problem solved. But Cotton stuck around to play, the warden seduced by the inmates of the asylum.

He was soon so much better at the game than anyone else on campus that he was getting a bit bored. Looking for worthier opponents, he stumbled across a program called TCPSetup, written by one Jake Page, which was designed to translate IPX packets into TCP/IP ones and vice versa on the fly, “tricking” DOOM into communicating across the vast Internet. It was cumbersome to use and extremely unreliable, but on a good day it would let you play DOOM over the Internet for brief periods of time at least, an amazing feat by any standard. Cotton would meet other players on an Internet chat channel dedicated to the game, they’d exchange IP addresses, and then they’d have at it — or try to, depending on the whims of the Technology Gods that day.

On August 22, 1994, Cotton received an email from a fellow out of the University of Illinois — yes, PLATO’s old home — whom he’d met and played in this way (and beaten, he was always careful to add). His name was Scott Coleman. “I have some ideas for hacking TCPSetup to make it a little easier. Care to do some testing later?” Coleman wrote. “I’ve already emailed Jake [Page] on this, but he hasn’t responded (might be on vacation or something). If he approves, I’m hoping some of these ideas might make it into the next release of TCPSetup. In the meantime, I want to do some experimenting to see what’s feasible.”

Jake Page never did respond to their queries, so Cotton and Coleman just kept beavering away on their own, eventually rewriting TCPSetup entirely to create iDOOM, a more reliable and far less fiddly implementation of the same concept, with support for three- or four-player deathmatches instead of just one-on-one duels. It took off like a rocket; the pair were bombarded with feature requests, most notably to make iDOOM work with other IPX-only games as well. In January of 1995, they added support for Heretic, one of the most popular of the first wave of so-called “DOOM clones.” They changed their program’s name to “iFrag” to reflect the fact that it was now about more than just DOOM.

Having come this far, Cotton and Coleman soon made the conceptual leap that would transform their software from a useful tool into, for a time, a way of life for many, many thousands of gamers. Why not add support for more games, they asked themselves, not in a bespoke way as they had been doing to date, but in a more sustainable one, by turning their program into a general-purpose IPX-to-TCP/IP bridge, suitable for use with the dozens of other multiplayer games out there that supported only local-area networks out of the box? And why not make their tool into a community while they were at it, by adding an integrated chat service? In addition to its other functions, the program could offer a list of “servers” hosting games, which you could join at the click of a button; no more trolling for opponents elsewhere on the Internet, then laboriously exchanging IP addresses and meeting times and hoping the other guy followed through. This would be instant-gratification online gaming. It would also provide a foretaste at least of persistent online multiplayer gaming; as people won matches, they would become known commodities in the community, setting up a meta-game, a sporting culture of heroes and zeroes where folks kept track of win-loss records and where everybody clamored to hear the results when two big wheels faced off against one another.

Cotton and Coleman renamed their software for the third time in less than nine months, calling it Kali, a name suggested by Coleman’s Indian-American girlfriend (later his wife). “The Kali avatar is usually depicted with swords in her hands and a necklace of skulls from those she has killed,” says Coleman, “which seemed appropriate for a deathmatch game.” Largely at the behest of Cotton, always the more commercially-minded of the pair, they decided to make Kali shareware, just like DOOM itself: multiplayer sessions would be limited to fifteen minutes at a time until you coughed up a $20 registration fee. Cotton went through the logistics of setting up and running a business in Georgia while Coleman did most of the coding in Illinois. (Rather astonishingly, Cotton and Coleman had still never met one another face to face in 2013, when gaming historian David L. Craddock conducted an interview with them that has been an invaluable source of quotes and information for this article.)

Kali certainly wasn’t the only solution in this space; a commercial service called DWANGO had existed since December of 1994, with the direct backing of John Carmack and John Romero, whose company id Software collected 20 percent of its revenue in return for the endorsement. But DWANGO ran over old-fashioned direct-dial-up connections rather than the Internet, meaning you had to pay long-distance charges to use it if you weren’t lucky enough to live close to one of its host computers. On top of that, it charged $9 for just five hours of access per month, with the fees escalating from there. Kali, by contrast, was available to you forever for as many hours per month as you liked after you plunked down your one-time fee of $20.

So, Kali was popular right from its first release on April 26, 1995. Yet it was still an awkward piece of software for the casual user despite the duo’s best efforts, being tied to MS-DOS, whose support for TCP/IP relied on a creaky edifice of third-party tools. The arrival of Windows 95 was a godsend for Kali, as it was for computer gaming in general, making the hobby accessible in a way it had never been before. The so-called “Kali95” was available by early 1996, and things exploded from there. Kali struck countless gamers with all the force of a revelation; who would have dreamed that it could be so easy to play against another human online? Lloyd Case, for example, wrote in Computer Gaming World magazine that using Kali for the first time was “one of the most profound gaming experiences I’ve had in a long time.” Reminiscing seventeen years later, David L. Craddock described how “using Kali for the first time was like magic. Jumping into a game and playing with other people. It blew my fourteen-year-old mind.” In late 1996, the number of registered Kali users ticked past 50,000, even as quite possibly just as many or more were playing with cracked versions that bypassed the simplistic serial-number-registration process. First-person-shooter deathmatches abounded, but you could also play real-time strategies like Command & Conquer and Warcraft, or even the Links golf simulation. Computer Gaming World gave Kali a special year-end award for “Online-Enabling Technology.”

Kali for Windows 95.

Competitors were rushing in at a breakneck pace by this time, some of them far more conventionally “professional” than Kali, whose origin story was, as we’ve seen, as underground and organic as that of DOOM itself. The most prominent of the venture-capital-funded startups were MPlayer (co-founded by Brian Moriarty of Infocom and LucasArts fame, and employing Dani Bunten Berry as a consultant during the last months of her life) and the Total Entertainment Network, better known as simply TEN. In contrast to Kali’s one-time fee, they, like DWANGO before them, relied on subscription billing: $20 per month for MPlayer, $15 per month for TEN. Despite slick advertising and countless other advantages that Kali lacked, neither would ever come close to overtaking its scruffy older rival, which had price as well as oodles of grass-roots goodwill on its side. Jay Cotton:

It was always my belief that Kali would continue to be successful as long as I never got greedy. I wanted everyone to be so happy with their purchase that they would never hesitate to recommend it to a friend. [I would] never charge more than someone would be readily willing to pay. It also became a selling point that Kali only charged a one-time fee, with free upgrades forever. People really liked this, and it prevented newcomers (TEN, Heat [a service launched in 1997 by Sega of America], MPlayer, etc.) from being able to charge enough to pay for their expensive overheads.

Kali was able to compete with TEN, MPlayer, and Heat because it already had a large established user base (more users equals more fun) and because it was much, much cheaper. These new services wanted to charge a subscription fee, but didn’t provide enough added benefit to justify the added expense.

It was a heady rush indeed, although it would also prove a short-lived one; Kali’s competitors would all be out of business within a year or so of the turn of the millennium. Kali itself stuck around after that, but as a shadow of what it had been, strictly a place for old-timers to reminisce and play the old hits. “I keep it running just out of habit,” said Jay Cotton in 2013. “I make just enough money on website ads to pay for the server.” It still exists today, presumably as a result of the same force of habit.

One half of what Kali and its peers offered was all too obviously ephemeral from the start: as the Internet went mainstream, developers inevitably began building TCP/IP support right into their games, eliminating the need for an external IPX-to-TCP/IP bridge. (For example, Quake, id Software’s much-anticipated follow-up to DOOM, did just this when it finally arrived in 1996.) But the other half of what they offered was community, which may have seemed a more durable sort of benefit. As it happened, though, one clever studio did an end-run around them here as well.



The folks at Blizzard Entertainment, the small studio and publisher that was fast coming to rival id Software for the title of the hottest name in gaming, were enthusiastic supporters of Kali in the beginning, to the point of hand-tweaking Warcraft II, their mega-hit real-time strategy, to run optimally over the service. They were rewarded by seeing it surpass even DOOM to become the most popular game there of all. But as they were polishing their new action-CRPG Diablo for release in 1996, Mike O’Brien, a Blizzard programmer, suggested that they launch their own service that would do everything Kali did in terms of community, albeit for Blizzard’s games alone. And then he additionally suggested that they make it free, gambling that knowledge of its existence would sell enough games for them at retail to offset its maintenance costs. Blizzard’s unofficial motto had long been “Let’s be awesome,” reflecting their determination to sell exactly the games that real hardcore gamers were craving, honed to a perfect finish, and to always give them that little bit extra. What better way to be awesome than by letting their customers effortlessly play and socialize online, and to do so for free?

The idea was given an extra dollop of urgency by the fact that Westwood Studios, the maker of Warcraft‘s chief competitor Command & Conquer, had introduced a service called Westwood Chat that could launch people directly into a licensed version of Monopoly. (Shades of Dani Bunten Berry’s cherished childhood memories…) At the moment it supported only Monopoly, a title that appealed to a very different demographic from the hardcore crowd who favored Blizzard’s games, but who knew how long that would last?[4]

So, when Diablo shipped in the last week of 1996, it included something called Battle.net, a one-click chat and matchmaking service and multiplayer facilitator. Battle.net made everything easier than it had ever been before. It would even automatically patch your copy of the game to the latest version when you logged on, pioneering the “software as a service” model in gaming that has become everyday life in our current age of Steam. “It was so natural,” says Blizzard executive Max Schaefer. “You didn’t think about the fact that you were playing with a dude in Korea and a guy in Israel. It’s really a remarkable thing when you think about it. How often are people casually matched up in different parts of the world?” The answer to that question, of course, was “not very often” in the context of 1997. Today, it’s as normal as computers themselves, thanks to groundbreaking initiatives like this one. Blizzard programmer Jeff Strain:

We believed that in order for it [Battle.net] to really be embraced and adopted, that accessibility had to be there. The real catch for Battle.net was that it was inside-out rather than outside-in. You jumped right into the game. You connected players from within the game experience. You did not alt-tab off into a Web browser to set up your games and have the Web browser try to pass off information or something like that. It was a service designed from Day One to be built into actual games.

The combination of Diablo and Battle.net brought a new, more palpable sort of persistence to online gaming. Players of DOOM or Warcraft II might become known as hotshots on services like Kali, but their reputation conferred no tangible benefit once they entered a game session. A DOOM deathmatch or a Warcraft II battle was a one-and-done event, which everyone started on an equal footing, which everyone would exit again within an hour or so, with nothing but memories and perhaps bragging rights to show for what had transpired.

Diablo, however, was different. Although less narratively and systemically ambitious than many of its recent brethren, it was nevertheless a CRPG, a genre all about building up a character over many gaming sessions. Multiplayer Diablo retained this aspect: the first time you went online, you had to pick one of the three pre-made first-level characters to play, but after that you could keep bringing the same character back to session after session, with all of the skills and loot she had already collected. Suddenly the link between the real people in the chat rooms and their avatars that lived in the game proper was much more concrete. Many found it incredibly compelling. People started to assume the roles of their characters even when they were just hanging out in the chat rooms, started in some very real sense to live the game.

But it wasn’t all sunshine and roses. Battle.net became a breeding ground of the toxic behaviors that have continued to dog online gaming to this day, a social laboratory demonstrating what happens when you take a bunch of hyper-competitive, rambunctious young men and give them carte blanche to have at it any way they wish with virtual swords and spells. The service was soon awash with “griefers,” players who would join others on their adventures, ostensibly as their allies in the dungeon, then literally stab them in the back when they least expected it, killing their characters and running off with all of their hard-won loot. The experience could be downright traumatizing for the victims, who had thought they were joining up with friendly strangers simply to have fun together in a cool new game. “Going online and getting killed was so scarring,” acknowledges David Brevik, Diablo‘s original creator. “Those players are still feeling a little bit apprehensive.”

To make matters worse, many of the griefers were also cheaters. Diablo had been born and bred a single-player game; multiplayer had been a very late addition. This had major ramifications. Diablo stored all the information about the character you played online on your local hard drive rather than the Battle.net server. Learn how to modify this file, and you could create a veritable god for yourself in about ten minutes, instead of the dozens of hours it would take playing the honest way. “Trainers” — programs that could automatically do the necessary hacking for you — spread like wildfire across the Internet. Other folks learned to hack the game’s executable files themselves. Most infamously, they figured out ways to attack other players while they were still in the game’s above-ground town, supposedly a safe space reserved for shopping and healing. Battle.net as a whole took on a siege mentality, as people who wanted to play honorably and honestly learned to lock the masses out with passwords that they exchanged only with trusted friends. This worked after a fashion, but it was also a betrayal of the core premise and advantage of Battle.net, the ability to find a quick pick-up game anytime you wanted one. Yet there was nothing Blizzard could do about it without rewriting the whole game from the ground up. They would eventually do this — but they would call the end result Diablo II. In the meanwhile, it was a case of player beware.
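If you are wondering how such cheating could possibly be so easy, a toy example in C illustrates the problem of trusting the client. Needless to say, this is not Diablo’s real save-file format or code, just a sketch of the principle: whatever lives on the player’s own disk is the player’s to rewrite.

```c
/* An illustrative sketch (not Diablo's actual file format) of why storing
   the authoritative character on the player's own disk invites cheating:
   anyone who works out the layout can rewrite the bytes and log in as a
   demigod. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t level;
    uint32_t strength;
    uint32_t gold;
} Character;

int main(void)
{
    /* The game writes the character out between sessions... */
    Character hero = { 1, 10, 50 };
    FILE *f = fopen("hero.sav", "wb");
    if (!f) return 1;
    fwrite(&hero, sizeof hero, 1, f);
    fclose(f);

    /* ...and a "trainer" simply rewrites the same file before the next one.
       A server-authoritative design keeps and sanity-checks this state on
       the server instead, which is essentially the rewrite that became
       Diablo II. */
    Character cheat = { 99, 255, 1000000 };
    f = fopen("hero.sav", "wb");
    if (!f) return 1;
    fwrite(&cheat, sizeof cheat, 1, f);
    fclose(f);
    return 0;
}
```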

It’s important to understand that, for all that it resembled what would come later all too much from a sociological perspective, multiplayer Diablo was still no more persistent than Moria and Oubliette had been on the old PLATO network: each player’s character was retained from session to session, but nothing about the state of the world. Each world, or instance of the game, could contain a maximum of four human players, and disappeared as soon as the last player left it, leaving as its legacy only the experience points and items its inhabitants had collected from it while it existed. Players could and did kill the demon Diablo, the sole goal of the single-player game, one that usually required ten hours or more of questing to achieve, over and over again in the online version. In this sense, multiplayer Diablo was a completely different game from single-player Diablo, replacing the simple quest narrative of the latter with a social meta-game of character-building and player-versus-player combat.

For lots and lots of people, this was lots and lots of fun; Diablo was hugely popular despite all of the exploits it permitted — indeed, for some players perchance, because of them. It became one of the biggest computer games of the 1990s, bringing online gaming to the masses in a way that even Kali had never managed. Yet there was still a ways to go to reach total persistence, to bring a permanent virtual world to life. Next time, then, we’ll see how mainstream commercial games of the 1990s sought to achieve a degree of persistence that the first MUD could boast of already in 1979. These latest virtual worlds, however, would attempt to do so with all the bells and whistles and audiovisual niceties that a new generation of gamers raised on multimedia and 3D graphics demanded. An old dog in the CRPG space was about to learn a new trick, creating in the process a new gaming acronym that’s even more of a mouthful than POMG.





Sources: the books Stay Awhile and Listen Volumes 1 and 2 by David L. Craddock, Masters of Doom by David Kushner, and The Friendly Orange Glow by Brian Dear; Retro Gamer 43, 90, and 103; Computer Gaming World of September 1996 and May 1997; Next Generation of March 1997. Online sources include “The Story of Battle.net” by Wes Fenlon at PC Gamer, Dan Griliopoulos’s collection of interviews about Command & Conquer, Brian Moriarty’s speech honoring Dani Bunten Berry from the 1998 Game Developers Conference, and Jay Cotton’s history of Kali on the DOOM II fan site. Plus some posts on The CRPG Addict, to which I’ve linked in the article proper.

Footnotes
1 The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name.
2 Not the same game as the 2002 Bioware CRPG of the same name.
3 Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992.
4 Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.
 
 


The Next Generation in Graphics, Part 1: Three Dimensions in Software (or, Quake and Its Discontents)

“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.” The same might be said about the mathematical algorithms we choose to prioritize — especially in these modern times, when the right set of formulas can be worth many millions of dollars, can be trade secrets as jealously guarded as the recipes for Coca-Cola or McDonald’s Special Sauce.

We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

It is a rule in almost all kinds of engineering that, the more specialized a device is, the more efficiently it can perform the tasks that lie within its limited sphere. This rule holds true as much in computing as anywhere else. So, when software proved able to stretch only so far in the face of the limited general-purpose computing power of the day, some started to build their video codecs into specialized hardware add-ons.

Just a few years later, after the zeitgeist in games had shifted, the whole process repeated itself in a different context.

By the middle years of the decade, with the limitations of working with canned video clips becoming all too plain, interactive movies were beginning to look like a severe case of the emperor’s new clothes. The games industry therefore shifted its hopeful gaze to another approach, one that would prove a much more lasting transformation in the way games were made. This 3D Revolution did have one point of similarity with the mooted and then abandoned meeting of Silicon Valley and Hollywood: it too was driven by algorithms, implemented first in software and then in hardware.

It was different, however, in that the entire industry looked to one man to lead it into its algorithmic 3D future. That man’s name was John Carmack.



Whether they happen to be pixel art hand-drawn by human artists or video footage captured by cameras, 2D graphics already exist on disk before they appear on the monitor screen. And therein lies the source of their limitations. Clever programmers can manipulate them to some extent — pixel art generally more so than digitized video — but the possibilities are bounded by the fundamentally static nature of the source material. 3D graphics, however, are literally drawn by the computer. They can go anywhere and do just about anything. For, while 2D graphics are stored as a concrete grid of pixels, 3D graphics are described using only the abstract language of mathematics — a language able to describe not just a scene but an entire world, assuming you have a powerful enough computer running a good enough algorithm.

Like so many things that get really complicated really quickly, the basic concepts of 3D graphics are disarmingly simple. The process behind them can be divided into two phases: the modeling phase and the rendering, or rasterization, phase.

It all begins with simple two-dimensional shapes of the sort we all remember from middle-school geometry, each defined as a collection of points on a plane and straight lines connecting them together. By combining and arranging these two-dimensional shapes, or surfaces, together in three-dimensional space, we can make solids — or, in the language of computerized 3D graphics, objects.

Here we see how 3D objects can be made ever more complex by building them out of ever more surfaces. The trade-off is that more complex objects require more computing power to render in a timely fashion.
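In code, the modeling phase really is as simple as it sounds. Here is a minimal sketch in C, with structures and numbers of my own invention rather than anything from a real engine: an object is just a list of points plus a list of triangles that connect them.

```c
/* A minimal sketch of the modeling phase: an "object" is nothing more than a
   list of points in space and a list of flat surfaces (here triangles) that
   connect them. The structures and the pyramid below are illustrative only. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;   /* a point (vertex) in 3D space     */
typedef struct { int a, b, c; } Tri;      /* one triangular surface, given as */
                                          /* three indices into verts[]       */
int main(void)
{
    /* A square-based pyramid: five vertices... */
    Vec3 verts[] = {
        { -1, 0, -1 }, { 1, 0, -1 }, { 1, 0, 1 }, { -1, 0, 1 },  /* base */
        {  0, 2,  0 }                                            /* apex */
    };
    /* ...and six triangles (two for the square base, four for the sides). */
    Tri tris[] = {
        { 0, 1, 2 }, { 0, 2, 3 },         /* base  */
        { 0, 1, 4 }, { 1, 2, 4 },         /* sides */
        { 2, 3, 4 }, { 3, 0, 4 }
    };
    printf("object: %zu vertices, %zu surfaces\n",
           sizeof verts / sizeof verts[0], sizeof tris / sizeof tris[0]);
    return 0;
}
```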

Once we have a collection of objects, we can put them into a world space, wherever we like and at whatever angle of orientation we like. This world space is laid out as a three-dimensional grid, with its point of origin — i.e., the point where X, Y, and Z are all zero — wherever we wish it to be. In addition to our objects, we also place within it a camera — or, if you like, an observer in our world — at whatever position and angle of orientation we wish. At their simplest, 3D graphics require nothing more at the modeling phase.

We sometimes call the second phase the “rasterization” phase in reference to the orderly two-dimensional grid of pixels which make up the image seen on a monitor screen, which in computer-science parlance is known as a raster. The whole point of this rasterization phase, then, is to make our computer’s monitor a window into our imaginary world from the point of view of our imaginary camera. This entails converting said world’s three dimensions back into our two-dimensional raster of pixels, using the rules of perspective that have been understood by human artists since the Renaissance.

We can think of rasterizing as observing a scene through a window screen. Each square in the mesh is one pixel, which can be exactly one color. The whole process of 3D rendering ultimately comes down to figuring out what each of those colors should be.
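The essence of that conversion can likewise be captured in a few lines. The following C sketch, with a screen size and field of view that are arbitrary choices of my own rather than any particular engine’s, projects a single point that is already expressed relative to the camera onto a 320 by 200 pixel grid using the classic divide-by-depth rule of perspective.

```c
/* A minimal sketch of the heart of rasterization: a point already expressed
   in the camera's own coordinate system (x to the right, y up, z straight
   ahead into the scene) is squashed onto the 2D pixel grid by the classic
   divide-by-depth rule of perspective. */
#include <stdio.h>
#include <math.h>

#define WIDTH  320
#define HEIGHT 200

/* Project one camera-space point to pixel coordinates. Returns 0 if the
   point lies behind the camera and so cannot be drawn at all. */
static int project(float x, float y, float z, int *px, int *py)
{
    const float pi    = 3.14159265f;
    const float fov   = 60.0f * pi / 180.0f;               /* 60-degree view */
    const float scale = (WIDTH / 2.0f) / tanf(fov / 2.0f);

    if (z <= 0.0f) return 0;                    /* behind the camera          */
    *px = (int)(WIDTH  / 2.0f + scale * x / z); /* the farther away a point   */
    *py = (int)(HEIGHT / 2.0f - scale * y / z); /* is, the nearer the center  */
    return 1;                                   /* it lands: perspective.     */
}

int main(void)
{
    int px, py;
    if (project(1.0f, 1.0f, 5.0f, &px, &py))
        printf("camera-space point (1, 1, 5) maps to pixel (%d, %d)\n", px, py);
    return 0;
}
```

Everything else in rasterization, from deciding which surfaces hide which to filling in the pixels between the projected points, builds on this one operation.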

The most basic of all 3D graphics are of the “wire-frame” stripe, which attempt to draw only the lines that form the edges of their surfaces. They were seen fairly frequently on microcomputers as far back as the early 1980s, the most iconic example undoubtedly being the classic 1984 space-trading game Elite.

Even in something as simple as Elite, we can begin to see how 3D graphics blur the lines between a purely presentation-level technology and a full-blown world simulation. When we have one enemy spaceship in our sights in Elite, there might be several others above, behind, or below us, which the 3D engine “knows” about but which we may not. Combined with a physics engine and some player and computer agency in the model world (taking here the form of lasers and thrusters), it provides the raw materials for a game. Small wonder that so many game developers came to see 3D graphics as such a natural fit.

But, for all that those wire frames in Elite might have had their novel charm in their day, programmers realized that the aesthetics of 3D graphics had to get better for them to become a viable proposition over the long haul. This realization touched off an algorithmic arms race that is still ongoing to this day. The obvious first step was to paint in the surfaces of each solid in single blocks of color, as the later versions of Elite that were written for 16-bit rather than 8-bit machines often did. It was an improvement in a way, but it still looked jarringly artificial, even against a spartan star field in outer space.

The next way station on the road to a semi-realistic-looking computer-generated world was light sources of varying strengths, positioned in the world with X, Y, and Z coordinates of their own, casting their illumination and shadows realistically on the objects to be found there.

A 3D scene with light sources.
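The simplest version of such lighting, the diffuse model attributed to Lambert, says only that a surface is brightest when it faces the light head-on and fades to black as it turns away. Here is a minimal sketch in C, with a made-up surface and light position of my own choosing:

```c
/* A minimal sketch of the simplest lighting rule (Lambert's diffuse model):
   a surface is brightest when it faces the light head-on, and fades to black
   as it turns away. The vectors and light position here are arbitrary. */
#include <stdio.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  sub(Vec3 a, Vec3 b) { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(Vec3 v)
{
    float len = sqrtf(dot(v, v));
    return (Vec3){ v.x/len, v.y/len, v.z/len };
}

int main(void)
{
    Vec3 surface_point = { 0, 0, 0 };
    Vec3 normal        = { 0, 1, 0 };          /* surface faces straight up   */
    Vec3 light_pos     = { 2, 4, 0 };          /* a light source in the world */

    Vec3 to_light = normalize(sub(light_pos, surface_point));
    float brightness = dot(normal, to_light);  /* cosine of the angle         */
    if (brightness < 0) brightness = 0;        /* facing away: no light       */

    printf("diffuse brightness: %.2f of full intensity\n", brightness);
    return 0;
}
```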

The final step was to add textures, small pictures that were painted onto surfaces in place of uniform blocks of color; think of the pitted paint job of a tired X-Wing fighter or the camouflage of a Sherman tank. Textures introduced an enormous degree of complication at the rasterization stage; it wasn’t easy for 3D engines to make them look believable from a multitude of different lines of sight. That said, believable lighting was almost as complicated. Textures or lighting, or both, were already the fodder for many an academic thesis before microcomputers even existed.

A 3D scene with light sources and textures.
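At its simplest, texturing just means that each point on a surface carries a pair of coordinates into a small image, and the renderer looks up the color stored there. The C sketch below uses a procedural checkerboard as a stand-in for a real texture image; what it conveniently leaves out is the genuinely hard part mentioned above, interpolating those coordinates correctly across a surface seen in perspective.

```c
/* A minimal sketch of texturing: instead of one flat color, each point on a
   surface carries (u, v) coordinates into a small picture, and the renderer
   looks up the color stored there. The 8x8 checkerboard is a stand-in for a
   real texture image. */
#include <stdio.h>

#define TEX_SIZE 8

/* Look up the texel at texture coordinates (u, v), each in the range 0..1. */
static int sample_checkerboard(float u, float v)
{
    int tx = (int)(u * TEX_SIZE) % TEX_SIZE;   /* which column of the image  */
    int ty = (int)(v * TEX_SIZE) % TEX_SIZE;   /* which row of the image     */
    return (tx + ty) % 2 ? 255 : 0;            /* alternating light and dark */
}

int main(void)
{
    /* Sample a few points across one surface. */
    printf("texel at (0.10, 0.10) = %d\n", sample_checkerboard(0.10f, 0.10f));
    printf("texel at (0.20, 0.10) = %d\n", sample_checkerboard(0.20f, 0.10f));
    printf("texel at (0.95, 0.95) = %d\n", sample_checkerboard(0.95f, 0.95f));
    return 0;
}
```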

In the more results-focused milieu of commercial game development, where what was possible was determined largely by which types of microprocessors Intel and Motorola were selling the most of in any given year, programmers were forced to choose between compromised visions of the academic ideal. These broke down into two categories, neatly exemplified by the two most profitable computer games of the 1990s. Those games that followed in one or the other’s footsteps came to be known as the “Myst clones” and the “DOOM clones.” They could hardly have been more dissimilar in personality, yet they were both symbols of a burgeoning 3D revolution.

The Myst clones got their name from a game developed by Cyan Studios and published by Brøderbund in September of 1993, which went on to sell at least 6 million copies as a boxed retail product and quite likely millions more as a pack-in of one description or another. Myst and the many games that copied its approach tended to be, as even their most strident detractors had to admit, rather beautiful to look at. This was because they didn’t attempt to render their 3D imagery in real time; their rendering was instead done beforehand, often on beefy workstation-class machines, then captured as finished rasters of pixels on disk. Given that they worked with graphics that needed to be rendered only once and could be allowed to take hours to do so if necessary, the creators of games like this could pull out all the stops in terms of textures, lighting, and the sheer number and complexity of the 3D solids that made up their worlds.

These games’ disadvantage — a pretty darn massive one in the opinion of many players — was that their scope of interactive potential was as sharply limited in its way as that of all those interactive movies built around canned video clips that the industry was slowly giving up on. They could present their worlds to their players only as a collection of pre-rendered nodes to be jumped between, could do nothing on the fly. These limitations led most of their designers to build their gameplay around set-piece puzzles found in otherwise static, non-interactive environments, which most players soon started to find a bit boring. Although the genre had its contemplative pleasures and its dedicated aficionados who appreciated them, its appeal as anything other than a tech demo — the basis on which the original Myst was primarily sold — turned out to be the very definition of niche, as the publishers of Myst clones belatedly learned to their dismay. The harsh reality became undeniable once Riven, the much-anticipated, sumptuously beautiful sequel to Myst, under-performed expectations by “only” selling 1 million copies when it finally appeared four years after its hallowed predecessor. With the exception only of Titanic: Adventure out of Time, which owed its fluke success to a certain James Cameron movie with which it happened to share a name and a setting, no other game of this style ever cracked half a million in unit sales. The genre has been off the mainstream radar for decades now.

The DOOM clones, on the other hand, have proved a far more enduring fixture of mainstream gaming. They took their name, of course, from the landmark game of first-person carnage which the energetic young men of id Software released just a couple of months after Myst reached store shelves. John Carmack, the mastermind of the DOOM engine, managed to present a dynamic, seamless, apparently 3D world in place of the static nodes of Myst, and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal. And then, when even that wasn’t enough to get the job done, he threw out feature after feature that the academics whose papers he had pored over insisted was essential for any “real” 3D engine. His motto was, if you can’t get it done honestly, cheat, by hard-coding assumptions about the world into your algorithms and simply not letting the player — or the level designer — violate them. The end result was no Myst-like archetype of beauty in still screenshots. It pasted 2D sprites into its world whenever there wasn’t horsepower enough to do real modeling, had an understanding of light and its properties that is most kindly described as rudimentary, and couldn’t even handle sloping floors or ceilings, or walls that weren’t perfectly vertical. Heck, it didn’t even let you look up or down.

And absolutely none of that mattered. DOOM may have looked a bit crude in freeze-frame, but millions of gamers found it awe-inspiring to behold in motion. Indeed, many of them thought that Carmack’s engine, combined with John Romero and Sandy Petersen’s devious level designs, gave them the most fun they’d ever had sitting behind a computer. This was immersion of a level they’d barely imagined possible, the perfect demonstration of the real potential of 3D graphics — even if it actually was, as John Carmack would be the first to admit, only 2.5D at best. No matter; DOOM felt like real 3D, and that was enough.

A hit game will always attract imitators, and a massive hit will attract legions of them. Accordingly, the market was soon flooded with, if anything, even more DOOM clones than Myst clones, all running in similar 2.5D engines, the product of both intense reverse engineering of DOOM itself and Carmack’s habit of talking freely about how he made the magic happen to pretty much anyone who asked him, no matter how much his colleagues at id begged him not to. “Programming is not a zero-sum game,” he said. “Teaching something to a fellow programmer doesn’t take it away from you. I’m happy to share what I can because I’m in it for the love of programming.” Carmack was elevated to veritable godhood, the prophet on the 3D mountaintop passing down whatever scraps of wisdom he deigned to share with the lesser mortals below.

Seen in retrospect, the DOOM clones are, like the Myst clones, a fairly anonymous lot for the most part, doubling down on transgressive ultra-violence instead of majestic isolation, but equally failing to capture a certain ineffable something that lay beyond the nuts and bolts of their inspiration’s technology. The most important difference between the Myst and DOOM clones came down to the filthy lucre of dollar and unit sales: whereas Myst‘s coattails proved largely illusory, producing few other hits, DOOM‘s were anything but. Most people who had bought Myst, it seemed, were satisfied with that single purchase; people who bought DOOM were left wanting more first-person mayhem, even if it wasn’t quite up to the same standard.

The one DOOM clone that came closest to replacing DOOM itself in the hearts of gamers was known as Duke Nukem 3D. Perhaps that isn’t surprising, given its pedigree: it was a product of 3D Realms, the rebranded incarnation of Scott Miller’s Apogee Software. Whilst trading under the earlier name, Miller had pioneered the episodic shareware model of game distribution, a way of escaping the heavy-handed group-think of the major boxed-game publishers and their tediously high-concept interactive movies in favor of games that were exponentially cheaper to develop, but also rawer, more visceral, more in line with what the teenage and twenty-something males who still constituted the large majority of dedicated gamers were actually jonesing to play. Miller had discovered the young men of id when they were still working for a disk magazine in Shreveport, Louisiana. He had then convinced them to move to his own glossier, better-connected hometown of Dallas, Texas, and distributed their proto-DOOM shooter Wolfenstein 3D to great success. His protégés had elected to strike out on their own when the time came to release DOOM, but it’s fair to say that that game would probably never have come to exist at all if not for their shareware Svengali. And even if it had, it probably wouldn’t have made them so much money; Jay Wilbur, id’s own tireless guerilla marketer, learned most of his tricks from watching Scott Miller.

Still a man with a keen sense of what his customers really wanted, Miller re-branded Apogee as 3D Realms as a way of signifying its continuing relevance amidst the 3D revolution that took the games industry by storm after DOOM. Then he, his junior partner George Broussard, and 3D Realms’s technical mastermind Ken Silverman set about making a DOOM-like engine of their own, known as Build, which they could sell to other developers who wanted to get up and running quickly. And they used the same engine to make a game of their own, which would turn out to be the most memorable of all those built with Build.

Duke Nukem 3D‘s secret weapon was one of the few boxes in the rubric of mainstream gaming success that DOOM had failed to tick off: a memorable character to serve as both star and mascot. First conceived several years earlier for a pair of Apogee 2D platformers, Duke Nukem was Joseph Lieberman’s worst nightmare, an unrepentant gangster with equally insatiable appetites for bombs and boobies, a fellow who “thinks the Bureau of Alcohol, Tobacco, and Firearms is a convenience store,” as his advertising trumpeted. His latest game combined some of the best, tightest level design yet seen outside of DOOM with a festival of adolescent transgression, from toilet water that served as health potions to strippers who would flash their pixelated breasts at you for the price of a dollar bill. The whole thing was topped off with the truly over-the-top quips of Duke himself: “I’m gonna rip off your head and shit down your neck!”; “Your face? Your ass? What’s the difference?” It was an unbeatable combination, proof positive that Miller’s ability to read his market was undimmed. Released in January of 1996, relatively late in the day for this generation of 3D — or rather 2.5D — technology, Duke Nukem 3D became by some reports the best-selling single computer game of that entire year. It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this. And so the cycle of life continues…

In a porno shop, shooting it out with policemen who are literally pigs…

Duke Nukem 3D was a triumph of design and attitude rather than technology; in keeping with most of the DOOM clones, the Build engine’s technical innovations over its inspiration were fairly modest. John Carmack scoffed that his old friends’ creation looked like it was “held together with bubble gum.”

The game that did push the technology envelope farthest, albeit without quite managing to escape the ghetto of the DOOM clones, was also a sign in another way of how quickly DOOM was changing the industry: rather than stemming from scruffy veterans of the shareware scene like id and 3D Realms, it came from the heart of the industry’s old-money establishment — from no less respectable and well-financed an entity than George Lucas’s very own games studio.

LucasArts’s Dark Forces was a shooter set in the Star Wars universe, which disappointed everyone right out of the gate with the news that it was not going to let you fight with a lightsaber. The developers had taken a hard look at it, they said, but concluded in the end that it just wasn’t possible to pull off satisfactorily within the hardware specifications they had to meet. This failing was especially ironic in light of the fact that they had chosen to name their new 2.5D engine “Jedi.” But they partially atoned for it by making the Jedi engine capable of hosting unprecedentedly enormous levels — not just horizontally so, but vertically as well. Dark Forces was full of yawning drop-offs and cavernous open spaces, the likes of which you never saw in DOOM — or Duke Nukem 3D, for that matter, despite its release date of almost a year after Dark Forces. Even more importantly, Dark Forces felt like Star Wars, right from the moment that John Williams’s stirring theme played over stage-setting text which scrolled away into the frame rather than across it. Although they weren’t allowed to make any of the movies’ characters their game’s star, LucasArts created a serviceable if slightly generic stand-in named Kyle Katarn, then sent him off on vertigo-inducing chases through huge levels stuffed to the gills with stormtroopers in urgent need of remedial gunnery training, just like in the movies. Although Dark Forces toned down the violence that so many other DOOM clones were making such a selling point out of — there was no blood whatsoever on display here, just as there had not been in the movies — it compensated by giving gamers the chance to live out some of their most treasured childhood media memories, at a time when there were no new non-interactive Star Wars experiences to be had.

Unfortunately, LucasArts’s design instincts weren’t quite on a par with their presentation and technology. Dark Forces‘s levels were horribly confusing, providing little guidance about what to do or where to go in spaces whose sheer three-dimensional size and scope made the two-dimensional auto-map all but useless. Almost everyone who goes back to play the game today tends to agree that it just isn’t as much fun as it ought to be. At the time, though, the Star Wars connection and its technical innovations were enough to make Dark Forces a hit almost the equal of DOOM and Duke Nukem 3D. Even John Carmack made a point of praising LucasArts for what they had managed to pull off on hardware not much better than that demanded by DOOM.

Yet everyone seemed to be waiting on Carmack himself, the industry’s anointed Master of 3D Algorithms, to initiate the real technological paradigm shift. It was obvious what that must entail: an actual, totally non-fake rendered-on-the-fly first-person 3D engine, without all of the compromises that had marked DOOM and its imitators. Such engines weren’t entirely unheard of; the Boston studio Looking Glass Technologies had been working with them for five years, employing them in such innovative, immersive games as Ultima Underworld and System Shock. But those games were qualitatively different from DOOM and its clones: slower, more complex, more cerebral. The mainstream wanted a game that played just as quickly and violently and viscerally as DOOM, but that did it in uncompromising real 3D. With computers getting faster every year and with a genius like John Carmack to hand, it ought to be possible.

And so Carmack duly went to work on just such an engine, for a game that was to be called Quake. His ever-excitable level designer John Romero, who had the looks and personality to be the rock star gaming had been craving for years, was all in with bells on. “The next game is going to blow DOOM all to hell,” he told his legions of adoring fans. “DOOM totally sucks in comparison to our next game! Quake is going to be a bigger step over DOOM than DOOM was over Wolf 3D.” Drunk on success and adulation, he said that Quake would be more than just a game: “It will be a movement.” (Whatever that meant!) The drumbeat of excitement building outside of id almost seemed to justify his hyperbole; from all the way across the Atlantic, the British magazine PC Zone declared that the upcoming Quake would be “the most important PC game ever made.” The soundtrack alone was to be a significant milestone in the incorporation of gaming into mainstream pop culture, being the work of Trent Reznor and his enormously popular industrial-rock band Nine Inch Nails. Such a collaboration would have been unthinkable just a few years earlier.

While Romero was enjoying life as gaming’s own preeminent rock star and waiting for Carmack to get far enough along on the Quake engine to give him something to do, Carmack was living like a monk, working from 4 PM to 4 AM every day. In another sign of just how quickly id had moved up in the world, he had found himself an unexpectedly well-credentialed programming partner. Michael Abrash was one of the establishment’s star programmers, who had written a ton of magazine articles and two highly regarded technical tomes on assembly-language and graphics programming and was now a part of Microsoft’s Windows NT team. When Carmack, who had cut his teeth on Abrash’s writings, invited him out of the blue to come to Dallas and do Quake with him, Bill Gates himself tried to dissuade his employee. “You might not like it down there,” he warned. Abrash was, after all, pushing 40, a staid sort with an almost academic demeanor, while id was a nest of hyperactive arrested adolescence on a permanent sugar high. But he went anyway, because he was pretty sure Carmack was a genius, and because Carmack seemed to Abrash a bit lonely, working all night every night with only his computer for company. Abrash thought he saw in Quake a first glimmer of a new form of virtual existence that companies like Meta are still chasing eagerly today: “a pretty complicated, online, networked universe,” all in glorious embodied 3D. “We do Quake, other companies do other games, people start building worlds with our format and engine and tools, and these worlds can be glommed together via doorways from one to another. To me this sounds like a recipe for the first real cyberspace, which I believe will happen the way a real space station or habitat probably would — by accretion.”

He might not have come down if he had known precisely what he was getting into; he would later compare making Quake to “being strapped onto a rocket during takeoff in the middle of a hurricane.” The project proved a tumultuous, exhausting struggle that very nearly broke id as a cohesive company, even as the money from DOOM was continuing to roll in. (id’s annual revenues reached $15.6 million in 1995, a very impressive figure for what was still a relatively tiny company, with a staff numbering only a few dozen.)

Romero envisioned a game that would be as innovative in terms of gameplay as of technology, one that would be built largely around sword-fighting and other forms of hand-to-hand combat rather than gunplay — the same style of combat that LucasArts had decided was too impractical for Dark Forces. Some of his early descriptions make Quake sound more like a full-fledged CRPG in the offing than another straightforward action game. But it just wouldn’t come together; some of Romero’s colleagues attributed this to his failure to communicate his expectations to them, and were left to suspect that even he wasn’t quite sure what he was trying to make.

Carmack finally stepped in and ordered his design team to make Quake essentially a more graphically impressive DOOM. Romero accepted the decision outwardly, but seethed inwardly at this breach of longstanding id etiquette; Carmack had always made the engines, then given Romero free rein to turn them into games. Romero largely checked out, opening a door that ambitious newcomers like American McGee and Tim Willits, who had come up through the thriving DOOM modding community, didn’t hesitate to push through. The offices of id had always been as hyper-competitive as a DOOM deathmatch, but now the atmosphere was becoming a toxic stew of buried resentments.

In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling. Almost every item not directly related to the completion of Quake was gone. The only privacy to be found was between the padded earpieces of headphones.”

Wired magazine’s August 1996 cover, showing John Carmack flanked by John Romero and Adrian Carmack, marked the end of an era. By the time it appeared on newsstands, Romero had already been fired.

Needless to say, it didn’t have the effect Carmack had hoped for. In his book-length history of id’s early life and times, journalist David Kushner paints a jittery, unnerving picture of the final months of Quake‘s development: they “became a blur of silent and intense all-nighters, punctuated by the occasional crash of a keyboard against a wall. The construction crew had turned the office into a heap. The guys were taking their frustrations out by hurling computer parts into the drywall like knives.” Michael Abrash is more succinct: “A month before shipping, we were sick to death of working on Quake.” And level designer Sandy Petersen, the old man of the group, who did his best to keep his head down and stay out of the intra-office cold war, is even more so: “[Quake] was not fun to do.”

Quake was finally finished in June of 1996. It would prove a transitional game in more ways than one, caught between where games had recently been and where they were going. Still staying true to that odd spirit of hacker idealism that coexisted with his lust for ever faster Ferraris, Carmack insisted that Quake be made available as shareware, so that people could try it out before plunking down its full price. The game accordingly got a confusing, staggered release, much to the chagrin of its official publisher GT Interactive. To kick things off, the first eight levels went up online. Shortly after, there appeared in stores a $10 CD of the full game that had to be unlocked by paying id an additional $50 in order to play beyond the eighth level. Only after that, in August of 1996, did the game appear in a conventional retail edition.

Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches. Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.

So, the era of shareware as an underground pipeline of cutting-edge games came to an end with Quake. From now on, id would concentrate on boxed games selling for full price, as would all of their fellow survivors from that wild and woolly time. Gaming’s underground had become its establishment.

But its distribution model wasn’t the only sense in which Quake was as much a throwback as a step forward. It held fast as well to Carmack’s indifference to the fictional context of id’s games, as illustrated by his famous claim that the story behind a game was no more important than the story behind a porn movie. It would be blatantly incorrect to claim that the DOOM clones which flooded the market between 1994 and 1996 represented some great explosion of the potential of interactive narrative, but they had begun to show some interest, if not precisely in the elaborate set-piece storytelling of adventure games, then at least in the appeal of setting and texture. Dark Forces had been a pioneer in this respect, what with its between-levels cut scenes, its relatively fleshed-out main character, and most of all its environments that really did look and feel like the Star Wars films, from their brutalist architecture to John Williams’s unmistakable score. Even Duke Nukem 3D had the character of Duke, plus a distinctively seedy, neon-soaked post-apocalyptic Los Angeles for him to run around in. No one would accuse it of being an overly mature aesthetic vision, but it certainly was a unified one.

Quake, on the other hand, displayed all the signs of its fractious process of creation, of half a dozen wayward designers all pulling in different directions. From a central hub, you took “slipgates” into alternate dimensions that contained a little bit of everything on the designers’ not-overly-discriminating pop-culture radar, from zombie flicks to Dungeons & Dragons, from Jaws to H.P. Lovecraft, from The Terminator to heavy-metal music, and so the game wound up not making much of a distinct impression at all.

Most creative works are stamped with the mood of the people who created them, no matter how hard the project managers try to separate the art from the artists. With its color palette dominated by shocks of orange and red, DOOM had almost literally burst off the monitor screen with the edgy joie de vivre of a group of young men whom nobody had expected to amount to much of anything, who suddenly found themselves on the verge of remaking the business of games in their own unkempt image. Quake felt tired by contrast. Even its attempts to blow past the barriers of good taste seemed more obligatory than inspired; the Satanic symbolism, elaborate torture devices, severed heads, and other forms of gore were outdone by other games that were already pushing the envelope even further. This game felt almost somber — not an emotion anyone had ever before associated with id. Its levels were slower and emptier than those of DOOM, with a color palette full of mournful browns and other earth tones. Even the much-vaunted soundtrack wound up rather underwhelming. It was bereft of the melodic hooks that had made Nine Inch Nails’s previous output more palatable for radio listeners than that of most other “extreme” bands; it was more an exercise in sound design than music composition. One couldn’t help but suspect that Trent Reznor had held back all of his good material for his band’s next real record.

At its worst, Quake felt like a tech demo waiting for someone to turn it into an actual game, proving that John Carmack needed John Romero as badly as Romero needed him. But that once-fruitful relationship was never to be rehabilitated: Carmack fired Romero within days of finishing Quake. The two would never work together again.

It was truly the end of an era at id. Sandy Petersen was soon let go as well, Michael Abrash went back to the comfortable bosom of Microsoft, and Jay Wilbur quit for the best of all possible reasons: because his son asked him, “How come all the other daddies go to the baseball games and you never do?” All of them left as exhausted as Quake looks and feels.

Of course, there was nary a hint of Quake‘s infelicities to be found in the press coverage that greeted its release. Even more so than most media industries, the games industry has always run on enthusiasm, and it had no desire at this particular juncture to eat its own by pointing out the flaws in the most important PC game ever made. The coverage in the magazines was marked by a cloying fan-boy fawning that was becoming ever more sadly prominent in gamer culture. “We are not even worthy to lick your toenails free of grit and fluffy sock detritus,” PC Zone wrote in a public letter to id. “We genuflect deeply and offer our bare chests for you to stab with a pair of scissors.” (Eww! A sense of proportion is as badly lacking as a sense of self-respect…) Even the usually sober-minded (by gaming-journalism standards) Computer Gaming World got a little bit creepy: “Describing Quake is like talking about sex. It must be experienced to be fully appreciated.”

Still, I would be a poor historian indeed if I called all the hyperbole of 1996 entirely unjustified. The fact is that the passage of time has tended to emphasize Quake‘s weaknesses, which are mostly in the realm of design and aesthetics, whilst obscuring its contemporary strengths, which were in the realm of technology. Although not quite the first game to graft a true 3D engine onto ultra-fast-action gameplay — Interplay’s Descent beat it to the market by more than a year — it certainly did so more flexibly and credibly than anything else to date, even if Carmack still wasn’t above cheating a bit when push came to shove. (By no means is the Quake engine entirely free of tricksy 2D sprites in places where proper 3D models are just too expensive to render.)

Nevertheless, it’s difficult to fully convey today just how revolutionary the granular details of Quake seemed in 1996: the way you could look up and down and all around you with complete freedom; the way its physics engine made guns kick so that you could almost feel it in your mouse hand; the way you could dive into water and experience the visceral sensation of actually swimming; the way the wood paneling of its walls glinted realistically under the overhead lighting. Such things are commonplace today, but Quake paved the way. Most of the complaints I’ve raised about it could be mitigated by the simple expedient of not even bothering with the lackluster single-player campaign, of just playing it with your mates in deathmatch.

But even if you preferred to play alone, Quake was a sign of better things to come. “It goes beyond the game and more into the engine and the possibilities,” says Rob Smith, who watched the Quake mania come and go as the editor of PC Gamer magazine. “Quake presented options to countless designers. The game itself doesn’t make many ‘all-time’ lists, but its impact [was] as a game changer for 3D gaming, [an] engine that allowed other game makers to express themselves.” For with John Carmack, the industry’s anointed Master of 3D Algorithms, having shown what was possible and talking as freely as ever about how he had achieved it, and with Michael Abrash soon to write an entire book about how he and Carmack had made the magic happen, more games of this type, ready and able to harness the technology of true 3D to more exciting designs, couldn’t be far behind. “We’ve pretty much decided that our niche is in first-person futuristic action games,” said John Carmack. “We stumble when we get away from the techno stuff.” The industry was settling into a model that would remain in place for years to come: id would show what was possible with the technology of 3D graphics, then leave it to other developers to bend it in more interesting directions.

Soon enough, then, titles like Jedi Knight and Half-Life would push the genre once known as DOOM clones, now trading under the more sustainable name of the first-person shooter, in more sophisticated directions in terms of storytelling and atmosphere, without losing the essence of what made their progenitors so much fun. They will doubtless feature in future articles.

Next time, however, I want to continue to focus on the technology, as we turn to another way in which Quake was a rough draft for a better gaming future: months after its initial release, it became one of the first games to display the potential of hardware acceleration for 3D graphics, marking the beginning of a whole new segment of the microcomputer industry, one worth many billions of dollars today.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books Rocket Jump: Quake and the Golden Age of First-Person Shooters by David L. Craddock, The Graphics Programming Black Book by Michael Abrash, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Computer Graphics from Scratch: A Programmer’s Introduction by Gabriel Gambetta. PC Zone of May 1996; Computer Gaming World of July 1996 and October 1996; Wired of August 1996 and January 2010. Online sources include Michael Abrash’s “Ramblings in Realtime” for Blue’s News.

Quake is available as a digital purchase at GOG.com, as is Star Wars: Dark Forces. Duke Nukem 3D can be found on Steam.)


The Ratings Game, Part 3: Dueling Standards

When Sega, Nintendo, and the Software Publishers Association (SPA) announced just before the Senate hearing of December 9, 1993, that they had agreed in principle to create a standardized rating system for videogames, the timing alone marked it as an obvious ploy to deflect some of the heat that was bound to come their way later that day. At the same time, though, it was also more than a ploy: it was in fact the culmination of an effort that had been underway in some quarters of the industry for months already, one which had begun well before the good Senators Lieberman and Kohl discovered the horrors of videogame violence and sex. As Bill White of Sega was at pains to point out throughout the hearing, Sega had been seriously engaged with the question of a rating system for quite some time, and had managed to secure promises of support from a considerable portion of the industry. But the one entity that had absolutely rejected the notion was the very one whose buy-in was most essential for any overarching initiative of this sort: Nintendo. “Howard [Lincoln] was not going to be part of any group created by Sega,” laughs Dr. Arthur Pober, one of the experts whom Sega had consulted.

So, Sega decided to go it alone. Again as described by Bill White at the hearing, they rolled out a thoroughly worked-out rating system for any and all games on their platforms just in time for Mortal Kombat in September of 1993. It divided games into three categories: GA for general audiences, MA-13 for those age thirteen or older, and MA-17 for those age seventeen or older. An independent board of experts was drafted to assign each new game its rating without interference from Sega’s corporate headquarters; its chairman was the aforementioned Arthur Pober, a distinguished educational psychologist with decades of research experience about the role of media in children’s lives on his CV. Under his stewardship, Mortal Kombat wound up with an MA-13 rating; Night Trap, which had already been in stores for the better part of a year by that point, was retroactively assigned a rating of MA-17.

Although one might certainly quibble that these ratings reflected the American media establishment’s terror of sex and relatively blasé attitude toward violence, Sega’s rating system bore all the outward signs of being a good-faith exercise. At the very least it was, as White repeatedly stated at the hearing, a good first step, one that was taken before any of the real controversy even began.

The second step was of course Nintendo’s grudging acquiescence to the concept of a universal rating system on the day of the hearing — a capitulation whose significance should not be underestimated in light of the company’s usual attitude toward intra-industry cooperation, which might be aptly summarized as “our way or the highway.” And the third step came less than a month later, at the 1994 Winter Consumer Electronics Show, which in accordance with long tradition took place over the first week of the new year in Las Vegas.

Anyone wandering the floor at this latest edition of CES would have seen a digital-games industry that was more fiercely competitive than ever. Sega, celebrating a recent report that gave them for the first time a slight edge over Nintendo in overall market share, had several attention-grabbing new products on offer, including the latest of their hugely popular Sonic the Hedgehog games; the Activator, an early attempt at a virtual-reality controller; the CDX, a portable CD player that could also be used as a game console; and, most presciently of all, a partnership with AT&T to bring online multiplayer gaming, including voice communication, to the Genesis. Meanwhile Nintendo gave the first hints about what would see the light of day some 30 months later as the Nintendo 64. And other companies were still trying to muscle their way into the bifurcated milieu of the living-room consoles. Among them were Atari, looking for a second shot at videogame glory with their Jaguar console; Philips, still flogging the dead horse known as CD-I; and a well-financed new company known as 3DO, with a console that bore the same name. Many traditional makers of business-oriented computers were suddenly trying to reach many of the same consumers, through products like Compaq’s new home-oriented Presario line; even stodgy old WordPerfect was introducing a line of entertainment and educational software. Little spirit of cooperation was in evidence amidst any of this. With “multimedia” the buzzword of the zeitgeist, the World Wide Web looming on the near horizon, and no clarity whatsoever about what direction digital technology in the home was likely to take over the next few years, the competition in the space was as cutthroat as it had ever been.

And yet in a far less glitzy back room of the conference center, all of these folks and more met to discuss the biggest cooperative initiative ever proposed for their industry, prompted by the ultimatum they had so recently been given by Senators Lieberman and Kohl: “Come up with a rating system for yourself, or we’ll do it for you.” The meeting was organized by the SPA, which had the virtue of not being any of the arch-rival console makers, and was thus presumably able to evince a degree of impartiality. “Companies such as 3DO, Atari, Acclaim, id Software, and Apogee already have rating systems,” said Ken Wasch, the longstanding head of the SPA, to open the proceedings. “But a proliferation of rating systems is confusing to retailers and consumers alike. Even before this became an issue in the halls of Congress or in the media, there was a growing belief that we needed a single, easily recognizable system to rate and label our products.”

But the SPA lost control of the meeting almost from the moment Wasch stepped down from the podium. The industry was extremely fortunate that neither Senator Lieberman nor Kohl took said organization up on an invitation to attend in person. One participant remembers the meeting consisting mostly of “people sitting around a table screaming and carrying on.” Cries of “Censorship!” and “Screw ’em! We’ll make the games we want to make!” dominated for long stretches. Many regarded the very notion of a rating system as an unacceptable intrusion by holier-than-thou bureaucrats; they wanted to call what they insisted was the senators’ bluff, to force them to put up actual government legislation — legislation whose constitutionality would be highly questionable — or to shut up about it.

Yet such advocates of the principle of free speech over all other concerns weren’t the sum total of the problem. Even many of those who felt that a rating system was probably necessary were thoroughly unimpressed with the hosts of the meeting, and not much disposed to fall meekly into line behind them.

The hard reality was that the SPA had never been viewed as a terribly effectual organization. Formed in 1984 to be the voice of the computer-software industry — i.e., just after the Great Videogame Crash — it had occupied itself mostly with anti-piracy campaigns and an annual awards banquet in the years since. The return of a viable console marketplace in the form of the Nintendo Entertainment System and later the Sega Genesis had left it in an odd position. Most of the publishers of computer games who began moving some or all of their output to the consoles were members of the SPA, and through them the SPA itself got pulled into this brave new world. But there were certainly grounds to question whether the organization’s remit really ought to involve the console marketplace at all. Were the likes of Acclaim, the publisher of console-based videogames like Mortal Kombat, truly in the same business as such other SPA members as the business-software titans Microsoft and WordPerfect? Nintendo had always pointedly ignored the SPA; Sega had joined as a gesture of goodwill to their outside publishers who were also members, but hardly regarded it as a major part of their corporate strategy. In addition to being judged slow, bureaucratic, and uncreative, the SPA was regarded by everyone involved with the consoles as being much more invested in computer software of all stripes than in console-based videogames. And what with computer games representing in the best case fifteen percent of the overall digital-games market, that alone struck them as a disqualifier for spearheading an initiative like this one.

Electronic Arts, the largest of all of the American game publishers, was in an interesting position here. Founded in 1983 to publish games exclusively for computers, EA had begun moving onto consoles in a big way at the dawn of the 1990s, scoring hits there with such games as the first installments in the evergreen John Madden Football series. By the beginning of 1994, console games made up over two-thirds of their total business.

A senior vice president at EA by the name of Jack Heistand felt that an industry-wide rating system was “the right thing to do. I really believed in my heart that we needed to communicate to parents what the content was inside games.” Yet he also felt convinced from long experience that the SPA was hopelessly ill-equipped for a project of this magnitude, and the disheartening meeting which the SPA tried to lead at CES only cemented that belief. So, immediately after the meeting was over, he approached EA’s CEO Larry Probst with a proposal: “Let’s get all the [other] CEOs together to form an industry association. I will chair it.” Probst readily agreed.

Jack Heistand

The SPA was not included in this other, secret meeting, even though it convened at that same CES. Its participants rather included a representative from each of the five manufacturers of currently or potentially viable consoles: Sega, Nintendo, Atari, Philips, and 3DO. Rounding out their numbers were two videogame-software publishers: Acclaim Entertainment of Mortal Kombat fame and of course Electronic Arts. With none of the console makers willing to accept one of their rivals as chairman of the new steering committee, they soon voted to bestow the role upon Jack Heistand, just as he had planned it.

Sega, convinced of the worthiness of their own rating system, would have happily brought the entirety of the industry under its broad tent and been done with it, but this Nintendo’s pride would never allow. It became clear as soon as talks began, if it hadn’t been already, that whatever came next would have to be built from scratch. With Senators Lieberman and Kohl breathing down their necks, they would all have to find a way to come together, and they would have to do so quickly. The conspirators agreed upon an audacious timetable indeed: they wanted to have a rating system in place for all games that shipped after October 31, 1994 — just in time, in other words, for the next Christmas buying season. It was a tall order, but they knew that they would be able to force wayward game publishers to comply if they could only get their own house in order, thanks to the fact that all of the console makers in the group employed the walled-garden approach to software: all required licenses to publish on their platforms, meaning they could dictate which games would and would not appear there. They could thus force a rating system to become a ubiquitous reality simply by pledging not to allow any games on their consoles which didn’t include a rating.

On February 3, 1994, Senator Lieberman introduced the “Video Game Rating Act” to the United States Senate, stipulating that an “Interactive Entertainment Rating Commission” should be established, with five members appointed by President Bill Clinton himself; this temporary commission would be tasked with founding a new permanent governmental body to do what the industry had so far not been willing to do for itself. Shortly thereafter, Representative Tom Lantos, a Democrat from California, introduced parallel legislation in the House. Everyone involved made it clear, however, that they would be willing to scrap their legislation if the industry could demonstrate to their satisfaction that it was now addressing the problem itself. Lieberman, Kohl, and Lantos were all pleased when Sega dropped Night Trap from their product line as a sort of gesture of good faith; the controversial game had never been a particularly big seller, and had now become far more trouble than it was worth. (Mortal Kombat, on the other hand, was still posting sales that made it worth the controversy…)

On March 4, 1994, three representatives of the videogame industry appeared before Lieberman, Kohl, and Lantos at a hearing that was billed as a “progress report.” The only participant in the fractious hearing of three months before who returned for this one was Howard Lincoln of Nintendo, who had established something of a rapport with Senator Lieberman on that earlier occasion. Sega kept Bill White, who most definitely had not, well away, sending instead a white-haired senior vice president named Edward Volkwein. But most of the talking was done by the industry’s third representative, Jack Heistand. His overriding goal was to convince the lawmakers that he and his colleagues were moving as rapidly as possible toward a consistent industry-wide rating system, and should be allowed the balance of the year to complete their work before any legislation went forward. He accordingly emphasized over and over that ratings would appear on the boxes of all new videogames released after October 31.

The shift in tone from the one hearing to the next was striking; this one was a much more relaxed, even collegial affair than last time out. Lieberman, Kohl, and Lantos all praised the industry’s efforts so far, and kept the “think of the children!” rhetoric to a minimum in favor of asking practical questions about how the rating system would be implemented. “I don’t need to get into that argument again,” said Senator Lieberman when disagreements over the probability of a linkage between videogame violence and real-world aggression briefly threatened to ruin the good vibe in the room.

“I think you’re doing great,” said Senator Kohl at the end of the hearing. “It’s a wonderful start. I really am very pleased.” Mission accomplished: Heistand had bought himself enough time to either succeed or fail before the heavy hand of government came back on the scene.



Heistand’s remit was rapidly growing into something much more all-encompassing than just a content-rating board. To view his progress was to witness nothing less than an industry waking up to its shared potential and its shared problems. As I’ve already noted, the videogame industry as a whole had long been dissatisfied with its degree of representation in the SPA, as well as with the latter’s overall competence as a trade organization. This, it suddenly realized, was a chance to remedy that. Why not harness the spirit of cooperation that was in the air to create an alternative to the SPA that would focus solely on the needs of videogame makers? Once that was done, this new trade organization could tackle the issue of a rating system as just the first of many missions.

The International Digital Software Association (IDSA) was officially founded in April of 1994. Its initial members included Acclaim, Atari, Capcom, Crystal Dynamics, Electronic Arts, Konami, Nintendo, Philips, Sega, Sony, Viacom, and Virgin, companies whose combined sales made up no less than 60 percent of the whole videogame industry. Its founding chairman was Jack Heistand, and its first assigned task was the creation of an independent Entertainment Software Rating Board (ESRB).

Heistand managed to convince Nintendo and the others to accept the man who had chaired Sega’s ratings board for the same role in the industry-wide system. Arthur Pober had a reputation for being, as Heistand puts it, “very honorable. A man of integrity.” “Arthur was the perfect guy,” says Tom Kalinske, then the president and CEO of Sega of America. “He had good relationships inside of the education world, inside of the child-development world, and knew the proper child psychologists and sociologists. Plus, we knew he could do it — because he had already done it for us!”

Neutral parties like Pober helped to ease some of the tension that inevitably sprang up any time so many fierce competitors were in the room together. Heistand extracted a promise from everyone not to talk publicly about their work here — a necessary measure given that Howard Lincoln and Tom Kalinske normally used each and every occasion that offered itself to advance their own company and disparage their rival. (Witness Lincoln’s performance at the hearing of December 9…)

Over the course of the next several months, the board hammered out a rating system that was more granular and detailed than the one Sega had been using. It divided games into five rather than three categories: “Early Childhood” (EC) for children as young as age three; “Kids to Adults” (K-A) for anyone six years of age or older; “Teen” (T) for those thirteen or older; “Mature” (M) for those seventeen or older; and “Adults Only” (AO) for those eighteen or older. It was not a coincidence that these ratings corresponded fairly closely to the movie industry’s ratings of G, PG, PG-13, R, and NC-17. A team of graphic artists came up with easily recognizable icons for each of the categories — icons which proved so well-designed for their purpose that most of them are still used to this day.

The original slate of ESRB icons. Since 1994, remarkably few changes have been made: the “Kids to Adults” category has been renamed “Everyone,” and a sixth category of games suitable for those ten years and older, known in the rating system’s nomenclature as “Everyone 10+,” has been added.

The ESRB itself was founded as a New York-based non-profit. Each game would be submitted to it in the form of a videotape of 30 to 40 minutes in length, which had to contain the game’s most “extreme” content. The board would then assign the game to one of its teams of three reviewers, all of whom were trained and overseen by the ESRB under the close scrutiny of Arthur Pober. The reviewers were allowed to have no financial or personal ties to the videogame industry, and were hired with an eye to demographic diversity: an example which Heistand gave of an ideal panel consisted of a retired black male elementary-school principal, a 35-year-old white full-time mother of two, and a 22-year-old white male law student. A measure of checks and balances was built into the process: publishers would have the chance to appeal ratings with which they disagreed, and all rated games would have to pass a final audit a week before release to ensure that the videotape which had been submitted had been sufficiently representative of the overall experience. The ESRB aimed to begin accepting videotapes on September 1, 1994, in keeping with the promise that all games released after October 31 would have a rating on the box. Everything was coming together with impressive speed.

But as Heistand prepared to return to Washington to report all of this latest progress on July 29, 1994, there remained one part of the games industry which had not fallen into line. The SPA was not at all pleased by the creation of a competing trade association, nor by having the rug pulled out from under its own rating initiative. And the computer-game makers among its members didn’t face the same compulsion to accept the ESRB’s system, given that they published on open platforms with no gatekeepers.



The relationship between computer games and their console-based brethren had always been more complicated than outsiders such as Senators Lieberman and Kohl were wont to assume. While the degree of crossover between the two had always been considerable, computer gaming had been in many ways a distinct form of media in its own right since the late 1970s. Computer-game makers claimed that their works were more sophisticated forms of entertainment, with more variety in terms of theme and subject matter and, in many cases, more complex and cerebral forms of gameplay on offer. They had watched the resurrection of the console marketplace with as much dismay as joy, being unimpressed by what many of them saw as the dumbed-down “kiddie aesthetic” of Nintendo and the stultifying effect which the consoles’ walled gardens had on creativity; there was a real feeling that the success of Nintendo and its ilk had come at the cost of a more diverse and interesting future for interactive entertainment as a whole. Perhaps most of all, computer-game makers and their older-skewing demographic of players profoundly resented the wider culture’s view of digital games of any stripe as essentially children’s toys, to be regulated in the same way that one regulated Barbie dolls and Hot Wheels cars. These resentments had not disappeared even as many of the larger traditional computer-game publishers, such as EA, had been tempted by the booming market for console-based videogames into making products for those systems as well.

Johnny L. Wilson, the editor-in-chief of Computer Gaming World magazine, voiced in an editorial the objections which many who made or played computer games had to the ESRB:

[The ESRB rating system] has been developed by videogame manufacturers and videogame publishers without significant input by computer-based publishers. The lone exception to this rule is Electronic Arts, which publishes personal-computer titles but nets more than two-thirds of its proceeds from videogame sales. The plan advocated by this group of videogame-oriented companies calls for every game to be viewed by an independent panel prior to release. This independent panel would consist of parents, child psychologists, and educators.

How does this hurt you? This panel is not going to understand that you are a largely adult audience. They are not going to perceive that there is a marketplace of mature gamers. Everything they evaluate will be examined under the rubric, “Is it good for children?” As a result, many of the games covered in Computer Gaming World will be rated as unsuitable for children, and many retailers will refuse to handle these games because they perceive themselves as family-oriented stores and cannot sell unsuitable merchandise.

The fate of Night Trap, an unusually “computer-like” console game, struck people like Wilson as an ominous example of how rating games could lead to censoring them.

Honestly held if debatable opinions like the above, combined perhaps with pettier resentments about the stratospheric sales of console games in comparison to those that ran on computers, and about its own sidelining by the IDSA, led the SPA to reject the ESRB and to announce the formation of its own ratings board just for computer games. It was to be called the Recreational Software Advisory Council (RSAC), and its founding president was to be Robert Roden, the general counsel and director of business affairs for the computer-game publisher LucasArts. This choice of an industry insider rather than an outside expert like Arthur Pober reflected much of what was questionable about the alternative rating initiative.

Indeed, although much of the reasoning used to justify a competing standard was cogent enough, the RSAC’s actual plan for its rating process was remarkable mostly for how comprehensively it failed to address the senators’ most frequently stated concerns about any self-imposed rating standard. Instead of asking publishers to submit videotapes of gameplay for review by an independent panel, the RSAC merely provided them with a highly subjective questionnaire to fill out; in effect, it allowed them to “self-rate” their own games. And, in a reflection of computer-game makers’ extreme sensitivity to any insinuation that their creations were just kids’ stuff, the RSAC rejected outright any form of age-based content rating. Age-based rating systems were “patronizing,” claimed the noted RSAC booster Johnny L. Wilson, because “different people of widely disparate ages have different perceptions of what is appropriate.” In lieu of sorting ratings by age groups, the RSAC would use descriptive labels stipulating the amount and type of violence, sex, and profanity, with each being ranked on a scale from zero to four.

The movie industry’s rating system was an obvious counterexample to this idea that age-based classification must necessarily entail the infantilization of art; certainly cinema still enjoyed vastly more cultural cachet than computer games, despite its own longstanding embrace of just such a system. But the computer-game makers were, it would seem, fairly blinded by their own insecurities and resentments.

A representative of the SPA named Mark Traphagen was invited to join Jack Heistand at the hearing of July 29 in order to make the case for the RSAC’s approach to rating computer games. The hearing began in an inauspicious fashion for him. Senator Lieberman, it emerged during opening statements, had discovered id Software’s hyper-violent computer game DOOM in the interim since the last hearing. This occasion thus came to mark the game’s coming-out party on the national stage. For the first but by no means the last time, a politician showed a clip of it in action, then lit into what the audience had just seen.

What you see there is an individual with a successive round of weapons — a handgun, machine gun, chainsaw — just continuing to attack targets. The bloodshed, the gunfire, and the increasingly realistic imagery combine to create a game that I would not want my daughter or any other child to see or to play.

What you have not seen is some of the language that is displayed onscreen when the game is about to be played. “Act like a man!” the player is told. “Slap a few shells into your shotgun and let’s kick some demonic butt! You’ll probably end up in Hell eventually. Shouldn’t you know your way around before you make an extended visit?”

Well, some may say this is funny, but I think it sends the wrong message to our kids. The game’s skill levels include “I’m Too Young To Die” and “Hurt Me Plenty.” That obviously is not the message parents want their kids to hear.

Mark Traphagen received quite a grilling from Lieberman for the patent failings of the RSAC self-rating system. He did the best he could, whilst struggling to educate his interrogators on the differences between computer and console games. He stipulated that the two were in effect different industries entirely — despite the fact that many software publishers were, as we’ve seen, active in both. This was an interesting stand to take, not least in the way that it effectively ceded the ground of console-based software to the newly instituted IDSA, in the hope that the SPA could hang onto computer games.

Traphagen: Despite popular misconceptions and their admitted similarities to consumers, there are major differences between the personal-computer-software industry and the videogame industry. While personal-computer software and videogame software may be converging toward the compact disc as the preferred storage medium, those of us who develop and publish entertainment software see no signs of a convergence in either product development or marketing.

The personal-computer-software industry is primarily U.S.-based, small to medium in size, entrepreneurial, and highly innovative. Like our plan to rate software, it is based on openness. Its products run on open-platform computers and can be produced by any of thousands of companies of different sizes, without restrictive licensing agreements. There is intense competition between our industry and the videogame industry, marked by the great uncertainty about whether personal computers or some closed platform will prevail in the forthcoming “information superhighway.”

Senator Lieberman: Maybe you should define what a closed platform is in this regard.

Traphagen: A closed platform, Senator, is one in which the ability to create software that will run on that particular equipment is controlled by licensing agreements. In order to create software that will run on those platforms, one has to have the permission and consent of the equipment manufacturer.

Senator Lieberman: And give us an example of that.

Traphagen: A closed platform would be a videogame player.

Senator Lieberman: Such as a Sega or Nintendo?

Traphagen: That is right. In contrast, personal computers are an open platform in which any number of different companies can simply buy a development package at a retailer or a specialty store and then create software that will operate on the computer.

Traphagen explained the unwillingness of computer-game makers to fall under the thumb of the IDSA by comparing them to indie film studios attempting to negotiate the Hollywood machine. Yet he was able to offer little in defense of the RSAC’s chosen method of rating games. He made the dubious claim that creating a videotape for independent evaluation would be too technically burdensome on a small studio, and had even less to offer when asked what advantage accrued to not rating games by suitable age groups: “I do not believe there is an advantage, Senator. There was simply a decision that was taken that the ratings would be as informative as possible, without being judgmental.”

Some five weeks after this hearing, the RSAC would hold a press conference in Dallas, Texas, the home of id Software of DOOM fame. In fact, that game was used to illustrate how the rating system would work. Even some of the more sanguine members of the gaming press were surprised when it received a rating of just three out of four for violence. The difference maker, the RSAC representatives explained, was the fact that DOOM‘s violence wasn’t “gratuitous”; the monsters were trying to kill you, so you had no choice but to kill them. One has to presume that Senators Lieberman and Kohl would not have been impressed, and that Mark Traphagen was profoundly thankful that the press conference occurred after his appearance before them.

Even as it was, the senators’ skepticism toward the RSAC’s rating system at the hearing stood out all the more in contrast to their reception of the ESRB’s plan. The relationship between Senator Lieberman and Jack Heistand had now progressed from the cordial to the downright genial; the two men, now on a first-name basis, even made room for some banter on Heistand’s abortive youthful attempts to become a rock star. The specter of government legislation was never even raised to Heistand. It was, needless to say, a completely different atmosphere from the one of December 9. When the hearing was finished, both sides sent out press notices praising the wisdom and can-do spirit of the other in glowing terms.

But much of the rest of the games industry showed far less good grace. As the summer became the fall and it became clear that game ratings really were happening, the rants began, complete with overheated references to Fahrenheit 451 and all of the other usual suspects. Larry O’Brien, the editor of the new Game Developer magazine, made his position clear in the first line of his editorial: “Rating systems are crap.”

With the entire entertainment industry rolling over whenever Congress calls a hearing, it’s fallen on us to denounce these initiatives for what they are: cynical posturing and electioneering with no substance. Rating systems, whether for movies, television, videogames, or any other form of communication, don’t work, cost money, and impede creativity. Everyone at those hearings, politicians and witnesses alike, knows that. But there’s nothing politicians love more than “standing up for the family” and blaming America’s cultural violence on Hollywood. So the entertainment industry submissively pisses all over itself and proposes “voluntary” systems from the pathetic to the laughable.

Parents should decide. If parents don’t want their kids to play X-COM or see Terminator 2, they should say no and put up with the ensuing argument. They don’t need and shouldn’t get a rating system to supplement their authority. The government has no right to help parents say no at the video store if that governmental interference impedes your right to develop whatever content you feel appropriate.

We all have responsibilities. To create responsibly, to control the viewing and gaming habits of our own children, and to call the government’s ratings initiatives what they are: cynical, ineffective, and wrong-headed.

The libertarian-leaning Wired magazine, that voice of cyber-futurism, published a jeremiad from Rogier Van Bakel that was equally strident.

Violent games such as DOOM, Night Trap, and Mortal Kombat are corrupting the minds and morals of millions of American children. So what do you do? Easy.

You elect people like Herb Kohl and Joe Lieberman to the US Senate. You applaud them when they tell the videogame industry that it’s made up of irrepressible purveyors of gratuitous gore and nefarious nudity. You nod contentedly when the senators give the industry an ultimatum: “Either you start rating and stickering your games real soon, or we, the government, will do it for you.”

You are pleasantly surprised by the industry’s immediate white flag: a rating system that is almost as detailed as the FDA-mandated nutrition information on a can of Campbell’s. You contend that that is, in fact, a perfect analogy: all you want, as a consumer, is honest product labeling. Campbell’s equals Sega equals Kraft equals 3DO.

Finally, you shrug when someone remarks that it may not be a good idea to equate soup with freedom of speech.

All that was needed now was a good conspiracy theory. Karen Crowther, a spokesperson for makers of shareware computer games, helpfully provided one when she said that the government had gotten “hoodwinked by a bunch of foreign billion-dollar corporations (such as Sony, Nintendo, and Sega) out to crush their US competition.”

Robert Peck, a lawyer for the American Civil Liberties Union, flirted with a legal challenge:

This [rating] system is a response to the threat of Senators Lieberman and Kohl that they would enact legislation requiring labels unless the industry did something to preempt them. The game manufacturers are being required to engage in speech that they would otherwise not engage in. These ratings have the government’s fingerprints all over them.

This present labeling system isn’t going to be the end of it. I think some games are going to be negatively affected, sales-wise, and the producers of those games will probably bring a lawsuit. We will then see that this system will be invalidated.

The above bears a distinct whiff of legalistic wishful thinking; none of it came to pass.

While voices like these ranted and raved, Jack Heistand, Arthur Pober, and their associates buckled down soberly to the non-trivial task of putting a rating on all new console-based videogames that holiday season, and succeeded in doing so with an efficiency that one has to admire, regardless of one’s position on the need for such a system. Once the initial shock to the media ecosystem subsided, even some of the naysayers began to see the value in the ESRB’s work.

Under the cover of the rating system, for example, Nintendo felt able to relax many of their strict “family-friendly” content policies. The second “Mortal Monday,” heralding the release of Mortal Kombat II on home consoles, came in September of 1994, before the ESRB’s icons had even started to appear on games. Nevertheless, Nintendo improvised a stopgap badge labeling the game unsuitable for those under the age of seventeen, and felt protected enough by it to allow the full version of the coin-op original on their platform this time, complete with even more blood and gore than its predecessor. It was an early sign that content ratings might, rather than leading game makers to censor themselves, give them a feeling of carte blanche to be more extreme.

By 1997, Game Developer was no longer railing against the very idea of a rating system, but was fretting instead over whether the ESRB’s existing approach was looking hard enough at the ever more lifelike violence made possible by the latest graphics hardware. The magazine worried about unscrupulous publishers submitting videotapes that did not contain their games’ most extreme content, and the ESRB failing to catch on to this as games continued to grow larger and larger: “The ESRB system uses three (count ’em, three) ‘demographically diverse’ people to rate a game. (And I thought television’s Nielsen rating system used a small sample set.) As the stakes go up in the ratings game, the threat of a publisher abusing our rating system grows larger and larger.”

Meanwhile the RSAC strolled along in a more shambolic manner, stickering games here and there, but never getting anything close to the complete buy-in from computer-game publishers that the ESRB received from console publishers. These respective patterns held throughout the five years in which the dueling standards existed.

In the end, in other words, the computer-game people got what they had really wanted all along: a continuing lack of any concerted examination of the content of their works. Some computer games did appear with the ESRB icons on their boxes, others with the RSAC schemas, but plenty more bothered to include no content guidance at all. Satisfied for the time being with the ESRB, Senators Lieberman and Kohl didn’t call any more hearings, allowing the less satisfying RSAC system to slip under the radar along with the distinct minority of digital games to which it was applied, even as computer games like Duke Nukem 3D raised the bar for violence far beyond the standard set by DOOM. The content of computer games wouldn’t suffer serious outside scrutiny again until 1999, the year that a pair of rabid DOOM and Duke Nukem fans shot up their high school in Columbine, Colorado, killing thirteen teachers and students and injuring another 24. But that is a tragedy and a controversy for a much, much later article…

(Sources: the books Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, The Ultimate History of Video Games by Steven L. Kent, and Game Over: How Nintendo Conquered the World by David Sheff; Game Developer of September 1994, December 1994, August/September 1995, September 1997, and January 1998; Computer Gaming World of June 1994, December 1994, May 1996, and July 1999; Electronic Entertainment of November 1994 and January 1995; Mac Addict of January 1996; Sierra’s newsletter InterAction of Spring 1994; Washington Post of July 29 1994; the article “Regulating Violence in Video Games: Virtually Everything” by Alex Wilcox in the Journal of the National Association of Administrative Law Judiciary, Volume 31, Issue 1; the United States Senate Committee on the Judiciary’s publication Rating Video Games: A Parent’s Guide to Games; the 1994 episode of the television show Computer Chronicles entitled “Consumer Electronics Show.” Online sources include Blake J. Harris’s “Oral History of the ESRB” at VentureBeat and C-SPAN’s coverage of the Senate hearings of December 9 1993, March 4 1994, and July 29 1994.)

 


The Shareware Scene, Part 5: Narratives of DOOM

Let me begin today by restating the obvious: DOOM was very, very popular, probably the most popular computer game to date.

That “probably” has to stand there because DOOM‘s unusual distribution model makes quantifying its popularity frustratingly difficult. It’s been estimated that id sold 2 to 3 million copies of the paid episodes of the original DOOM. The boxed-retail-only DOOM II may have sold a similar quantity; it reportedly became the third best-selling boxed computer game of the 1990s. But these numbers, impressive as they are in their own right, leave out not only the ever-present reality of piracy but also the free episode of DOOM, which was packaged and distributed in such an unprecedented variety of ways all over the world. Players of it likely numbered well into the eight digits.

Yet if the precise numbers associated with the game’s success are slippery, the cultural impact of the game is easier to get a grip on. The release of DOOM marks the biggest single sea change in the history of computer gaming. It didn’t change gaming instantly, mind you — a contemporaneous observer could be forgiven for assuming it was still largely business as usual a year or even two years after DOOM‘s release — but it did change it forever.

I should admit here and now that I’m not entirely comfortable with the changes DOOM brought to gaming. In fact, for a long time, when I was asked when I thought I might bring this historical project to a conclusion, I pointed to the arrival of DOOM as perhaps the most logical place to hang it up. I trust that most of you will be pleased to hear that I no longer feel so inclined, but I do recognize that my feelings about DOOM are, at best, conflicted. I can’t help but see it as at least partially responsible for a certain coarsening in the culture of gaming that followed it. I can muster respect for the id boys’ accomplishment, but no love. Hopefully the former will be enough to give the game its due.

As the title of this article suggests, there are many possible narratives to spin about DOOM‘s impact. Sometimes the threads are contradictory — sometimes even self-contradictory. Nevertheless, let’s take this opportunity to follow a few of them to wherever they lead us as we wrap up this series on the shareware movement and the monster it spawned.


3D 4EVA!

The least controversial, most incontrovertible aspect of DOOM‘s impact is its influence on the technology of games. It was nothing less than the coming-out party for 3D graphics as a near-universal tool — this despite the fact that 3D graphics had been around in some genres, most notably vehicular simulations, almost as long as microcomputer games themselves had been around, and despite the fact that DOOM itself was far from a complete implementation of a 3D environment. (John Carmack wouldn’t get all the way to that goal until 1996’s Quake, the id boys’ anointed successor to DOOM.) As we’ve seen already, Blue Sky Productions’s Ultima Underworld actually offered the complete 3D implementation which DOOM lacked twenty months before the latter’s arrival.

But as I also noted earlier, Ultima Underworld was complex, a little esoteric, hard to come to terms with at first sight. DOOM, on the other hand, took what the id boys had started with Wolfenstein 3D, added just enough additional complexity to make it into a more satisfying game over the long haul, topped it off with superb level design that took full advantage of all the new affordances, and rammed it down the throat of the gaming mainstream with all the force of one of its coveted rocket launchers. The industry never looked back. By the end of the decade, it would be hard to find a big boxed game that didn’t use 3D graphics.

Many if not all of these applications of 3D were more than warranted: the simple fact is that 3D lets you do things in games that aren’t possible any other way. Other forms of graphics consist at bottom of fixed, discrete patterns of colored pixels. These patterns can be moved about the screen — think of the sprites in a classic 2D videogame, such as Nintendo’s Super Mario Bros. or id’s Commander Keen — but their forms cannot be altered with any great degree of flexibility. And this in turn limits the degree to which the world of a game can become an embodied, living place of emergent interactions; it does no good to simulate something in the world model if you can’t represent it on the player’s screen.

3D graphics, on the other hand, are stored not as pixels but as a sort of architectural plan of an imaginary 3D space, expressed in the language of mathematics. The computer then extrapolates from said plan to render the individual pixels on the fly in response to the player’s actions. In other words, the world and the representation of the world are stored as one in the computer’s memory. This means that things can happen there which no artist ever anticipated. 3D allowed game makers to move beyond hand-crafted fictions and set-piece puzzles to begin building virtual realities in earnest. Not for nothing did many people refer to DOOM-like games in the time before the term “first-person shooter” was invented as “virtual-reality games.”
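If you like to see the principle reduced to its barest bones, here is a trivial sketch of my own in C, emphatically not anything from id's codebase: a "world" of four vertices stored as plain coordinates, with the on-screen positions recomputed from scratch for every new camera position.

```c
/* A toy illustration (not id's code): a "world" stored as a handful of 3-D
   vertices, with screen coordinates computed on the fly for each camera
   position. The same stored data yields a different picture every frame. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* One corner post of an imaginary room, described once, mathematically. */
static const Vec3 pillar[4] = {
    { 1.0f, 0.0f, 5.0f }, { 1.0f, 2.0f, 5.0f },
    { 1.5f, 0.0f, 5.0f }, { 1.5f, 2.0f, 5.0f }
};

/* Simple perspective projection: a vertex's place on screen depends on how
   far in front of the camera it currently sits. */
static void project(Vec3 v, float cam_z, float focal, float *sx, float *sy) {
    float depth = v.z - cam_z;              /* distance in front of the camera */
    *sx = focal * v.x / depth;
    *sy = focal * v.y / depth;
}

int main(void) {
    for (float cam_z = 0.0f; cam_z < 3.0f; cam_z += 1.0f) {  /* walk forward */
        printf("camera at z=%.0f:\n", cam_z);
        for (int i = 0; i < 4; i++) {
            float sx, sy;
            project(pillar[i], cam_z, 160.0f, &sx, &sy);
            printf("  vertex %d projects to screen (%.1f, %.1f)\n", i, sx, sy);
        }
    }
    return 0;
}
```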

Ironically, others showed more interest than the id boys themselves in probing the frontiers of formal possibility thus opened. While id continued to focus purely on ballistics and virtual violence in their extended series of Quake games after making DOOM, Looking Glass Technologies — the studio which had previously been known as Blue Sky Productions — worked many of the innovations of Ultima Underworld and DOOM alike into more complex virtual worlds in games like System Shock and Thief. Nevertheless, DOOM was the proof of concept, the game which demonstrated indubitably to everyone that 3D graphics could provide amazing experiences which weren’t possible any other way.

From the standpoint of the people making the games, 3D graphics had another massive advantage: they were also cheaper than the alternative. When DOOM first appeared in December of 1993, the industry was facing a budgetary catch-22 with no obvious solution. Hiring armies of artists to hand-paint every screen in a game was expensive; renting or building a sound stage, then hiring directors and camera people and dozens of actors to provide hours of full-motion-video footage was even more so. Players expected ever bigger, richer, longer games, which was intensely problematic when every single element in their worlds had to be drawn or filmed by hand. Sales were increasing at a steady clip by 1993, but they weren’t increasing quickly enough to offset the spiraling costs of production. Even major publishers like Sierra were beginning to post ugly losses on their bottom lines despite their increasing gross revenues.

3D graphics had the potential to fix all that, practically at a stroke. A 3D world is, almost by definition, a collection of interchangeable parts. Consider a simple item of furniture, like, say, a desk. In a 2D world, every desk must be laboriously hand-drawn by an artist in the same way that a traditional carpenter planes and joins the wood for such a thing in a workshop. But in a 3D world, the data constituting the basic form of “desk” can be inserted in a matter of seconds; desks can now make their way into games with the same alacrity with which they roll off of an IKEA production line. But you say that you don’t want every desk in your world to look exactly the same? Very well; it takes just a few keystrokes to change the color or wood grain or even the size of your desk, or to add or take away a drawer. We can arrive at endless individual implementations of “desk” from our Platonic ideal with surprising speed. Small wonder that, when the established industry was done marveling at DOOM‘s achievements in terms of gameplay, the thing they kept coming back to over and over was its astronomical profit margins. 3D graphics provided a way to make games make money again.
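Again strictly by way of illustration (no real production pipeline is quite this naive), here is what that Platonic desk might look like once reduced to code: a few parameters from which every corner of the mesh is generated, so that a wider desk is one changed number rather than a freshly drawn picture.

```c
/* Illustrative only: one "Platonic" box mesh generated from three numbers.
   Change the parameters and every vertex follows; nothing is redrawn by hand. */
#include <stdio.h>

static void make_box(float w, float d, float h, float verts[8][3]) {
    int i = 0;
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            for (int z = 0; z <= 1; z++) {
                verts[i][0] = x * w;   /* width  */
                verts[i][1] = y * h;   /* height */
                verts[i][2] = z * d;   /* depth  */
                i++;
            }
}

int main(void) {
    float desk[8][3], wide_desk[8][3];
    make_box(1.2f, 0.6f, 0.75f, desk);       /* an ordinary desk            */
    make_box(2.0f, 0.6f, 0.75f, wide_desk);  /* the same ideal, just wider  */
    printf("ordinary desk, far corner: (%.2f, %.2f, %.2f)\n",
           desk[7][0], desk[7][1], desk[7][2]);
    printf("wide desk,     far corner: (%.2f, %.2f, %.2f)\n",
           wide_desk[7][0], wide_desk[7][1], wide_desk[7][2]);
    return 0;
}
```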

So, 3D offered worlds with vastly more emergent potential, made at a greatly reduced cost. There had to be a catch, right?

Alas, there was indeed. In many contexts, 3D graphics were right on the edge of what a typical computer could do at all in the mid-1990s, much less do with any sort of aesthetic appeal. Gamers would have to accept jagged edges, tearing textures, and a generalized visual crudity in 3D games for quite some time to come. A freeze-frame visual comparison with the games the industry had been making immediately before the 3D revolution did the new ones no favors: the games coming out of studios like Sierra and LucasArts had become genuinely beautiful by the early 1990s, thanks to those companies’ rooms full of dedicated pixel artists. It would take a considerable amount of time before 3D games would look anywhere near this nice. One can certainly argue that 3D was in some fairly fundamental sense necessary for the continuing evolution of game design, that this period of ugliness was one that the industry simply needed to plow through in order to emerge on the other side with a whole new universe of visual and emergent possibility to hand. Still, people mired in the middle of it could be forgiven for asking whether, from the evidence of screenshots alone, gaming technology wasn’t regressing rather than progressing.

But be that as it may, the 3D revolution ushered in by DOOM was here to stay. People would just have to get used to the visual crudity for the time being, and trust that eventually things would start to look better again.


Playing to the Base

There’s an eternal question in political and commercial marketing alike: do you play to the base, or do you try to reach out to a broader spectrum of people? The former may be safer, but raises the question of how many more followers you can collect from the same narrow slice of the population; the latter tempts you with the prospect of countless virgin souls waiting to embrace you, but is far riskier, with immense potential to backfire spectacularly if you don’t get the message and tone just right. This was the dichotomy confronting the boxed-games industry in the early 1990s.

By 1993, the conventional wisdom inside the industry had settled on the belief that outreach was the way forward. This dream of reaching a broader swath of people, of becoming as commonplace in living rooms as prime-time dramas and sitcoms, was inextricably bound up with the technology of CD-ROM, what with its potential to put footage of real human actors into games alongside spoken dialog and orchestral soundtracks. “What we think of today as a computer or a videogame system,” wrote Ken Williams of Sierra that year, “will someday assume a much broader role in our homes. I foresee a day when there is one home-entertainment device which combines the functions of a CD-audio player, VCR, videogame system, and computer.”

And then along came DOOM with its stereotypically adolescent-male orientation, along with sales numbers that threatened to turn the conventional wisdom about how well the industry could continue to feed off the same old demographic on its head. About six months after DOOM‘s release, when the powers that were were just beginning to grapple with its success and what it meant to each and every one of them, Alexander Antoniades, a founding editor of the new Game Developer magazine, more fully articulated the dream of outreach, as well as some of the doubts that were already beginning to plague it.

The potential of CD-ROM is tremendous because it is viewed as a superset not [a] subset of the existing computer-games industry. Everyone’s hoping that non-technical people who would never buy an Ultima, flight simulator, or DOOM will be willing to buy a CD-ROM game designed to appeal to a wider audience — changing the computer into [an] interactive VCR. If these technical neophytes’ first experience is a bad one, for $60 a disc, they’re not going to continue making the same mistake.

It will be this next year, as these consumers make their first CD-ROM purchases, that will determine the shape of the industry. If CD-ROM games are able to vary more in subject matter than traditional computer games, retain their platform independence, and capture new demographics, they will attain the status of a new platform [in themselves]. If not, they will just be another means to get product to market and will be just another label on the side of a box.

The next couple of years did indeed become a de-facto contest between these two ideas of gaming’s future. At first, the outreach camp could point to some notable successes on a scale similar to that of DOOM: The 7th Guest sold over 2 million copies, Myst sold an extraordinary 6 million or more. Yet the reality slowly dawned that most of those outside the traditional gaming demographic who purchased those games regarded them as little more than curiosities; most evidence would seem to indicate that they were never seriously played to a degree commensurate with their sales. Meanwhile the many similar titles which the industry rushed out in the wake of these success stories almost invariably became commercial disappointments.

The problems inherent in these multimedia-heavy “interactive movies” weren’t hard to see even at the time. In the same piece from which I quoted above, Alexander Antoniades noted that too many CD-ROM productions were “the equivalent of Pong games with captured video images of professional tennis players and CD-quality sounds of bouncing balls.” For various reasons — the limitations inherent in mixing and matching canned video clips; the core limitations of the software and hardware technology; perhaps simply a failure of imagination — the makers of too many of these extravaganzas never devised new modes of gameplay to complement their new modes of presentation. Instead they seemed to believe that the latter alone ought to be enough. Too often, these games fell back on rote set-piece puzzle-solving — an inherently niche activity even if done more creatively than we often saw in these games — for lack of any better ideas for making the “interactive” in interactive movies a reality. The proverbial everyday person firing up the computer-cum-stereo-cum-VCR at the end of a long workday wasn’t going to do so in order to watch a badly acted movie gated with frustrating logic puzzles.

While the multimedia came first with these productions, games of the DOOM school flipped that script. As the years went on and they too started to ship on the now-ubiquitous medium of CD-ROM, they too picked up cut scenes and spoken dialog, but they never suffered the identity crisis of their rivals; they knew that they were games first and foremost, and knew exactly what forms their interactivity should take. And most importantly from the point of view of the industry, these games sold. Post-1996 or so, high-concept interactive movies were out, as was most serious talk of outreach to new demographics. Visceral 3D action games were in, along with a doubling-down on the base.

To blame the industry’s retrenchment — its return to the demographically tried-and-true — entirely on DOOM is a stretch. Yet DOOM was a hugely important factor, standing as it did as a living proof of just how well the traditional core values of gaming could pay. The popularity of DOOM, combined with the exercise in diminishing commercial returns that interactive movies became, did much to push the industry down the path of retrenchment.

The minor tragedy in all this was not so much the end of interactive movies, given what intensely problematic endeavors they so clearly were, but rather that the latest games’ vision proved to be so circumscribed in terms of fiction, theme, and mechanics alike. By late in the decade, they had brought the boxed industry to a place of dismaying homogeneity; the values of the id boys had become the values of computer gaming writ large. Game fictions almost universally drew from the same shallow well of sci-fi action flicks and Dungeons & Dragons, with perhaps an occasional detour into military simulation. A shocking proportion of the new games being released fell into one of just two narrow gameplay genres: the first-person shooter and the real-time-strategy game.

These fictional and ludic genres are not, I hasten to note, illegitimate in themselves; I’ve enjoyed plenty of games in all of them. But one craves a little diversity, a more vibrant set of possibilities to choose from when wandering into one’s local software store. It would take a new outsider movement coupled with the rise of convenient digital distribution in the new millennium to finally make good on that early-1990s dream of making games for everyone. (How fitting that shaking loose the stranglehold of DOOM‘s progeny would require the exploitation of another alternative form of distribution, just as the id boys exploited the shareware model…)


The Murder Simulator

DOOM was mentioned occasionally in a vaguely disapproving way by mainstream media outlets immediately after its release, but largely escaped the ire of the politicians who were going after games like Night Trap and Mortal Kombat at the time; this was probably because its status as a computer rather than a console game led to its being played in bedrooms rather than living rooms, free from the prying eyes of concerned adults. It didn’t become the subject of a full-blown moral panic until weirdly late in its history.

On April 20, 1999, Eric Harris and Dylan Klebold, a pair of students at Columbine High School in the Colorado town of the same name, walked into their school armed to the teeth with knives, explosives, and firearms. They proceeded to kill 13 students and teachers and to injure 24 more before turning their guns on themselves. The day after the massacre, an Internet gaming news site called Blue’s News posted a message that “several readers have written in reporting having seen televised news reports showing the DOOM logo on something visible through clear bags containing materials said to be related to the suspected shooters. There is no word yet of what connection anyone is drawing between these materials and this case.” The word would come soon enough.

It turned out that Harris and Klebold had been great devotees of the game, not only as players but as creators of their own levels. “It’s going to be just like DOOM,” wrote Harris in his diary just before the massacre. “I must not be sidetracked by my feelings of sympathy. I will force myself to believe that everyone is just a monster from DOOM.” He chose his prize shotgun because it looked like one found in the game. On the surveillance tapes that recorded the horror in real time, the weapons-festooned boys pranced and preened as if they were consciously imitating the game they loved so much. Weapons experts noted that they seemed to have adopted their approach to shooting from what worked in DOOM. (In this case, of course, that was a wonderful thing, in that it kept them from killing anywhere close to the number of people they might otherwise have with the armaments at their disposal.)

There followed a storm of controversy over videogame content, with DOOM and the genre it had spawned squarely at its center. Journalists turned their attention to the FPS subculture for the first time, and discovered that more recent games like Duke Nukem 3D — the Columbine shooters’ other favorite game, a creation of Scott Miller’s old Apogee Software, now trading under the name of 3D Realms — made DOOM‘s blood and gore look downright tame. Senator Joseph Lieberman, a longstanding critic of videogames, beat the drum for legislation, and the name of DOOM even crossed the lips of President Bill Clinton. “My hope,” he said, “[is] to persuade the nation’s top cultural producers to call a cease-fire in the virtual arms race, to stop the release of ultra-violent videogames such as DOOM. Several of the school gunmen murderously mimicked [it] down to the choice of weapons and apparel.”

When one digs into the subject, one can’t help but note how the early life stories of John Carmack and John Romero bear some eerie similarities with those of Eric Harris and Dylan Klebold. The two Johns as well were angry kids who found it hard to fit in with their peers, who engaged in petty crime and found solace in action movies, heavy-metal music, and computer games. Indeed, a big part of the appeal of DOOM for its most committed fans was the sense that it had been made by people just like them, people who were coming from the same place. What caused Harris and Klebold, alone among the millions like them, to exorcise their anger and aggression in such a horrifying way? It’s a question that we can’t begin to answer. We can only say that, unfair though it may be, perceptions of DOOM outside the insular subculture of FPS fandom must always bear the taint of its connection with a mass murder.

And yet the public controversy over DOOM and its progeny resulted in little concrete change in the end. Lieberman’s proposed legislation died on the vine after the industry fecklessly promised to do a better job with content warnings, and the newspaper pundits moved on to other outrages. Forget talk of free speech; there was too much money in these types of games for them to go away. Just ten months after Columbine, Activision released Soldier of Fortune, which made a selling point of dismembered bodies and screams of pain so realistic that one reviewer claimed they left his dog a nervous wreck cowering in a corner. After the requisite wave of condemnation, the mainstream media forgot about it too.

Violence in games didn’t begin with DOOM or even Wolfenstein 3D, but it was certainly amplified and glorified by those games and the subculture they wrought. While a player may very well run up a huge body count in, say, a classic arcade game or an old-school CRPG, the violence there is so abstract as to be little more than a game mechanic. But in DOOM — and even more so in the games that followed it — experiential violence is a core part of the appeal. One revels in killing not just because of the new high score or character experience level one gets out of it, but for the thrill of killing itself, as depicted in such a visceral, embodied way. This does strike me as a fundamental qualitative shift from most of the games that came before.

Yet it’s very difficult to have a reasonable discussion on said violence’s implications, simply because opinions have become so hardened on the subject. To express concern on any level is to invite association with the likes of Joe Lieberman, a thoroughly conventional thinker with a knack for embracing the most flawed of all conventional wisdoms on every single issue, who apparently was never fortunate enough to have a social-science professor drill the fact that correlation isn’t causation into his head.

Make no mistake: the gamers who scoff at the politicians’ hand-wringing have a point. Harris and Klebold probably were drawn to games like DOOM and Duke Nukem 3D because they already had violent fantasies, rather than having said fantasies inculcated by the games they happened to play. In a best-case scenario, we can even imagine other potential mass murderers channeling their aggression into a game rather than taking it out on real people, in much the same way that easy access to pornography may be a cause of the dramatic decline in incidents of rape and sexual violence in most Western countries since the rise of the World Wide Web.

That said, I for one am also willing to entertain the notion that spending hours every day killing things in the most brutal, visceral manner imaginable inside an embodied virtual space may have some negative effects on some personalities. Something John Carmack said about the subject in a fairly recent interview strikes me as alarmingly fallacious:

In later games and later times, when games [came complete with] moral ambiguity or actual negativity about what you’re doing, I always felt good about the decision that in DOOM, you’re fighting demons. There’s no gray area here. It is black and white. You’re the good guys, they’re the bad guys, and everything that you’re doing to them is fully deserved.

In reality, though, the danger which games like DOOM may present, especially in the polarized societies so many of us inhabit in our current troubled times, is not that they ask us to revel in our moral ambiguity, much less our pure evil. It’s rather the way they’re able to convince us that the Others whom we’re killing “fully deserve” the violence we visit upon them because “they’re the bad guys.” (Recall those chilling words from Eric Harris’s diary, about convincing himself that his teachers and classmates are really just monsters…) This tendency is arguably less insidious when the bad guys in question are ridiculously over-the-top demons from Hell than when they’re soldiers who just happen to be wearing a different uniform, one which they may quite possibly have had no other choice but to don. Nevertheless, DOOM started something which games like the interminable Call of Duty franchise were only too happy to run with.

I personally would like to see less violence rather than more in games, all things being equal, and would like to see more games about building things up rather than tearing them down, fun though the latter can be on occasion. It strikes me that the disturbing association of some strands of gamer culture with some of the more hateful political movements of our times may not be entirely accidental, and that some of the root causes may stretch all the way back to DOOM — which is not to say that it’s wrong for any given individual to play DOOM or even Call of Duty. It’s only to say that the likes of GamerGate may be yet another weirdly attenuated part of DOOM‘s endlessly multi-faceted legacy.


Creative Destruction?

In other ways, though, the DOOM community actually was — and is — a community of creation rather than destruction. (I did say these narratives of DOOM wouldn’t be cut-and-dried, didn’t I?)

John Carmack, by his own account alone among the id boys, was inspired rather than dismayed by the modding scene that sprang up around Wolfenstein 3D — so much so that, rather than taking steps to make such things more difficult in DOOM, he did just the opposite: he separated the level data from the game engine much more completely than had been the case with Wolfenstein 3D, thus making it possible to distribute new DOOM levels completely legally, and released documentation of the WAD format in which the levels were stored on the same day that id released the game itself.
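The WAD container itself is famously simple: a twelve-byte header followed by a directory of named "lumps" of data. Purely as an illustration, and very much my own sketch rather than id's code, here is a minimal C program that lists a WAD file's directory, the same first step every fan-made level editor had to take. It assumes a little-endian machine, as the on-disk format itself does, and does only token error checking.

```c
/* A minimal WAD directory lister, written as an illustration rather than
   taken from id's code. The format: a 12-byte header ("IWAD" or "PWAD",
   lump count, directory offset) followed by 16-byte directory entries.
   Assumes a little-endian machine, as the on-disk format itself is. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.wad\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[5] = {0};
    int32_t numlumps = 0, diroffset = 0;
    if (fread(magic, 1, 4, f) != 4 ||
        fread(&numlumps, 4, 1, f) != 1 ||
        fread(&diroffset, 4, 1, f) != 1) {
        fprintf(stderr, "not a WAD file?\n");
        fclose(f);
        return 1;
    }
    printf("%s: %d lumps\n", magic, numlumps);

    fseek(f, diroffset, SEEK_SET);
    for (int32_t i = 0; i < numlumps; i++) {
        int32_t filepos, size;
        char name[9] = {0};
        if (fread(&filepos, 4, 1, f) != 1 ||
            fread(&size, 4, 1, f) != 1 ||
            fread(name, 1, 8, f) != 8)
            break;                        /* truncated directory */
        printf("%-8s  %8d bytes at offset %d\n", name, size, filepos);
    }
    fclose(f);
    return 0;
}
```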

The origins of his generosity hearken back once again to this idea that the people who made DOOM weren’t so very different from the people who played it. One of Carmack’s formative experiences as a hacker was his exploration of Ultima II on his first Apple II. Carmack:

To go ahead and hack things to turn trees into chests or modify my gold or whatever… I loved that. The ability to go several steps further and release actual source code, make it easy to modify things, to let future generations get what I wished I had had a decade earlier—I think that’s been a really good thing. To this day I run into people all the time that say, whether it was Doom, or maybe even more so Quake later on, that that openness and that ability to get into the guts of things was what got them into the industry or into technology. A lot of people who are really significant people in significant places still have good things to say about that.

Carmack speaks of “a decade-long fight inside id about how open we should be with the technology and the modifiability.” The others questioned this commitment to what Carmack called “open gaming” more sharply than ever when some companies started scooping up some of the thousands of fan-made levels, plopping them onto CDs, and selling them without paying a cent to id. But in the long run, the commitment to openness kept DOOM alive; rather than a mere computer game, it became a veritable cottage industry of its own. Plenty of people played literally nothing else for months or even years at a stretch.

The debate inside id raged more than ever in 1997, when Carmack insisted on releasing the complete original source code to DOOM. (He had done the same for the Wolfenstein 3D code two years before.) As he alludes above, the DOOM code became a touchstone for an up-and-coming generation of game programmers, even as many future game designers cut their teeth and made early names for themselves by creating custom levels to run within the engine. And, inevitably, the release of the source code led to a flurry of ports to every imaginable platform: “Everything that has a 32-bit [or better] processor has had DOOM run on it,” says Carmack with justifiable pride. Today you can play DOOM on digital cameras, printers, and even thermostats, and do so if you like in hobbyist-created levels that coax the engine into entirely new modes of play that the id boys never even began to conceive of.

This narrative of DOOM bears a distinct similarity to that of another community of creation with which I happen to be much better acquainted: the post-Infocom interactive-fiction community that arose at about the same time that the original DOOM was taking the world by storm. Like the DOOM people, the interactive-fiction people built upon a beloved company’s well-nigh timeless software engineering; like them, they eventually stretched that engine in all sorts of unanticipated directions, and are still doing it to this day. A comparison between the cerebral text adventures of Infocom and the frenetic shooters of id might seem incongruous at first blush, but there you are. Long may their separate communities of love and craft continue to thrive.



As you have doubtless gathered by now, the legacy of DOOM is a complicated one that’s almost uniquely resistant to simplification. Every statement has a qualifier; every yang has a yin. This can be frustrating for a writer; it’s in the nature of us as a breed to want straightforward causes and effects. The desire for them may lead one to make trends that were obscure at best to the people living through them seem more obvious than they really were. Therefore allow me to reiterate that the new gaming order which DOOM created wouldn’t become undeniable to everyone until fully three or four years after its release. A reader recently emailed me the argument that 1996 was actually the best year ever for adventure games, the genre which, according to some oversimplified histories, DOOM and games like it killed at a stroke — and darned if he didn’t make a pretty good case for it.

So, while I’m afraid I’ll never be much of a gibber and/or fragger, we should continue to have much to talk about. Onward, then, into the new order. I dare say that from the perspective of the boots on the ground it will continue to look much like the old one for quite some time to come. And after that? Well, we’ll take it as it comes. I won’t be mooting any more stopping dates.

(Sources: the books The Complete Wargames Handbook (2000 edition) by James F. Dunnigan, Masters of Doom by David Kushner, Game Engine Black Book: DOOM by Fabien Sanglard, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Columbine by Dave Cullen; Retro Gamer 75; Game Developer of June 1994; Chris Kohler’s interview with John Carmack for Wired. And a special thanks to Alex Sarosi, a.k.a. Lt. Nitpicker, for his valuable email correspondence on the legacy of DOOM, as well as to Josh Martin for pointing out in a timely comment to the last article the delightful fact that DOOM can now be run on a thermostat.)

 


The Shareware Scene, Part 4: DOOM

The full extent of Wolfenstein 3D‘s popularity during 1992 and 1993 is difficult to quantify with any precision due to the peculiarities of the shareware distribution model. But the one thing we can say for sure is that it was enormously popular by any standard. Apogee sold roughly 200,000 copies of the paid episodes, yet that number hardly begins to express the game’s real reach. Most people who acquired the free episode were content with it alone, or couldn’t afford to buy the other installments, or had friends who had bought them already and were happy to share. It thus seems reasonable to assume that the total number of Wolfenstein 3D players reached well into seven digits, putting the game’s exposure on a par with The 7th Guest, the boxed industry’s biggest hit of 1993, the game generally agreed to have put CD-ROM on the map. And yet Wolfenstein 3D‘s impact would prove even more earthshaking than that of The 7th Guest in the long run.

One telling sign of its influence — and of the way that it was just a fundamentally different type of game than The 7th Guest, that stately multimedia showpiece — is the modding scene that sprang up around it. The game’s levels were stored in a format that proved rather easy to decipher, and enterprising hackers were soon writing and distributing their own level editors, along with custom levels. (The most popular of them all filled the corridors of the Nazi headquarters with facsimiles of the sickly sweet, thuddingly unclever, unbelievably grating children’s-television character Barney the Dinosaur and let you take out your frustrations with an automatic weapon.) The id boys debated fiercely among themselves whether they should crack down on the modders, but John Carmack, who had read Steven Levy’s landmark book Hackers at an impressionable age and thoroughly absorbed its heroes’ ethos of openness and transparency, insisted that people be allowed to do whatever they wished with his creation. And when Carmack put his foot down, he always got his way; at the end of the day, he was the one irreplaceable member of the id collective, and every one of the others knew it.

With Wolfenstein 3D‘s popularity soaring, the id boys started eyeing the territory of the boxed publishers greedily. They struck a deal with a company called FormGen to release a seventh, lengthier installment of the game exclusively as a boxed retail product; it appeared under the name of Spear of Destiny in September of 1992. Thus readers of magazines like Computer Gaming World could scratch their heads over two separate luridly violent full-page advertisements for Wolfenstein 3D games, each with a different publisher’s name at the bottom. Spear of Destiny sold at least 100,000 copies at retail, both to hardcore Wolfenstein 3D addicts who couldn’t get enough and to many others, isolated from the typical means of shareware distribution, who came upon the game for the first time in this form.

Even Nintendo came calling with hat in hand, just a couple of years after summarily rejecting id’s offer to make a version of Super Mario Bros. 3 that ran on computers. The id boys now heeded Nintendo’s plea to port Wolfenstein 3D to the new Super Nintendo Entertainment System, whilst also grudgingly agreeing to abide by the dictates of Nintendo’s infamously strict censors. They had no idea what they had signed up for. Before they were through, Nintendo demanded that they replace blood with sweat, guard dogs with mutant rats, and Adolf Hitler, the game’s inevitable final boss, with a generic villain named the “Staatmeister.” They hated this bowdlerization with a passion, but, having agreed to do the port, they duly saw it through, muttering “Never again!” to themselves all the while. And indeed, when they were finished they took a mutual vow never to work with Nintendo again. Who needed them? The world was id’s oyster.

By now, 1992 was drawing to a close, and they all felt it was high time that they moved on to the next new thing. For everyone at id, and most especially John Carmack, was beginning to look upon Wolfenstein 3D with a decidedly jaundiced eye.


The dirty little secret that was occluded by Wolfenstein 3D‘s immense success was that it wasn’t all that great a game once it was stripped of its novelty value. Its engine was just too basic to allow for compelling level design. You glided through its corridors as if you were on a branching tram line running past a series of fairground shooting galleries, trying to shoot the Nazis who popped up before they could shoot you. The lack of any sort of in-game map meant that you didn’t even know where you were most of the time; you just kept moving around shooting Nazis until you stumbled upon the elevator to the next level. Anyone who made it through seven episodes of this — and make no mistake, there were plenty of players who did — either had an awful lot of aggression to vent or really, really loved the unprecedented look and style of the game. The levels were even boring for their designers. John Romero:

Tom [Hall] and I [designed] levels [for Wolfenstein 3D] fast. Making those levels was the most boring shit ever because they were so simple. Tom was so bored; I kept on bugging him to do it. I told him about Scott Miller’s 300ZX and George Broussard’s Acura NSX. We needed cool cars too! Whenever he got distracted, I’d tell him, “Dude, NSX! NSX!”

Tom Hall had it doubly hard. The fact was, the ultra-violence of Wolfenstein 3D just wasn’t really his thing. He preferred worlds of candy-apple red, not bloody scarlet; of precocious kids and cuddly robots, not rabid vigilantes and sadistic Nazis. Still, he was nothing if not a team player. John Romero and Adrian Carmack had gone along with him for Commander Keen, so it was only fair that he humored them with Wolfenstein 3D. But now, he thought, all of that business was finally over, and they could all start thinking about making a third Commander Keen trilogy.

Poor Tom. It took a sweetly naïve nature like his to believe that the other id boys would be willing to go back to the innocent fun of their Nintendo pastiches. Wolfenstein 3D was a different beast entirely than Commander Keen. It wasn’t remarkable just for being as good as something someone else had already done; it was like nothing anyone had ever done before. And they owned this new thing, had it all to themselves. Hall’s third Commander Keen trilogy just wasn’t in the cards — not even when he offered to do it in 3D, using an updated version of the Wolfenstein 3D engine. Cute and whimsical was id’s yesterday; gritty and bloody was their today and, if they had anything to say about it, their tomorrow as well.

Digging into their less-than-bulging bag of pop-culture reference points, the id boys pulled out the Alien film franchise. What a 3D game those movies would make! Running through a labyrinth of claustrophobic corridors, shooting aliens… that would be amazing! On further reflection, though, no one wanted the hassle that would come with trying to live up to an official license, even assuming such a thing was possible; id was still an underground insurgency at heart, bereft of lawyers and Hollywood contacts. Their thinking moved toward creating a similar effect via a different story line.

The id boys had a long-running tabletop Dungeons & Dragons campaign involving demons who spilled over from their infernal plane of existence into the so-called “Prime Material Plane” of everyday fantasy. What if they did something like that, only in a science-fiction context? Demons in space! It would be perfect! It was actually John Carmack, normally the id boy least engaged by these sorts of discussions, who proposed the name. In a scene from the 1986 Martin Scorsese movie The Color of Money, a young pool shark played by Tom Cruise struts into a bar carrying what looks like a clarinet case. “What you got in there?” asks his eventual patsy with an intimidating scowl. As our hero opens the case to reveal his pool cue, he flashes a 100-kilowatt Tom Cruise smile and says a single word: “Doom.”

Once again, Tom Hall tried to be supportive and make the best of it. He still held the official role of world-builder for id’s fictions. So, he went to work for some weeks, emerging at last with the most comprehensive design document which anyone at id had ever written, appropriately entitled The DOOM Bible. It offered plenty of opportunity for gunplay, but it also told an earnest story, in which you, as an astronaut trapped aboard a space station under assault by mysterious aliens, gradually learned to your horror that they were literal demons out of Hell, escaping into our dimension through a rift in the fabric of space-time. It was full of goals to advance and problems to solve beyond that of mowing down hordes of monsters, with a plot that evolved as you played. The history of gaming would have been markedly different, at least in the short term, if the other id boys had been interested in pursuing Hall’s path of complex storytelling within a richly simulated embodied virtual reality.

As it was, though, Hall’s ambitions landed with a resounding thud. Granted, there were all sorts of valid practical reasons for his friends to be skeptical. It was true enough that to go from the pseudo-3D engine of Wolfenstein 3D to one capable of supporting the type of complex puzzles and situations envisioned by Hall, and to get it all to run at an acceptable speed on everyday hardware, might be an insurmountable challenge even for a wizard like John Carmack. And yet the fact remains that the problem was at least as much one of motivation as one of technology. The other id boys just didn’t care about the sort of things that had Tom Hall so juiced. It again fell to John Carmack, normally the least articulate member of the group, to articulate their objections. “Story in a game,” he said, “is like story in a porn movie. It’s expected to be there, but it’s not that important.”

Tom Hall held out for several more months, but he just couldn’t convince himself to get fully onboard with the game his friends wanted to make. His relationship with the others went from bad to worse, until finally, in August of 1993, the others asked him to leave: “Obviously this isn’t working out.” By that time, DOOM was easily the most hotly anticipated game in the world, and nobody cared that it wouldn’t have a complicated story. “DOOM means two things,” said John Carmack. “Demons and shotguns!” And most of its fans wouldn’t have it any other way, then or now.


Tom Hall doesn’t look very happy about working on DOOM. Note the computer he works with: a NeXT workstation rather than an MS-DOS machine. John Carmack switched virtually all development to these $10,000 machines in the wake of Wolfenstein 3D‘s success, despite their tiny market footprint. The fact that the DOOM code was thus designed to be cross-platform from the beginning was undoubtedly a factor in the plethora of ports that appeared during and after its commercial heyday — that in fact still continue to appear today any time a new platform reaches a critical mass.

Making DOOM wound up requiring more than three times as many man-hours as anything the id boys had ever done before. It absorbed their every waking hour from January of 1993 to December of that year. Early on in that period, they decided that they wouldn’t be publishing it through Apogee. Cracks in the relationship between the id boys and Scott Miller had started forming around the latter’s business practices, which were scrupulously honest but also chaotic in that way dismayingly typical of a fast-growing business helmed by a first-time entrepreneur. Reports kept reaching id of people who wanted to buy Wolfenstein 3D, but couldn’t get through on the phone, or who managed to give Apogee their order only to have it never fulfilled.

But those complaints were perhaps just a convenient excuse. The reality was that the id boys just didn’t feel that they needed Apogee anymore. They had huge name recognition of their own now and plenty of money coming in to spend on advertising and promotion, and they could upload their new game to the major online services just as easily as Scott Miller could. Why keep giving him half of their money? Miller, for his part, handled the loss of his cash cow with graceful aplomb. He saw it as just business, nothing personal. “I would have done the same thing in their shoes,” he would frequently say in later interviews. He even hired Tom Hall to work at Apogee after the id boys cast him adrift in the foreign environs of Dallas.

Jay Wilbur now stepped into Miller’s old role for id. He prowled the commercial online services, the major bulletin-board systems, and the early Internet for hours each day, stoking the flames of anticipation here, answering questions there.

And there were lots of questions, for DOOM was actually about a bit more than demons and shotguns: it was also about technology. Whatever else it might become, DOOM was to be a showcase for the latest engine from John Carmack, a young man who was swiftly making a name for himself as the best game programmer in the world. With DOOM, he allowed himself to set the floor considerably higher in terms of system requirements than he had for Wolfenstein 3D.

System requirements have always been a moving target for any game developer. Push too hard, and you may end up releasing a game that almost no one can play; stay too conservative, and you may release something that looks like yesterday’s news. Striking precisely the right point on this continuum requires knowing your customers. The Apogee shareware demographic didn’t typically have cutting-edge computers; they tended to be younger and a bit less affluent than those buying the big boxed games. Thus id had made it possible to run Wolfenstein 3D on a two-generations-behind 80286-based machine with just 640 K of memory. The marked limitations of its pseudo-3D engine sprang as much from the constraints of such hardware as they did from John Carmack’s philosophy that, any time it came down to a contest between fidelity to the real world and speed, the latter should win.

He still held to that philosophy as firmly as ever when he moved on to DOOM, but the slow progression of the market’s trailing edge did give him more to work with: he designed DOOM for at least an 80386-based computer — 80486 recommended — with at least 4 MB of memory. He was able to ignore that bane of a generation of programmers, MS-DOS’s inability to seamlessly address memory beyond 640 K, by using a relatively new piece of software technology called a “DOS extender,” which built upon Microsoft’s recent memory-management innovations for their MS-DOS-hosted versions of Windows. DOS/4GW was included in the latest versions of what had heretofore been something of an also-ran in the compiler sweepstakes: the C compiler made by a small Canadian company known as Watcom. Carmack chose the Watcom compiler because of DOS/4GW; DOOM would quite literally have been impossible without it. In the aftermath of DOOM‘s prominent use of it, Watcom’s would become the C compiler of choice for game development, right through the remaining years of the MS-DOS-gaming era.

Rational Systems, the makers of DOS/4GW, were clever enough to stipulate in their licensing terms that the blurb above must appear whenever a program using it was started. Thus DOOM served as a prominent advertisement for the new software technology as it exploded across the world of computing in 1994. Soon you would have to look far and wide to find a game that didn’t mention DOS/4GW at startup.

Thanks not only to these new affordances but also — most of all, really — to John Carmack’s continuing evolution as a programmer, the DOOM engine advanced beyond that of Wolfenstein 3D in several important ways. Ironically, his work on the detested censored version of Wolfenstein 3D for the Super NES, a platform designed with 2D sprite-based games in mind rather than 3D graphics, had led him to discover a lightning-fast new way of sorting through visible surfaces, known as binary space partitioning, in a doctoral thesis by one Bruce Naylor. It had a well-nigh revelatory effect on the new engine’s capabilities.
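Binary space partitioning sounds forbidding, but the core trick is surprisingly simple. Here is a toy sketch in C of the principle only, nothing remotely like id's actual renderer: each node of a tree splits the map with a line, and checking which side of that line the viewer stands on lets you visit every wall in strict front-to-back order without ever having to sort them at draw time.

```c
/* A toy binary-space-partitioning walk, illustrating the principle rather
   than id's implementation. Each node splits the map with a line; asking
   which side of that line the viewer is on gives a strict front-to-back
   ordering of the walls with no sorting at draw time. */
#include <stdio.h>

typedef struct Node {
    float a, b, c;               /* splitting line: a*x + b*y = c          */
    const char *wall;            /* the wall segment stored at this node   */
    struct Node *front, *back;   /* subspaces on either side of the line   */
} Node;

static float side_of(const Node *n, float x, float y) {
    return n->a * x + n->b * y - n->c;   /* > 0: front side, < 0: back side */
}

/* Visit walls nearest-first as seen from the viewpoint (px, py). */
static void walk(const Node *n, float px, float py) {
    if (!n) return;
    if (side_of(n, px, py) >= 0) {       /* viewer is on the front side */
        walk(n->front, px, py);          /* near subspace first         */
        printf("draw %s\n", n->wall);
        walk(n->back, px, py);           /* far subspace last           */
    } else {                             /* viewer is on the back side  */
        walk(n->back, px, py);
        printf("draw %s\n", n->wall);
        walk(n->front, px, py);
    }
}

int main(void) {
    /* Three parallel walls at x = 1, 2, and 4; the wall at x = 2 splits the map. */
    Node east  = { 1, 0, 4, "east wall",   NULL,  NULL };
    Node west  = { 1, 0, 1, "west wall",   NULL,  NULL };
    Node split = { 1, 0, 2, "middle wall", &east, &west };
    walk(&split, 3.0f, 0.0f);   /* viewer at x = 3 sees: east, middle, west */
    return 0;
}
```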

That said, the new engine did remain caught, like its predecessor, in a liminal space between 2D and true 3D; it was just that it moved significantly further on the continuum toward the latter. No longer must everything and everyone exist on the same flat horizontal plane; you could now climb stairs and walk onto desks and daises. And walls must no longer all be at right angles to one another, meaning the world need no longer resemble one of those steel-ball mazes children used to play with.

The DOOM level editor was a much more complicated tool than its Wolfenstein 3D equivalent, reflecting the enhanced capabilities of John Carmack’s latest engine. Most notably, the designer now had variable height at his disposal.

On the other hand, walls must still all be exactly vertical, and floors and ceilings must all be exactly horizontal; DOOM allowed stairs but not hills or ramps. These restrictions made it possible to map textures onto the environment without the ugly discontinuities that had plagued Blue Sky Productions’s earlier but more “honest” 3D game Ultima Underworld. DOOM makes such a useful study in game engineering because it so vividly illustrates that faking it convincingly for the sake of the player is better than simulating things which delight only the programmer of the virtual world. Its engine is perfect for the game it wants to be.
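To make the advantage concrete: because a DOOM floor is perfectly flat and the view never tilts, every pixel on a given screen row of that floor lies at the same depth in front of the eye, which means its texture can be stepped across the row linearly, with no per-pixel perspective correction. The toy calculation below, using made-up numbers of my own rather than id's, shows that constant depth per row.

```c
/* Illustrative only: for a flat floor seen through a level (never tilted)
   view, depth is a function of the screen row alone. Each row can therefore
   share one texture step, with no per-pixel divide. */
#include <stdio.h>

int main(void) {
    const float eye_height = 41.0f;   /* toy height of the eye above the floor */
    const float focal      = 160.0f;  /* toy projection distance, in pixels    */
    for (int row = 1; row <= 5; row++) {          /* rows below the horizon */
        float depth = eye_height * focal / (float)row;
        printf("row %d below the horizon: floor depth %.1f for every column\n",
               row, depth);
    }
    return 0;
}
```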

In a telling sign of John Carmack’s march toward a more complete 3D engine, the monsters in DOOM were sculpted as three-dimensional physical models by Adrian Carmack and Greg Punchatz, an artist hired just for the task. (The former is shown above.) The id boys then took snapshots of the models from eight separate angles for insertion into the game.

The value of the simple addition of height to the equation was revealed subtly — admittedly not an adverb often associated with DOOM! — as soon as you started the game. Instead of gliding smoothly about like a tram, your view now bobbed with uncanny verisimilitude as you ran about. You might never consciously notice the effect, but it made a huge difference to your feeling of really being in the world; if you tried to go back to Wolfenstein 3D after playing DOOM, you immediately had the feeling that something was somehow off.

But the introduction of varying height was most important for what it meant in terms of the game’s tactical possibilities. Now monsters could stand on balconies and shoot fireballs down at you, or you could do the same to them. Instead of a straightforward shooting gallery, the world of DOOM became a devious place of traps and ambushes. Carmack’s latest engine also supported variable levels of lighting for the first time, which opened up a whole new realm of both dramatic and tactical possibility in itself; entering an unexplored pitch-dark room could be, to say the least, an intimidating prospect.

This outdoor scene nicely showcases some of the engine’s capabilities. Note the fireball flying toward you. It’s implemented as a physical object in the world like any other.

In addition, the new engine dramatically improved upon the nearly non-existent degree of physics simulation in Wolfenstein 3D. Weight and momentum were implemented; even bullets were simulated as physical objects in the world. A stereo soundscape was implemented as well; in addition to being unnerving as all get-out, it could become another vital tactical tool. Meanwhile the artificial intelligence of the monsters, while still fairly rudimentary, advanced significantly over that of Wolfenstein 3D. It was even possible to lure two monsters into fighting each other instead of you.

John Carmack also added a modicum of support for doing things other than killing monsters, although to nowhere near the degree once envisioned by Tom Hall. The engine could be used to present simple set-piece interactions, such as locked doors and keys, switches and levers for manipulating parts of the environment: platforms could move up and down, bridges could extend and retract. And in recognition of this added level of complexity, which could suddenly make the details of the geography and your precise position within it truly relevant, the engine offered a well-done auto-map for keeping track of those things.


The DOOM automap, an impressive technical achievement in itself.

Of course, none of these new affordances would matter without level designs that took advantage of them. The original plan was for Tom Hall and John Romero to create the levels. But, as we’ve seen, Hall just couldn’t seem to hit the mark that the id boys were aiming for. After finally dismissing him, they realized that Romero still needed help to shoulder the design burden. It arrived from a most unlikely source — from a fellow far removed from the rest of the id boys in age, experience, and temperament.

Sandy Petersen was already a cult hero in certain circles for having created a tabletop RPG called Call of Cthulhu in 1981. Based on the works of the horror writer H.P. Lovecraft, it was the first RPG ever to convincingly transcend the kill-monsters-to-level-up-so-you-can-kill-bigger-monsters dynamic of Dungeons & Dragons. But Call of Cthulhu remained a cult game even when the tabletop-RPG boom was at its height, and by the early 1990s Petersen was serving as an in-house design consultant at the computer-game publisher MicroProse. Unhappy in this role, he sent his résumé to the upstart id.

The résumé was greeted with considerable skepticism. It’s doubtful whether any of the id boys fully grasped the significance of Petersen’s achievement with Call of Cthulhu; while they were hardcore tabletop-RPG players, they were perfectly happy with the traditional power-gaming approach of Dungeons & Dragons, thank you very much. Still, the résumé was more impressive than any other they had received, and they did urgently need a level designer. So they called him in for an interview.

Their initial skepticism wasn’t lessened by the man himself. Petersen was pudgy and balding, looking even older than his already ancient 38 years, coming across rather like a genial university professor. And he was a devout Mormon to boot, washed up among this tribe of atheists and nihilists. Surely it could never work out.

Nevertheless, they decided to grant him the favor of a test before they rejected him; he had, after all, flown all the way from Baltimore to Dallas just to meet with them. They gave him a brief introduction to the DOOM engine and its level editor, and asked him to throw something together for them. Within minutes, Petersen produced a cunningly dramatic trap room, featuring lights that suddenly winked out when the player entered and a demon waiting in ambush behind a hidden door. He was hired.

Romero and Petersen proved to complement each other very well, with individual design aesthetics that reflected their personalities. Romero favored straight-up carnage — the more demon blood the better — while Petersen evinced a subtler, more cerebral approach in levels that could almost have a puzzle-like feel, where charging in with shotgun blazing was usually not the best tactic. Together the two approaches gave the game a nice balance.

Indeed, superb level design became DOOM‘s secret weapon, one that has allowed it to remain relevant to this day, when its degree of gore and violence seems humdrum, its pixels look as big as houses, and the limitations of its engine seem downright absurd. (You can’t even look up or down, for Pete’s sake. Nor is there a “jump” command, meaning that your brawny super-soldier can be stopped in his tracks by an inconveniently high curb.)

It’s disarmingly easy to underestimate DOOM today on your first encounter with it, simply because its visual aesthetic seems so tossed-off, so hopelessly juvenile; it’s the same crude mixture of action movies, heavy-metal album covers, and affected adolescent nihilism that defined the underground game-cracking scene of the 1980s. And yet behind it all is a game design that oozes as much thought and care as it does blood. These levels were obsessed over by their designers, and then, just as importantly, extensively critiqued by the other id boys and their immediate hangers-on, who weren’t inclined to pull their punches. Whatever your opinion of DOOM as a whole and/or the changes it wrought to the culture of gaming — I for one have thoroughly mixed feelings at best on both of those subjects — one cannot deny that it’s a veritable clinic of clever level design. In this sense, it still offers lessons for today’s game developers, whether they happen to be working inside or outside of the genre it came to define.


Subtle DOOM isn’t…

DOOM‘s other, not-so-secret weapon went by the name of “deathmatch.”

There had been significant experimentation with networked gaming on personal computers in the past: the legendary designer Dani Bunten Berry had spent the last half-decade making action-strategy games that were primarily or exclusively intended to be played by two humans connected via modem; Peter Molyneux’s “god game” Populous and its sequels had also allowed two players to compete on linked computers, as had a fair number of others. But computer-to-computer multiplayer-only games never sold very well, and most games that had networked multiplayer as an option seldom saw it used. Most people in those days didn’t even own modems; most computers were islands unto themselves.

By 1993, however, the isolationist mode of computing was slowly being nibbled away at. Not only was the World Wide Web on the verge of bursting into the cultural consciousness, but many offices and campuses were already networked internally, mostly using the systems of a company known as Novell. In fact, the id boys had just such a system in their Dallas office. When John Carmack told John Romero many months into the development of DOOM that multiplayer was feasible, the latter’s level of excitement was noteworthy even for him: “If we can get this done, this is going to be the fucking coolest game that the planet Earth has ever fucking seen in its entire history.” And it turned out that they could get it done because John Carmack was a programming genius.

While Carmack also implemented support for a modem connection or a direct computer-to-computer cable, it was under Novell’s IPX networking protocol that multiplayer DOOM really shined. Here you had a connection that was rock-solid and lightning-fast — and, best of all, here you could have up to four players in the same world instead of just two. You could tackle the single-player game as a team if you wanted to, but the id boys all agreed that deathmatch — all-out player-versus-player anarchy — was where the real fun lived. It made DOOM into more of a sport than a conventional computer game, something you could literally play forever. Soon the corridors at id were echoing with cries of “Suck it down!” as everyone engaged in frenzied online free-for-alls. Deathmatch was, in the diction of the id boys, “awesome.” It wasn’t just an improvement on what Wolfenstein 3D had done; it was something fundamentally different from it, something that was genuinely new under the sun. “This is the shit!” chortled Romero, and for once it sounded like an understatement.



The excitement over DOOM had reached a fever pitch by the fall of 1993. Some fans seemed on the verge of a complete emotional meltdown, launching into overwrought tirades every time Jay Wilbur had to push the release date back a bit more; others wrote poetry about the big day soon to come (“The Night Before DOOM”) and rang id’s offices at all hours of the day and night like junkies begging for a fix.

Even fuddy-duddy old Computer Gaming World stopped by the id offices to write up a two-page preview. This time out, no reservations whatsoever about the violence were expressed, much less any of the full-fledged hand-wringing that had been seen earlier from editor Johnny Wilson. Far from giving in to the gaming establishment, the id boys were, slowly but surely, remaking it in their own image.

At last, id announced that the free first episode of DOOM would go up at the stroke of midnight on December 10, 1993, on, of all places, the file server belonging to the University of Wisconsin–Parkside. When the id boys tried to log on to do the upload, so many users were already online waiting for the file to appear that they couldn’t get in; they had to call the university’s system administrator and have him kick everyone else off. Then, once the file did appear, the server promptly crashed under the load of 10,000 people, all trying to get DOOM at once on a system that expected no more than 175 users at a time. The administrator rebooted it; it crashed again. They would have a hard go of things at the modest small-town university for quite some time to come.



Legend had it that when Don Woods first uploaded his and Will Crowther’s game Adventure in 1977, all work in the field of data processing stopped for a week while everyone tried to solve it. Now, not quite seventeen years later, something similar happened in the case of DOOM, arguably the most important computer game to appear since Adventure. The id boys had joked in an early press release that they expected DOOM to become “the number-one cause of decreased productivity in businesses around the world.” Even they were surprised by the extent to which that prediction came true.

Network administrators all over the world had to contend with this new phenomenon known as deathmatch. John Carmack had had no experience with network programming before DOOM, and in his naïveté he had relied on broadcast packets, which forced every computer on the network, whether it was running DOOM or not, to stop and analyze every packet that every DOOM-playing computer generated. As reports of the resulting chaos poured in, Carmack scrambled to code an update that would send machine-to-machine packets instead.
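
The difference is easy to illustrate, though only as an analogy: the sketch below uses modern UDP sockets rather than the IPX calls DOOM actually relied on, and the port number and peer address are invented for the example. A broadcast datagram lands on every machine on the local segment, each of which must stop to examine it, while a directed datagram bothers only its addressee, which is essentially the change Carmack’s update made.

```c
/* Analogy only: UDP sockets rather than the IPX protocol DOOM used. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof yes);

    const char tic[] = "player input for this tic";

    /* 1. Broadcast: every machine on the subnet has to receive and
          inspect this packet, whether it is playing DOOM or not. */
    struct sockaddr_in everyone = {0};
    everyone.sin_family = AF_INET;
    everyone.sin_port = htons(5029);                    /* invented example port */
    everyone.sin_addr.s_addr = htonl(INADDR_BROADCAST); /* 255.255.255.255 */
    sendto(sock, tic, sizeof tic, 0, (struct sockaddr *)&everyone, sizeof everyone);

    /* 2. Directed: only the one opponent who cares ever sees it. */
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5029);
    inet_pton(AF_INET, "192.168.1.42", &peer.sin_addr); /* a hypothetical opponent */
    sendto(sock, tic, sizeof tic, 0, (struct sockaddr *)&peer, sizeof peer);

    close(sock);
    return 0;
}
```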

In the meantime, DOOM brought entire information-technology infrastructures to their knees. Intel banned the game; high-school and university computer labs hardly knew what had hit them. A sign posted at Carnegie Mellon University before release day was even over was typical: “Since today’s release of DOOM, we have discovered [that the game is] bringing the campus network to a halt. Computing Services asks that all DOOM players please do not play DOOM in network mode. Use of DOOM in network mode causes serious degradation of performance for the players’ network, and during this time of finals network use is already at its peak. We may be forced to disconnect the PCs of those who are playing the game in network mode. Again, please do not play DOOM in network mode.” One clever system administrator at the University of Louisville created a program to search the hard drives of all machines on the network for the game, and delete it wherever it was found. All to no avail: DOOM was unstoppable.

But in these final months of the mostly-unconnected era of personal computing — the World Wide Web would begin to hit big over the course of 1994 — a game still needed to reach those without modems or network cards in their computers in order to become a hit on the scale that id envisioned for DOOM. Jay Wilbur, displaying a wily marketing genius that went Scott Miller one better, decided that absolutely everyone should be allowed to distribute the first episode of DOOM on disk, charging whatever they could get for it: “We don’t care if you make money off this shareware demo. Move it! Move it in mass quantities.” For distribution, Wilbur realized, was the key to success. There are many ways to frame the story of DOOM, but certainly one of them is a story of guerrilla marketing at its finest.

The free episode of DOOM appeared in stores under many different imprints, but most, like this Australian edition, used the iconic cover id themselves provided. John Romero claims that he served as the artist’s model for the image.

The incentives for distribution were massive. If a little mom-and-pop operation in, say, far-off Australia could become the first to stick that episode onto disks, stick those disks in a box, and get the box onto store shelves, they could make a killing, free and clear. DOOM became omnipresent, inescapable all over the world. When you logged into CompuServe, there was DOOM; when you wandered into your local software store, there was DOOM again, possibly in several different forms of packaging; when you popped in the disk or CD that came with your favorite gaming magazine, there it was yet again. The traditional industry was utterly gobsmacked by this virulent weed of a game.

As with Wolfenstein 3D, a large majority of the people who acquired the first episode of DOOM in one way or another were perfectly satisfied with its eight big levels and unlimited deathmatch play; plenty of others doubtless never bothered to read the fine print, never even realized that more DOOM was on offer if they called 1-800-IDGAMES with their credit card in hand. And then, of course, there was the ever-present specter of piracy; nothing whatsoever stopped buyers of the paid episodes from sharing them with all of their DOOM-loving friends. By some estimates, the conversion rate from the free to the paid episodes was as low as 1 percent. Nevertheless, it was enough to make the id boys very, very rich young men.

Sometimes $100,000 worth of orders would roll in on a single day. John Carmack and John Romero each went out and bought a new Ferrari Testarossa; now it was the turn of Scott Miller and George Broussard to look on the id boys’ cars with envy. Glossy magazines, newspapers, and television news programs all begged to visit the id offices, where they marveled at the cars in the parking lot and the unkempt young men inside screaming the most horrid scatological and sexual insults at one another as they played deathmatch. If nothing else, the id boys were certainly a colorful story.

The id boys’ cars got almost as much magazine coverage as their games. Here we see John Carmack with his Ferrari, which he had modified to produce 800 horsepower: “I want dangerous acceleration.”

Indeed, the id story is as close as gaming ever came to fulfilling one of its most longstanding dreams: that of game developers as rock stars, as first articulated by Trip Hawkins in 1983 upon his founding of Electronic Arts. Yet if Hawkins’s initial stable of developers, so carefully posed in black and white in EA’s iconic early advertisements, resembled an artsy post-punk band — the interactive version of Talking Heads — the id boys were meat-and-potatoes heavy metal for the masses — Metallica at their Black Album peak. John Romero, the id boy who most looked the part of rock star, particularly reveled in the odd sort of obsequious hero worship that marks certain corners of gamer culture. He almost visibly swelled with pride every time a group of his minions started chanting “We’re not worthy!” and literally bowed down in his presence, and wore his “DOOM: Wrote It!” tee-shirt until the print peeled off.

The impact DOOM was having on the industry had become undeniable by the time of the Summer Consumer Electronics Show in June of 1994. Here everyone seemed to want in on id’s action. The phrase “first-person shooter” had yet to be invented, so the many soon-to-be-released games of the type were commonly referred to as “DOOM clones” — or, as Computer Gaming World preferred, “DOOM toos.” The same magazine, still seeming just a trifle ambivalent about it all, called it the “3D action fad.” But this was no fad; these games were here to stay. The boxed publishers who had scoffed at the shareware scene a year or two before were now all scrambling to follow id’s lead. LucasArts previewed a DOOM clone set in the Star Wars universe; SSI, previously known for their complicated strategic war games and licensed Dungeons & Dragons CRPGs, dipped a toe into these very different waters with something called CyClones.

And then, inevitably, there was id’s own DOOM II: Hell on Earth. As a piece of game design, it evinced no sign of the dreaded sophomore slump that afflicts so many rock groups — this even though it used the exact same engine as its predecessor, and even though John Romero, id’s rock-star-in-chief, was increasingly busy with extracurriculars and contributed only a handful of levels. His slack was largely taken up by one American McGee, the latest scruffy rebel to join the id boys, a 21-year-old former auto mechanic who had suffered through an even more hardscrabble upbringing than the two Johns. After beginning at id as a tester, he had gradually revealed an uncanny talent for making levels that combined the intricacy of Sandy Petersen’s with the gung-ho flair of John Romero’s. Now, he joined Petersen and, more intermittently, Romero to create a game that was if anything even more devious than its predecessor. The id boys had grown cockier than ever, but they could still back it up.

John Romero in 1994, doing something the other id boys wished he would do a bit more of: making a level for DOOM II.

They were approached by a New York City wheeler-and-dealer named Ron Chaimowitz who wanted to publish DOOM II exclusively to retail. His was not an established name in the gaming world; he had come of age in the music industry, where he had broken big acts like Gloria Estefan and Julio Iglesias during the previous decade, and he was now publishing Jane Fonda’s workout videos through a company called GoodTimes Entertainment. But he had distribution connections — and, as Jay Wilbur had so recently proved, distribution often means everything. GoodTimes sold millions of videotapes through Wal-Mart, the exploding epicenter of heartland retail, and Chaimowitz promised that the new software label he had in mind would be able to leverage those connections. He further promised to spend $2 million on advertising. He would prove as good as his word in both respects. The new GT Interactive manufactured an extraordinary 600,000 copies of DOOM II prior to its release, marking by far the largest initial production run in the history of computer gaming to date.

In marked contrast to the simple uploading of the first episode of the original DOOM, DOOM II was launched with all the pomp and circumstance that a $2 million promotional budget could provide. A party to commemorate the event took place on October 10, 1994, at a hip Gothic night club in New York City which had been re-decorated in a predictably gory manner. The party even came complete with protesters against the game’s violence, to add that delicious note of controversy that any group of rock stars worth their salt requires.

At the party, a fellow named Bob Huntley, owner of a small Houston software company, foisted a disk on John Romero containing “The Dial-Up Wide-Area Network Games Operation,” or “DWANGO.” Using it, you could dial into Huntley’s Houston server at any time to play a pick-up game of four-player DOOM deathmatch with strangers who might happen to be on the other side of the world. Romero expressed his love for the concept in his trademark profane logorrhea: “I like staying up late and I want to play people whenever the fuck I want to and I don’t want to have to wake up my buddy at three in the morning and go, ‘Hey, uh, you wanna get your skull cracked?’ This is the thing that you can dial into and just play!” He convinced the other id boys to give DWANGO their official endorsement, and the service went live within weeks. For just $8.96 per month, you could now deathmatch any time you wanted. And thus another indelible piece of modern gaming culture, as well as a milestone in the cultural history of the Internet, fell into place.

DOOM was becoming not just a way of gaming but a way of life, one that left little space in the hearts of its most committed adherents for anything else. Some say that gaming became better after DOOM, some that it became worse. One thing that everyone can agree on, however, is that it changed; it’s by no means unreasonable to divide the entire history of computer gaming into pre-DOOM and post-DOOM eras. Next time, then, in the concluding article of this series, we’ll do our best to come to terms with that seismic shift.

(Sources: the books Masters of Doom by David Kushner, Game Engine Black Book: Wolfenstein 3D and Game Engine Black Book: DOOM by Fabien Sanglard, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke; Retro Gamer 75; Game Developer premiere issue and issues of June 1994 and February/March 1995; Computer Gaming World of July 1993, March 1994, July 1994, August 1994, and September 1994. Online sources include “Apogee: Where Wolfenstein Got Its Start” by Chris Plante at Polygon, “Rocket Jump: Quake and the Golden Era of First-Person Shooters” by David L. Craddock at Shack News, Benj Edwards’s interview with Scott Miller for Game Developer, Jeremy Peel’s interview with John Romero for PCGamesN, and Jay Wilbur’s old Usenet posts, which can now be accessed via Google Groups. And a special thanks to Alex Sarosi, better known in our comment threads as Lt. Nitpicker, for pointing out to me how important Jay Wilbur’s anything-goes approach to distribution of the free episode of DOOM was to the game’s success.

The original Doom episodes and Doom II are available as digital purchases on GOG.com.)

