The Rise of POMG, Part 1: It Takes a Village…

No one on their deathbed ever said, “I wish I had spent more time alone with my computer!”

— Dani Bunten Berry

If you ever want to feel old, just talk to the younger generation.

A few years ago now, I met the kids of a good friend of mine for the very first time: four boys between the ages of four and twelve, all more or less crazy about videogames. As someone who spends a lot of his time and earns a lot of his income writing about games, I arrived at their house with high expectations attached.

Alas, I’m afraid I proved a bit of a disappointment to them. The distance between the musty old games that I knew and the shiny modern ones that they played was just too far to bridge; shared frames of reference were tough to come up with. This was more or less what I had anticipated, given how painfully limited I already knew my knowledge of modern gaming to be. But one thing did genuinely surprise me: it was tough for these youngsters to wrap their heads around the very notion of a game that you played to completion by yourself and then put on the shelf, much as you might a book. The games they knew, from Roblox to Fortnite, were all social affairs that you played online with friends or strangers, that ended only when you got sick of them or your peer group moved on to something else. Games that you played alone, without at the very least leader boards and achievements on-hand to measure yourself against others, were utterly alien to them. It was quite a reality check for me.

So, I immediately started to wonder how we had gotten to this point — a point not necessarily better or worse than the sort of gaming that I knew growing up and am still most comfortable with, just very different. This series of articles should serve as the beginning of an answer to that complicated question. Their primary focus is not so much how computer games went multiplayer, nor even how they first went online; those things are in some ways the easy, obvious parts of the equation. It’s rather how games did those things persistently — i.e., permanently, so that each session became part of a larger meta-game, if you will, embedded in a virtual community. Or perhaps the virtual community is embedded in the game. It all depends on how you look at it, and which precise game you happen to be talking about. Whichever way, it has left folks like me, whose natural tendency is still to read games like books with distinct beginnings, middles, and ends, anachronistic iconoclasts in the eyes of the youthful mainstream.

Which, I hasten to add, is perfectly okay; I’ve always found the ditch more fun than the middle of the road anyway. Still, sometimes it’s good to know how the other 90 percent lives, especially if you claim to be a gaming historian…



“Persistent online multiplayer gaming” (POMG, shall we say?) is a mouthful to be sure, but it will have to do for lack of a better descriptor of the phenomenon that has created such a divide between myself and my friend’s children.  It’s actually older than you might expect, having first come to be in the 1970s on PLATO, a non-profit computer network run out of the University of Illinois but encompassing several other American educational institutions as well. Much has been written about this pioneering network, which uncannily presaged in so many of its particulars what the Internet would become for the world writ large two decades later. (I recommend Brian Dear’s The Friendly Orange Glow for a book-length treatment.) It should suffice for our purposes today to say that PLATO became host to, among other online communities of interest, an extraordinarily vibrant gaming culture. Thanks to the fact that PLATO games lived on a multi-user network rather than standalone single-user personal computers, they could do stuff that most gamers who were not lucky enough to be affiliated with a PLATO-connected university would have to wait many more years to experience.

The first recognizable single-player CRPGs were born on PLATO in the mid-1970s, inspired by the revolutionary new tabletop game known as Dungeons & Dragons. They were followed by the first multiplayer ones in amazingly short order. Already in 1975’s Moria (a completely different game from the 1983 single-player roguelike that bore the same name), players met up with their peers online to chat, brag, and sell or trade loot to one another. When they were ready to venture forth to kill monsters, they could do so in groups of up to ten, pooling their resources and sharing the rewards. A slightly later PLATO game called Oubliette implemented the same basic concept in an even more sophisticated way. The degree of persistence of these games was limited by a lack of storage capacity — the only data that was saved between sessions were the statistics and inventory of each player’s character, with the rest of the environment being generated randomly each time out — but they were miles ahead of anything available for the early personal computers that were beginning to appear at the same time. Indeed, Wizardry, the game that cemented the CRPG’s status as a staple genre on personal computers in 1981, was in many ways simply a scaled-down version of Oubliette, with the multiplayer party replaced by a party of characters that were all controlled by the same player.

Chester Bolingbroke, better known online as The CRPG Addict, plays Moria. Note the “Group Members” field at bottom right. Chester is alone here, but he could be adventuring with up to nine others.

A more comprehensive sort of persistence arrived with the first Multi-User Dungeon (MUD), developed by Roy Trubshaw and Richard Bartle, two students at the University of Essex in Britain, and first deployed there in a nascent form in late 1978 or 1979. A MUD borrowed the text-only interface and presentation of Will Crowther and Don Woods’s seminal game of Adventure, but the world it presented was a shared, fully persistent one between its periodic resets to a virgin state, chockablock with other real humans to interact with and perhaps fight. “The Land,” as Bartle dubbed his game’s environs, expanded to more than 600 rooms by the early 1980s, even as its ideas and a good portion of its code were used to set up other, similar environments at many more universities.

In the meanwhile, the first commercial online services were starting up in the United States. By 1984, you could, for the price of a substantial hourly fee, dial into the big mainframes of services like CompuServe using your home computer. Once logged in there, you could socialize, shop, bank, make travel reservations, read newspapers, and do much else that most people wouldn’t begin to do online until more than a decade later — including gaming. For example, CompuServe offered MegaWars, a persistent grand-strategy game of galactic conquest whose campaigns took groups of up to 100 players four to six weeks to complete. (Woe betide the ones who couldn’t log in for some reason of an evening in the midst of that marathon!) You could also find various MUDs, as well as Island of Kesmai, a multiplayer CRPG boasting most of the same features as PLATO’s Oubliette in a genuinely persistent world rather than a perpetually regenerated one. CompuServe’s competitor GEnie had Air Warrior, a multiplayer flight simulator with bitmapped 3D graphics and sound effects to rival any of the contemporaneous single-player simulators on personal computers. For the price of $11 per hour, you could participate in grand Air Warrior campaigns that lasted three weeks each and involved hundreds of other subscribers, organizing and flying bombing raids and defending against the enemy’s attacks on their own lines. In 1991, America Online put up Neverwinter Nights (not the same game as the 2002 BioWare CRPG of the same name), which did for the “Gold Box” line of licensed Dungeons & Dragons CRPGs what MUD had done for Adventure and Air Warrior had done for flight simulators, transporting the single-player game into a persistent multiplayer space.

All of this stuff was more or less incredible in the context of the times. At the same time, though, we mustn’t forget that it was strictly the purview of a privileged elite, made up of those with login credentials for institutional-computing networks or money in their pockets to pay fairly exorbitant hourly fees to feed their gaming habits. So, I’d like to back up now and tell a different story of POMG — one with more of a populist thrust, focusing on what was actually attainable by the majority of people out there, the ones who neither had access to a university’s mainframe nor could afford to spend hundreds of dollars per month on a hobby. Rest assured that the two narratives will meet before all is said and done.



POMG came to everyday digital gaming in the reverse order of the words that make up the acronym: first games were multiplayer, then they went online, and then these online games became persistent. Let’s try to unpack how that happened.

From the very start, many digital games were multiplayer, optionally if not unavoidably so. Spacewar!, the program generally considered the first fully developed graphical videogame, was exclusively multiplayer from its inception in the early 1960s. Ditto Pong, the game that launched Atari a decade later, and with it a slow-building popular craze for electronic games, first in public arcades and later in living rooms. Multiplayer here was not so much down to design intention as technological affordances. Pong was an elaborate analog state machine rather than a full-blown digital computer, relying on decentralized resistors and potentiometers and the like to do its “thinking.” It was more than hard enough just to get a couple of paddles and a ball moving around on the screen of a gadget like this; a computerized opponent was a bridge too far.

Very quickly, however, programmable microprocessors entered the field, changing everyone’s cost-benefit analyses. Building dual controls into an arcade cabinet was expensive, and the end result tended to take up a lot of space. The designers of arcade classics like Asteroids and Galaxian soon realized that they could replace the complications of a human opponent with hordes of computer-controlled enemies, flying in rudimentary, partially randomized patterns. Bulky multiplayer machines thus became rarer and rarer in arcades, replaced by slimmer, more standardized single-player cabinets. After all, if you wanted to compete with your friends in such games, there was still a way to do so: you could each play a round against the computerized enemies and compare your scores afterward.

While all of this was taking shape, the Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — had ushered in the personal-computing era. The games these early microcomputers played were sometimes ports or clones of popular arcade hits, but just as often they were more cerebral, conceptually ambitious affairs where reflexes didn’t play as big — or any — role: flight simulations, adventure games, war and other strategy games. The last were often designed to be played optimally or even exclusively against another human, largely for the same reason Pong had been made that way: artificial intelligence was a hard thing to implement under any circumstances on an 8-bit computer with as little as 16 K of memory, and it only got harder when you were asking said artificial intelligence to formulate a strategy for Operation Barbarossa rather than to move a tennis racket around in front of a bouncing ball. Many strategy-game designers in these early days saw multiplayer options almost as a necessary evil, a stopgap until the computer could fully replace the human player, thus alleviating that eternal problem of the war-gaming hobby on the tabletop: the difficulty of finding other people in one’s neighborhood who were able and willing to play such weighty, complex games.

At least one designer, however, saw multiplayer as a positive advantage rather than a kludge — in fact, as the way the games of the future by all rights ought to be. “When I was a kid, the only times my family spent together that weren’t totally dysfunctional were when we were playing games,” remembered Dani Bunten Berry. From the beginning of her design career in 1979, when she made an auction game called Wheeler Dealers for the Apple II (Wheeler Dealers and all of her other games mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992), multiplayer was her priority. In fact, she was willing to go to extreme lengths to make it possible; in addition to a cassette tape containing the software, Wheeler Dealers shipped with a custom-made hardware add-on, the only method she could come up with to let four players bid at once. Such experiments culminated in M.U.L.E., one of the first four games ever published by Electronic Arts, a deeply, determinedly social game of economics and, yes, auctions for Atari and Commodore personal computers that many people, myself included, still consider her unimpeachable masterpiece.

A M.U.L.E. auction in progress.

And yet it was Seven Cities of Gold, her second game for Electronic Arts, that became a big hit. Ironically, it was also the first she had ever made with no multiplayer option whatsoever. She was learning to her chagrin that games meant to be played together on a single personal computer were a hard sell; such machines were typically found in offices and bedrooms, places where people went to isolate themselves, not in living rooms or other spaces where they went to be together. She decided to try another tack, thereby injecting the “online” part of POMG into our discussion.

In 1988, Electronic Arts published Berry’s Modem Wars, a game that seems almost eerily prescient in retrospect, anticipating the ludic zeitgeist of more than a decade later with remarkable accuracy. It was a strategy game played in real time (although not quite a real-time strategy of the resource-gathering and army-building stripe that would later be invented by Dune II and popularized by Warcraft and Command & Conquer). And it was intended to be played online against another human sitting at another computer, connected to yours by the gossamer thread of a peer-to-peer modem hookup over an ordinary telephone line. Like most of Berry’s games, it didn’t sell all that well, being a little too far out in front of the state of her nation’s telecommunications infrastructure.

Nevertheless, she continued to push her agenda of computer games as ways of being entertained together rather than alone over the years that followed. She never did achieve the breakout hit she craved, but she inspired countless other designers with her passion. She died far too young in 1998, just as the world was on the cusp of embracing her vision on a scale that even she could scarcely have imagined. “It is no exaggeration to characterize her as the world’s foremost authority on multiplayer computer games,” said Brian Moriarty when he presented Dani Bunten Berry with the first ever Game Developers Conference Lifetime Achievement Award two months before her death. “Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes [representing her eleven published games] the prototypes of the defining art form of the 21st century.” Let this article and the ones that will follow it, written well into said century, serve as partial proof of the truth of his words.

Danielle Bunten Berry, 1949-1998.

For by the time Moriarty spoke them, other designers had been following the trails she had blazed for quite some time, often with much more commercial success. A good early example is Populous, Peter Molyneux’s strategy game in real time (although, again, not quite a real-time strategy) that was for most of its development cycle strictly a peer-to-peer online multiplayer game, its offline single-player mode being added only during the last few months. An even better, slightly later one is DOOM, John Carmack and John Romero’s game of first-person 3D mayhem, whose star attraction, even more so than its sadistic single-player levels, was the “deathmatch” over a local-area network. Granted, these testosterone-fueled, relentlessly zero-sum contests weren’t quite the same as what Berry was envisioning for gaming’s multiplayer future near the end of her life; she wished passionately for games with a “people orientation,” directed toward “the more mainstream, casual players who are currently coming into the PC market.” Still, as the saying goes, you have to start somewhere.

But there is once more a caveat to state here about access, or rather the lack thereof. Being built for local networks only — i.e., networks that lived entirely within a single building or at most a small complex of them — DOOM deathmatches were out of reach on a day-to-day basis for those who didn’t happen to be students or employees at institutions with well-developed data-processing departments and permissive or oblivious authority figures. Outside of those ivory towers, this was the era of the “LAN party,” when groups of gamers would all lug their computers over to someone’s house, wire them together, and go at it over the course of a day or a weekend. These occasions went on to become treasured memories for many of their participants, but they achieved that status precisely because they were so sporadic and therefore special.

And yet DOOM‘s rise corresponded with the transformation of the Internet from an esoteric tool for the technological elite to the most flexible medium of communication ever placed at the disposal of the great unwashed, thanks to a little invention out of Switzerland called the World Wide Web. What if there was a way to move DOOM and other games like it from a local network onto this one, the mother of all wide-area networks? Instead of deathmatching only with your buddy in the next cubicle, you would be able to play against somebody on another continent if you liked. Now wouldn’t that be cool?

The problem was that local-area networks ran over a protocol known as IPX, while the Internet ran on a completely different one called TCP/IP. Whoever could bridge that gap in a reasonably reliable, user-friendly way stood to become a hero to gamers all over the world.



Jay Cotton discovered DOOM in the same way as many another data-processing professional: when it brought down his network. He was employed at the University of Georgia at the time, and was assigned to figure out why the university’s network kept buckling under unprecedented amounts of spurious traffic. He tracked the cause down to DOOM, the game that half the students on campus seemed to be playing more than half the time. More specifically, the problem was caused by a bug, which was patched out of existence by John Carmack as soon as he was informed. Problem solved. But Cotton stuck around to play, the warden seduced by the inmates of the asylum.

He was soon so much better at the game than anyone else on campus that he was getting a bit bored. Looking for worthier opponents, he stumbled across a program called TCPSetup, written by one Jake Page, which was designed to translate IPX packets into TCP/IP ones and vice versa on the fly, “tricking” DOOM into communicating across the vast Internet. It was cumbersome to use and extremely unreliable, but on a good day it would let you play DOOM over the Internet for brief periods of time at least, an amazing feat by any standard. Cotton would meet other players on an Internet chat channel dedicated to the game, they’d exchange IP addresses, and then they’d have at it — or try to, depending on the whims of the Technology Gods that day.
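TCPSetup’s actual code and wire format aren’t documented here, but the core trick it performed, encapsulating a LAN-style game packet inside an Internet datagram and recovering it intact on the far side, can be sketched in a few lines. Everything below (the four-byte header, the function names, the virtual “node” number standing in for IPX addressing) is a hypothetical illustration of the technique, not a reconstruction of TCPSetup itself:

```python
import struct

# Hypothetical 4-byte header carrying a virtual "node" number, standing in
# for the IPX addressing information a real bridge must preserve so the
# receiving side can tell the game which LAN peer a packet came from.
HEADER = struct.Struct("!I")

def encapsulate(node: int, ipx_payload: bytes) -> bytes:
    # Prefix the raw game packet with the sender's virtual node number;
    # the result is what would travel across the Internet as a UDP datagram.
    return HEADER.pack(node) + ipx_payload

def decapsulate(datagram: bytes) -> tuple[int, bytes]:
    # Split an incoming datagram back into (virtual node, original packet),
    # ready to be handed back to the game as if it had arrived via IPX.
    (node,) = HEADER.unpack_from(datagram)
    return node, datagram[HEADER.size:]
```

A real bridge also had to intercept the game’s local broadcast and discovery packets and replay them to every known Internet peer, which is where most of the fiddliness and unreliability of the early tools lay; this sketch shows only the encapsulation step.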

On August 22, 1994, Cotton received an email from a fellow out of the University of Illinois — yes, PLATO’s old home — whom he’d met and played in this way (and beaten, he was always careful to add). His name was Scott Coleman. “I have some ideas for hacking TCPSetup to make it a little easier. Care to do some testing later?” Coleman wrote. “I’ve already emailed Jake [Page] on this, but he hasn’t responded (might be on vacation or something). If he approves, I’m hoping some of these ideas might make it into the next release of TCPSetup. In the meantime, I want to do some experimenting to see what’s feasible.”

Jake Page never did respond to their queries, so Cotton and Coleman just kept beavering away on their own, eventually rewriting TCPSetup entirely to create iDOOM, a more reliable and far less fiddly implementation of the same concept, with support for three- or four-player deathmatches instead of just one-on-one duels. It took off like a rocket; the pair were bombarded with feature requests, most notably to make iDOOM work with other IPX-only games as well. In January of 1995, they added support for Heretic, one of the most popular of the first wave of so-called “DOOM clones.” They changed their program’s name to “iFrag” to reflect the fact that it was now about more than just DOOM.

Having come this far, Cotton and Coleman soon made the conceptual leap that would transform their software from a useful tool to a way of life for a time for many, many thousands of gamers. Why not add support for more games, they asked themselves, not in a bespoke way as they had been doing to date, but in a more sustainable one, by turning their program into a general-purpose IPX-to-TCP/IP bridge, suitable for use with the dozens of other multiplayer games out there that supported only local-area networks out of the box? And why not make their tool into a community while they were at it, by adding an integrated chat service? In addition to its other functions, the program could offer a list of “servers” hosting games, which you could join at the click of a button; no more trolling for opponents elsewhere on the Internet, then laboriously exchanging IP addresses and meeting times and hoping the other guy followed through. This would be instant-gratification online gaming. It would also provide a foretaste at least of persistent online multiplayer gaming; as people won matches, they would become known commodities in the community, setting up a meta-game, a sporting culture of heroes and zeroes where folks kept track of win-loss records and where everybody clamored to hear the results when two big wheels faced off against one another.

Cotton and Coleman renamed their software for the third time in less than nine months, calling it Kali, a name suggested by Coleman’s Indian-American girlfriend (later his wife). “The Kali avatar is usually depicted with swords in her hands and a necklace of skulls from those she has killed,” says Coleman, “which seemed appropriate for a deathmatch game.” Largely at the behest of Cotton, always the more commercially-minded of the pair, they decided to make Kali shareware, just like DOOM itself: multiplayer sessions would be limited to fifteen minutes at a time until you coughed up a $20 registration fee. Cotton went through the logistics of setting up and running a business in Georgia while Coleman did most of the coding in Illinois. (Rather astonishingly, Cotton and Coleman had still never met one another face to face in 2013, when gaming historian David L. Craddock conducted an interview with them that has been an invaluable source of quotes and information for this article.)

Kali certainly wasn’t the only solution in this space; a commercial service called DWANGO had existed since December of 1994, with the direct backing of John Carmack and John Romero, whose company id Software collected 20 percent of its revenue in return for the endorsement. But DWANGO ran over old-fashioned direct-dial-up connections rather than the Internet, meaning you had to pay long-distance charges to use it if you weren’t lucky enough to live close to one of its host computers. On top of that, it charged $9 for just five hours of access per month, with the fees escalating from there. Kali, by contrast, was available to you forever for as many hours per month as you liked after you plunked down your one-time fee of $20.

So, Kali was popular right from its first release on April 26, 1995. Yet it was still an awkward piece of software for the casual user despite the duo’s best efforts, being tied to MS-DOS, whose support for TCP/IP relied on a creaky edifice of third-party tools. The arrival of Windows 95 was a godsend for Kali, as it was for computer gaming in general, making the hobby accessible in a way it had never been before. The so-called “Kali95” was available by early 1996, and things exploded from there. Kali struck countless gamers with all the force of a revelation; who would have dreamed that it could be so easy to play against another human online? Lloyd Case, for example, wrote in Computer Gaming World magazine that using Kali for the first time was “one of the most profound gaming experiences I’ve had in a long time.” Reminiscing seventeen years later, David L. Craddock described how “using Kali for the first time was like magic. Jumping into a game and playing with other people. It blew my fourteen-year-old mind.” In late 1996, the number of registered Kali users ticked past 50,000, even as quite possibly just as many or more were playing with cracked versions that bypassed the simplistic serial-number-registration process. First-person-shooter deathmatches abounded, but you could also play real-time strategies like Command & Conquer and Warcraft, or even the Links golf simulation. Computer Gaming World gave Kali a special year-end award for “Online-Enabling Technology.”

Kali for Windows 95.

Competitors were rushing in at a breakneck pace by this time, some of them far more conventionally “professional” than Kali, whose origin story was, as we’ve seen, as underground and organic as that of DOOM itself. The most prominent of the venture-capital-funded startups were MPlayer (co-founded by Brian Moriarty of Infocom and LucasArts fame, and employing Dani Bunten Berry as a consultant during the last months of her life) and the Total Entertainment Network, better known as simply TEN. In contrast to Kali’s one-time fee, they, like DWANGO before them, relied on subscription billing: $20 per month for MPlayer, $15 per month for TEN. Despite slick advertising and countless other advantages that Kali lacked, neither would ever come close to overtaking its scruffy older rival, which had price as well as oodles of grass-roots goodwill on its side. Jay Cotton:

It was always my belief that Kali would continue to be successful as long as I never got greedy. I wanted everyone to be so happy with their purchase that they would never hesitate to recommend it to a friend. [I would] never charge more than someone would be readily willing to pay. It also became a selling point that Kali only charged a one-time fee, with free upgrades forever. People really liked this, and it prevented newcomers (TEN, Heat [a service launched in 1997 by Sega of America], MPlayer, etc.) from being able to charge enough to pay for their expensive overheads.

Kali was able to compete with TEN, MPlayer, and Heat because it already had a large established user base (more users equals more fun) and because it was much, much cheaper. These new services wanted to charge a subscription fee, but didn’t provide enough added benefit to justify the added expense.

It was a heady rush indeed, although it would also prove a short-lived one; Kali’s competitors would all be out of business within a year or so of the turn of the millennium. Kali itself stuck around after that, but as a shadow of what it had been, strictly a place for old-timers to reminisce and play the old hits. “I keep it running just out of habit,” said Jay Cotton in 2013. “I make just enough money on website ads to pay for the server.” It still exists today, presumably as a result of the same force of habit.

One half of what Kali and its peers offered was all too obviously ephemeral from the start: as the Internet went mainstream, developers inevitably began building TCP/IP support right into their games, eliminating the need for an external IPX-to-TCP/IP bridge. (For example, Quake, id Software’s much-anticipated follow-up to DOOM, did just this when it finally arrived in 1996.) But the other half of what they offered was community, which may have seemed a more durable sort of benefit. As it happened, though, one clever studio did an end-run around them here as well.



The folks at Blizzard Entertainment, the small studio and publisher that was fast coming to rival id Software for the title of the hottest name in gaming, were enthusiastic supporters of Kali in the beginning, to the point of hand-tweaking Warcraft II, their mega-hit real-time strategy, to run optimally over the service. They were rewarded by seeing it surpass even DOOM to become the most popular game there of all. But as they were polishing their new action-CRPG Diablo for release in 1996, Mike O’Brien, a Blizzard programmer, suggested that they launch their own service that would do everything Kali did in terms of community, albeit for Blizzard’s games alone. And then he additionally suggested that they make it free, gambling that knowledge of its existence would sell enough games for them at retail to offset its maintenance costs. Blizzard’s unofficial motto had long been “Let’s be awesome,” reflecting their determination to sell exactly the games that real hardcore gamers were craving, honed to a perfect finish, and to always give them that little bit extra. What better way to be awesome than by letting their customers effortlessly play and socialize online, and to do so for free?

The idea was given an extra dollop of urgency by the fact that Westwood Studios, the maker of Warcraft‘s chief competitor Command & Conquer, had introduced a service called Westwood Chat that could launch people directly into a licensed version of Monopoly. (Shades of Dani Bunten Berry’s cherished childhood memories…) At the moment it supported only Monopoly, a title that appealed to a very different demographic from the hardcore crowd who favored Blizzard’s games, but who knew how long that would last? (Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but only after Blizzard had rolled out their own service.)

So, when Diablo shipped in the last week of 1996, it included something called Battle.net, a one-click chat and matchmaking service and multiplayer facilitator. Battle.net made everything easier than it had ever been before. It would even automatically patch your copy of the game to the latest version when you logged on, pioneering the “software as a service” model in gaming that has become everyday life in our current age of Steam. “It was so natural,” says Blizzard executive Max Schaefer. “You didn’t think about the fact that you were playing with a dude in Korea and a guy in Israel. It’s really a remarkable thing when you think about it. How often are people casually matched up in different parts of the world?” The answer to that question, of course, was “not very often” in the context of 1997. Today, it’s as normal as computers themselves, thanks to groundbreaking initiatives like this one. Blizzard programmer Jeff Strain:

We believed that in order for it [Battle.net] to really be embraced and adopted, that accessibility had to be there. The real catch for Battle.net was that it was inside-out rather than outside-in. You jumped right into the game. You connected players from within the game experience. You did not alt-tab off into a Web browser to set up your games and have the Web browser try to pass off information or something like that. It was a service designed from Day One to be built into actual games.

The combination of Diablo and Battle.net brought a new, more palpable sort of persistence to online gaming. Players of DOOM or Warcraft II might become known as hotshots on services like Kali, but their reputation conferred no tangible benefit once they entered a game session. A DOOM deathmatch or a Warcraft II battle was a one-and-done event that everyone started on an equal footing and exited again within an hour or so, with nothing but memories and perhaps bragging rights to show for what had transpired.

Diablo, however, was different. Although less narratively and systemically ambitious than many of its recent brethren, it was nevertheless a CRPG, a genre all about building up a character over many gaming sessions. Multiplayer Diablo retained this aspect: the first time you went online, you had to pick one of the three pre-made first-level characters to play, but after that you could keep bringing the same character back to session after session, with all of the skills and loot she had already collected. Suddenly the link between the real people in the chat rooms and their avatars that lived in the game proper was much more concrete. Many found it incredibly compelling. People started to assume the roles of their characters even when they were just hanging out in the chat rooms, started in some very real sense to live the game.

But it wasn’t all sunshine and roses. Battle.net became a breeding ground of the toxic behaviors that have continued to dog online gaming to this day, a social laboratory demonstrating what happens when you take a bunch of hyper-competitive, rambunctious young men and give them carte blanche to have at it any way they wish with virtual swords and spells. The service was soon awash with “griefers,” players who would join others on their adventures, ostensibly as their allies in the dungeon, then literally stab them in the back when they least expected it, killing their characters and running off with all of their hard-won loot. The experience could be downright traumatizing for the victims, who had thought they were joining up with friendly strangers simply to have fun together in a cool new game. “Going online and getting killed was so scarring,” acknowledges David Brevik, Diablo‘s original creator. “Those players are still feeling a little bit apprehensive.”

To make matters worse, many of the griefers were also cheaters. Diablo had been born and bred a single-player game; multiplayer had been a very late addition. This had major ramifications. Diablo stored all the information about the character you played online on your local hard drive rather than the Battle.net server. Learn how to modify this file, and you could create a veritable god for yourself in about ten minutes, instead of the dozens of hours it would take playing the honest way. “Trainers” — programs that could automatically do the necessary hacking for you — spread like wildfire across the Internet. Other folks learned to hack the game’s executable files themselves. Most infamously, they figured out ways to attack other players while they were still in the game’s above-ground town, supposedly a safe space reserved for shopping and healing. Battle.net as a whole took on a siege mentality, as people who wanted to play honorably and honestly learned to lock the masses out with passwords that they exchanged only with trusted friends. This worked after a fashion, but it was also a betrayal of the core premise and advantage of Battle.net, the ability to find a quick pick-up game anytime you wanted one. Yet there was nothing Blizzard could do about it without rewriting the whole game from the ground up. They would eventually do this — but they would call the end result Diablo II. In the meanwhile, it was a case of player beware.

It’s important to understand that, however much it resembled what would come later from a sociological perspective, multiplayer Diablo was still no more persistent than Moria and Oubliette had been on the old PLATO network: each player’s character was retained from session to session, but nothing about the state of the world. Each world, or instance of the game, could contain a maximum of four human players, and disappeared as soon as the last player left it, leaving as its legacy only the experience points and items its inhabitants had collected from it while it existed. Players could and did kill the demon Diablo, the sole goal of the single-player game, one that usually required ten hours or more of questing to achieve, over and over again in the online version. In this sense, multiplayer Diablo was a completely different game from single-player Diablo, replacing the simple quest narrative of the latter with a social meta-game of character-building and player-versus-player combat.

For lots and lots of people, this was lots and lots of fun; Diablo was hugely popular despite all of the exploits it permitted — indeed, for some players perchance, because of them. It became one of the biggest computer games of the 1990s, bringing online gaming to the masses in a way that even Kali had never managed. Yet there was still a ways to go to reach total persistence, to bring a permanent virtual world to life. Next time, then, we’ll see how mainstream commercial games of the 1990s sought to achieve a degree of persistence that the first MUD could boast of already in 1979. These latest virtual worlds, however, would attempt to do so with all the bells and whistles and audiovisual niceties that a new generation of gamers raised on multimedia and 3D graphics demanded. An old dog in the CRPG space was about to learn a new trick, creating in the process a new gaming acronym that’s even more of a mouthful than POMG.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.


Sources: the books Stay Awhile and Listen Volumes 1 and 2 by David L. Craddock, Masters of Doom by David Kushner, and The Friendly Orange Glow by Brian Dear; Retro Gamer 43, 90, and 103; Computer Gaming World of September 1996 and May 1997; Next Generation of March 1997. Online sources include “The Story of Battle.net” by Wes Fenlon at PC Gamer, Dan Griliopoulos’s collection of interviews about Command & Conquer, Brian Moriarty’s speech honoring Dani Bunten Berry from the 1998 Game Developers Conference, and Jay Cotton’s history of Kali on the DOOM II fan site. Plus some posts on The CRPG Addict, to which I’ve linked in the article proper.

Footnotes

1 The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name.
2 Not the same game as the 2002 Bioware CRPG of the same name.
3 Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992.
4 Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.

The Next Generation in Graphics, Part 3: Software Meets Hardware

The first finished devices to ship with the 3Dfx Voodoo chipset inside them were not add-on boards for personal computers, but rather standup arcade machines. That venerable segment of the videogames industry was enjoying its last lease on life in the mid-1990s; this was the last era when the graphics of the arcade machines were sufficiently better than those which home computers and consoles could generate as to make it worth getting up off the couch, driving into town, and dropping a quarter or two into a slot to see them. The Voodoo chips now became part and parcel of that, ironically just before they would do much to destroy the arcade market by bringing equally high-quality 3D graphics into homes. For now, though, they wowed players of arcade games like San Francisco Rush: Extreme Racing, Wayne Gretzky’s 3D Hockey, and NFL Blitz.

Still, Gary Tarolli, Scott Sellers, and Ross Smith were most excited by the potential of the add-on-board market. All too well aware of how the chicken-or-the-egg deadlock between game makers and players had doomed their earlier efforts with Pellucid and Media Vision, they launched an all-out charm offensive among game developers long before they had any actual hardware to show them. Smith goes so far as to call “connecting with the developers early on and evangelizing them” the “single most important thing we ever did” — more important, that is to say, than designing the Voodoo chips themselves, impressive as they were. Throughout 1995, somebody from 3Dfx was guaranteed to be present wherever developers got together to talk among themselves. While these evangelizers had no hardware as yet, they did have software simulations running on SGI workstations — simulations which, they promised, duplicated exactly the capabilities the real chips would have when they started arriving in quantity from Taiwan.

Our core trio realized early on that their task must involve software as much as hardware in another, more enduring sense: they had to make it as easy as possible to support the Voodoo chipset. In my previous article, I mentioned how their old employer SGI had created an open-source software library for 3D graphics, known as OpenGL. A team of programmers from 3Dfx now took this as the starting point of a slimmed-down, ultra-optimized MS-DOS library they called GLide; whereas OpenGL sported well over 300 individual function calls, GLide had fewer than 100. It was fast, it was lightweight, and it was easy to program. They had good reason to be proud of it. Its only drawback was that it would only work with the Voodoo chips — which was not necessarily a drawback at all in the eyes of its creators, given that they hoped and planned to dominate a thriving future market for hardware-accelerated 3D graphics on personal computers.

Yet that domination was by no means assured, for they were far from the only ones developing consumer-oriented 3D chipsets. One other company in particular gave every indication of being on the inside track to widespread acceptance. That company was Rendition, another small, venture-capital-funded startup that was doing all of the same things 3Dfx was doing — only Rendition had gotten started even earlier. It had actually been Rendition who announced a 3D chipset first, and they had been evangelizing it ever since every bit as tirelessly as 3Dfx.

The Voodoo chipset was technologically baroque in comparison to Rendition’s chips, which went under the name of Vérité. This meant that Voodoo should easily outperform them — eventually, once all of the logistics of East Asian chip fabricating had been dealt with and deals had been signed with board makers. In June of 1996, when the first Vérité-powered boards shipped, the Voodoo chipset quite literally didn’t exist as far as consumers were concerned. Those first Vérité boards were made by none other than Creative Labs, the 800-pound gorilla of the home-computer add-on market, maker of the ubiquitous Sound Blaster sound cards and many a “multimedia upgrade kit.” Such a partner must be counted as yet another early coup for Rendition.

The Vérité cards were followed by a flood of others whose slickly aggressive names belied their somewhat workmanlike designs: 3D Labs Permedia, S3 Virge, ATI 3D Rage, Matrox Mystique. And still Voodoo was nowhere.

What was everywhere was confusion; it was all but impossible for the poor, benighted gamer to make heads or tails of the situation. None of these chipsets were compatible with one another at the hardware level in the way that 2D graphics cards were; there were no hardware standards for 3D graphics akin to VGA, that last legacy of IBM’s era of dominance, much less the various SVGA standards defined by the Video Electronics Standards Association (VESA). Given that most action-oriented computer games still ran on MS-DOS, this was a serious problem.

For, being more of a collection of basic function calls than a proper operating system, MS-DOS was not known for its hardware agnosticism. Most of the folks making 3D chips did provide an MS-DOS software package for steering them, similar in concept to 3Dfx’s GLide, if seldom as optimized and elegant. But, just like GLide, such libraries worked only with the chipset for which they had been created. What was sorely needed was an intermediate layer of software to sit between games and the chipset-manufacturer-provided libraries, to automatically translate generic function calls into forms suitable for whatever particular chipset happened to exist on that particular computer. This alone could make it possible for one build of one game to run on multiple 3D chipsets. Yet such a level of hardware abstraction was far beyond the capabilities of bare-bones MS-DOS.

Absent a more reasonable solution, the only choice was to make separate versions of games for each of the various 3D chipsets. And so began the brief-lived, unlamented era of the 3D pack-in game. All of the 3D-hardware manufacturers courted the developers and publishers of popular software-rendered 3D games, dangling before them all sorts of enticements to create special versions that took advantage of their cards, more often than not to be included right in the box with them. Activision’s hugely successful giant-robot-fighting game MechWarrior 2 became the king of the pack-ins, with at least half a dozen different chipset-specific versions floating around, all paid for upfront by the board makers in cold, hard cash. (Whatever else can be said about him, Bobby Kotick has always been able to spot the seams in the gaming market where gold is waiting to be mined.)

It was an absurd, untenable situation; the game or games that came in the box were the only ones that the purchasers of some of the also-ran 3D contenders ever got a chance to play with their new toys. Gamers and chipset makers alike could only hope that, once Windows replaced MS-DOS as the gaming standard, their pain would go away.

In the meanwhile, the games studio that everyone with an interest in the 3D-acceleration sweepstakes was courting most of all was id Software — more specifically, id’s founder and tech guru, gaming’s anointed Master of 3D Algorithms, John Carmack. They all begged him for a version of Quake for their chipset.

And once again, it was Rendition that scored the early coup here. Carmack actually shared some of the Quake source code with them well before either the finished game or the finished Vérité chipset was available for purchase. Programmed by a pair of Rendition’s own staffers working with the advice and support of Carmack and Michael Abrash, the Vérité-rendered version of the game, commonly known as vQuake, came out very shortly after the software-rendered version. Carmack called it “the premier platform for Quake” — truly marketing copy to die for. Gamers too agreed that 3D acceleration made the original’s amazing graphics that much more amazing, while the makers of other 3D chipsets gnashed their teeth and seethed.

Quake with software rendering.

vQuake

Among these, of course, was the tardy 3Dfx. The first Voodoo cards appeared late, seemingly hopelessly so: well into the fall of 1996. Nor did they have the prestige and distribution muscle of a partner like Creative Labs behind them: the first two Voodoo boards rather came from smaller firms by the names of Diamond and Orchid. They sold for $300, putting them well up at the pricey end of the market —  and, unlike all of the competition’s cards, these required you to have another, 2D-graphics card in your computer as well. For all of these reasons, they seemed easy enough to dismiss as overpriced white elephants at first blush. But that impression lasted only until you got a look at them in action. The Voodoo cards came complete with a list of features that none of the competition could come close to matching in the aggregate: bilinear filtering, trilinear MIP-mapping, alpha blending, fog effects, accelerated light sources. If you don’t know what those terms mean, rest assured that they made games look better and play faster than anything else on the market. This was amply demonstrated by those first Voodoo boards’ pack-in title, an otherwise rather undistinguished, typical-of-its-time shooter called Hellbender. In its new incarnation, it suddenly looked stunning.

The Orchid Righteous 3D card, one of the first two to use the Voodoo chipset. (The only consumer category as fond of bro-dude phraseology like “extreme” and “righteous” as the makers of 3D cards was men’s razors.)

The battle lines were drawn between Rendition and 3Dfx. But sadly for the former, it quickly emerged that their chipset had one especially devastating weakness in comparison to its rival: its Z-buffering support left much to be desired. And what, you ask, is Z-buffering? Read on!

One of the non-obvious problems that 3D-graphics systems must solve is the need for objects in the foreground of a scene to realistically obscure those behind them. If, at the rendering stage, we were to simply draw the objects in whatever random order they came to us, we would wind up with a dog’s breakfast of overlapping shapes. We need to have a way of depth-sorting the objects if we want to end up with a coherent, correctly rendered scene.

The most straightforward way of depth-sorting is called the Painter’s Algorithm, because it duplicates the process a human artist usually goes through to paint a picture. Let’s say our artist wants to paint a still life of an apple sitting in front of a basket of other fruits. First she will paint the basket to her satisfaction, then paint the apple right over the top of it. Similarly, when we use a Painter’s Algorithm on the computer, we first sort the whole collection of objects into a hierarchy that begins with those that are farthest from our virtual camera and ends with those closest to it. Only after this has been done do we set about the task of actually drawing them to the screen, in our sorted order from the farthest away to the closest. And so we end up with a correctly rendered image.
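The sort-then-draw process described above can be sketched in a few lines of code. This is an illustrative toy, not anything from the article or from real rendering hardware; the object list and `draw` callback are hypothetical stand-ins for a real scene and rasterizer.

```python
# A minimal sketch of the Painter's Algorithm: sort the whole scene by
# depth first, then draw from farthest to nearest, letting closer
# objects simply paint over farther ones.

def painters_algorithm(objects, draw):
    """Draw objects back-to-front so nearer objects overwrite farther ones."""
    # The costly step: a full depth sort must finish before any drawing begins.
    for obj in sorted(objects, key=lambda o: o["depth"], reverse=True):
        draw(obj)

# Our still life: an apple (near) in front of a basket (far).
canvas = []
painters_algorithm(
    [{"name": "apple", "depth": 2.0}, {"name": "basket", "depth": 5.0}],
    lambda o: canvas.append(o["name"]),
)
# The basket is painted first, then the apple over the top of it.
```

Note that the sort touches every object in the scene before a single pixel is drawn, which is exactly the stall described below.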

But, as so often happens in matters like this, the most logically straightforward way is far from the most efficient way of depth-sorting a 3D scene. When the number of objects involved is few, the Painter’s Algorithm works reasonably well. When the numbers get into the hundreds or thousands, however, it results in much wasted effort, as the computer ends up drawing objects that are completely obscured by other objects in front of them — i.e., objects that don’t really need to be drawn at all. Even more importantly, the process of sorting all of the objects by depth beforehand is painfully time-consuming, a speed bump that stops the rendering process dead until it is completed. Even in the 1990s, when their technology was in a laughably primitive stage compared to today, GPUs tended to emphasize parallel processing — i.e., staying constantly busy with multiple tasks at the same time. The necessity of sorting every object in a scene by depth before even getting properly started on rendering it rather threw all that out the window.

Enter the Z-buffer. Under this approach, every object is rendered right away as soon as it comes down the pipeline, used to build the appropriate part of the raster of colored pixels that, once completed, will be sent to the monitor screen as a single frame. But there comes an additional wrinkle in the form of the Z-buffer itself: a separate, parallel raster containing not the color of each pixel but its distance from the camera. Before the GPU adds an entry to the raster of pixel colors, it compares the distance of that pixel from the camera with the number in that location in the Z-buffer. If the current distance is less than the one already found there, it knows that the pixel in question should be overwritten in the main raster and that the Z-buffer raster should be updated with that pixel’s new distance from the camera. Ditto if the Z-buffer contains a null value, indicating no object has yet been drawn at that pixel. But if the current distance is larger than the (non-null) number already found there, the GPU simply moves on without doing anything more, confident in the knowledge that what it had wanted to draw should actually be hidden by what it has already drawn.
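The per-pixel test just described can be sketched as follows. This is an illustrative software model under stated assumptions (a tiny frame, `None` as the null depth value), not the actual logic of the Voodoo or Vérité hardware.

```python
# A minimal sketch of Z-buffering: pixels arrive in arbitrary order,
# with no up-front sorting. Each write is gated by a depth comparison
# against a parallel raster of per-pixel distances (the Z-buffer).

def plot(x, y, color, depth, frame, zbuf):
    """Write a pixel only if it is closer than whatever is already there."""
    if zbuf[y][x] is None or depth < zbuf[y][x]:
        frame[y][x] = color   # overwrite the color raster...
        zbuf[y][x] = depth    # ...and record the new, nearer depth

W, H = 2, 2
frame = [[None] * W for _ in range(H)]
zbuf = [[None] * W for _ in range(H)]   # None = nothing drawn here yet

plot(0, 0, "basket", 5.0, frame, zbuf)  # far object drawn first
plot(0, 0, "apple", 2.0, frame, zbuf)   # closer object overwrites it
plot(0, 0, "wall", 9.0, frame, zbuf)    # farther object is simply discarded
```

After those three writes, the pixel holds "apple", the nearest surface, regardless of the order in which the objects arrived — which is the whole point.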

There are plenty of occasions when the same pixel is drawn over twice — or many times — before reaching the screen even under this scheme, but it is nevertheless still vastly more efficient than the Painter’s Algorithm, because it keeps objects flowing through the pipeline steadily, with no hiccups caused by lengthy sorting operations. Z-buffering support was reportedly a last-minute addition to the Vérité chipset, and it showed. Turning depth-sorting on for 100-percent realistic rendering on these chips cut their throughput almost in half; the Voodoo chipset, by contrast, just said, “No worries!,” and kept right on trucking. This was an advantage of titanic proportions. It eventually emerged that the programmers at Rendition had been able to get Quake running acceptably on the Vérité chips only by kludging together their own depth-sorting algorithms in software. With Voodoo, programmers wouldn’t have to waste time with stuff like that.

But surprisingly, the game that blew open the doors for the Voodoo chipset wasn’t Quake or anything else from id. It was rather a little something called Tomb Raider, from the British studio Core Design, a game which used a behind-the-back third-person perspective rather than the more typical first-person view — the better to appreciate its protagonist, the buxom and acrobatic female archaeologist Lara Croft. In addition to Lara’s considerable assets, Tomb Raider attracted gamers with its unprecedentedly huge and wide-open 3D environments. (It will be the subject of my next article, for those interested in reading more about its massive commercial profile and somewhat controversial legacy.)

In November of 1996, when Tomb Raider had been out for less than a month, Core put a Voodoo patch for it up on their website. Gamers were blown away. “It’s a totally new game!” gushed one on Usenet. “It was playable but a little jerky without the patch, but silky smooth to play and beautiful to look at with the patch.” “The level of detail you get with the Voodoo chip is amazing!” enthused another. Or how about this for a ringing testimonial?

I had been playing the regular Tomb Raider on my PC for about two weeks before I got the patch, with about ten people seeing the game, and not really saying anything regarding how amazing it was. When I got the accelerated patch, after about four days, every single person who has seen the game has been in awe watching the graphics and how smooth [and] lifelike the movement is. The feel is different, you can see things much more clearly, it’s just a more enjoyable game now.

Tomb Raider became the biggest hit of the 1996 holiday season, and tens if not hundreds of thousands of Voodoo-based 3D cards joined it under Christmas trees.

Tomb Raider with software rendering.

Tomb Raider with a Voodoo card.

In January of 1997, id released GLQuake, a new version of that game that supported the Voodoo chipset. In telling contrast to the Vérité-powered vQuake, which had been coded by Rendition’s programmers, GLQuake had been taken on by John Carmack as a personal project. The proof was in the pudding; this Quake ran faster and looked better than either of the previous ones. Running on a machine with a 200 MHz Intel Pentium processor and a Voodoo card, GLQuake could manage 70 frames per second, compared to 41 frames for the software-rendered version, whilst appearing much more realistic and less pixelated.

GLQuake

One last stroke of luck put the finishing touch on 3Dfx’s destiny of world domination: the price of memory dropped precipitously, thanks to a number of new RAM-chip factories that came online all at once in East Asia. (The factories had been built largely to feed the memory demands of Windows 95, the straw that was stirring the drink of the entire computer industry.) The Voodoo chipset required 4 MB of memory to operate effectively — an appreciable quantity in those days, and a big reason why the cards that used it tended to cost almost twice as much as those based on the Vérité chips, despite lacking the added complications and expense of 2D support. But with the drop in memory prices, it suddenly became practical to sell a Voodoo card for under $200. Rendition could also lower their prices somewhat thanks to the memory windfall, of course, but at these lower price points the dollar difference wasn’t as damaging to 3Dfx. After all, the Voodoo cards were universally acknowledged to be the class of the industry. They were surely worth paying a little bit of a premium for. By the middle of 1997, the Voodoo chipset was everywhere, the Vérité one left dead at the side of the road. “If you want full support for a gamut of games, you need to get a 3Dfx card,” wrote Computer Gaming World.

These were heady times at 3Dfx, which had become almost overnight the most hallowed name in hardcore action gaming outside of id Software, all whilst making an order of magnitude more money than id, whose business model under John Carmack was hardly fine-tuned to maximize revenues. In a comment he left recently on this site, reader Captain Kal said that, when it comes to 3D gaming in the late 1990s, “one company springs to my mind without even thinking: 3Dfx. Yes, we also had 3D solutions from ATI, NVIDIA, or even S3, but Voodoo cards created the kind of dedication that I hadn’t seen since the Amiga days.” The comparison strikes me as thoroughly apropos.

3Dfx brought in a high-profile CEO named Greg Ballard, formerly of Warner Music and the videogame giant Capcom, to oversee a smashingly successful initial public offering in June of 1997. He and the three thirty-something founders were the oldest people at the company. “Most of the software engineers were [in their] early twenties, gamers through and through, loved games,” says Scott Sellers. “Would code during the day and play games at night. It was a culture of fun.” Their offices stood at the eighth hole of a golf course in Sunnyvale, California. “We’d sit out there and drink beer,” says Ross Smith. “And you’d have to dodge incoming golf balls a bit. But the culture was great.” Every time he came down for a visit, says their investing angel Gordon Campbell,

they’d show you something new, a new demo, a new mapping technique. There was always something. It was a very creative environment. The work hard and play hard thing, that to me kind of was Silicon Valley. You went out and socialized with your crew and had beer fests and did all that kind of stuff. And a friendly environment where everybody knew everybody and everybody was not in a hierarchy so much as part of the group or the team.

I think the thing that was added here was, it’s the gaming industry. And that was a whole new twist on it. I mean, if you go to the trade shows, you’d have guys that would show up at our booth with Dracula capes and pointed teeth. I mean, it was just crazy.

Gary Tarolli, Scott Sellers, and Greg Ballard do battle with a dangerous houseplant. The 1990s were wild and crazy times, kids…

While the folks at 3Dfx were working hard and playing hard, an enormously consequential advancement in the field of software was on the verge of transforming the computer-games industry. As I noted previously, in 1996 most hardcore action games were still being released for MS-DOS. In 1997, however, that changed in a big way. With the exception of only a few straggling Luddites, game developers switched over to Windows 95 en masse. Quake had been an MS-DOS game; Quake II, which would ship at the end of 1997, ran under Windows. The same held true for the original Tomb Raider and its 1997 sequel, as it did for countless others.

Gaming was made possible on Windows 95 by Microsoft’s DirectX libraries, which finally let programmers do everything in Windows that they had once done in MS-DOS, with only a slight speed penalty if any, all while giving them the welcome luxury of hardware independence. That is to say, all of the fiddly details of disparate video and sound cards and all the rest were abstracted away into Windows device drivers that communicated automatically with DirectX to do the needful. It was an enormous burden lifted off of developers’ shoulders. Ditto gamers, who no longer had to futz about for hours with cryptic “autoexec.bat” and “config.sys” files, searching out the exact combination of arcane incantations that would allow each game they bought to run optimally on their precise machine. One no longer needed to be a tech-head simply to install a game.

In its original release of September 1995, the full DirectX suite consisted of DirectDraw for 2D pixel graphics, DirectSound for sound and music, DirectInput for managing joysticks and other game-centric input devices, and DirectPlay for networked multiplayer gaming. It provided no support for doing 3D graphics. But never fear, Microsoft said: 3D support was coming. Already in February of 1995, they had purchased a British company called RenderMorphics, the creator of Reality Lab, a hardware-agnostic 3D library. As promised, Microsoft added Direct3D to the DirectX collection with the latter’s 2.0 release, in June of 1996.

But, as the noted computer scientist Andrew Tanenbaum once said, “the nice thing about standards is that you have so many to choose from.” For the next several years, Direct3D would compete with another library serving the same purpose: a complete, hardware-agnostic Windows port of SGI’s OpenGL, whose most prominent booster was no less leading a light than John Carmack. Direct3D would largely win out in the end among game developers despite Carmack’s endorsement of its rival, but we need not concern ourselves overmuch with the details of that tempest in a teacup here. Suffice to say that even the most bitter partisans on one side of the divide or the other could usually agree that both Direct3D and OpenGL were vastly preferable to the bad old days of chipset-specific 3D games.

Unfortunately for them, 3Dfx, rather feeling their oats after all of their success, made in response to these developments the first of a series of bad decisions that would cause their time at the top of the 3D-graphics heap to be a relatively short one.

Like all of the others, the Voodoo chipset could be used under Windows with either Direct3D or OpenGL. But there were some features on the Voodoo chips that the current implementations of those libraries didn’t support. 3Dfx was worried, reasonably enough on the face of it, about a “least-common-denominator effect” which would cancel out the very real advantages of their 3D chipset and make one example of the breed more or less as good as any other. However, instead of working with the folks behind Direct3D and OpenGL to get support for the Voodoo chips’ special features into those libraries, they opted to release a Windows version of GLide, and to strongly encourage game developers to keep working with it instead of either of the more hardware-agnostic alternatives. “You don’t want to just have a title 80 percent as good as it could be because your competitors are all going to be at 100 percent,” they said pointedly. They went so far as to start speaking of Voodoo-equipped machines as a whole new platform unto themselves, separate from more plebeian personal computers.

It was the talk and actions of a company that had begun to take its own press releases a bit too much to heart. But for a time 3Dfx got away with it. Developers coded for GLide in addition to or instead of Direct3D or OpenGL, because you really could do a lot more with it and because the cachet of the “certified” 3Dfx logo that using GLide allowed them to put on their boxes really was huge.

In March of 1998, the first cards with a new 3Dfx chipset, known as Voodoo2, began to appear. Voodoo2 boasted twice the overall throughput of its predecessor, and could handle a screen resolution of 800 × 600 instead of just 640 × 480; you could even join two of the new cards together to get even better performance and higher resolutions. This latest chipset only seemed to cement 3Dfx’s position as the class of their field.

The bottom line reflected this. 3Dfx was, in the words of their new CEO Greg Ballard, “a rocket ship.” In 1995, they earned $4 million in revenue; in 1996, $44 million; in 1997, $210 million; and in 1998, their peak year, $450 million. And yet their laser focus on selling the Ferraris of 3D acceleration was blinding Ballard and his colleagues to the potential of 3D Toyotas, where the biggest money of all was waiting to be made.

Over the course of the second half of the 1990s, 3D GPUs went from being exotic pieces of kit known only to hardcore gamers to being just another piece of commodity hardware found in almost all computers. 3Dfx had nothing to do with this significant shift. Instead they all but ignored this so-called “OEM” (“Original Equipment Manufacturer”) side of the GPU equation: chipsets that weren’t the hottest or the sexiest on the market, but that were cheap and easy to solder right onto the motherboards of low-end and mid-range machines bearing such unsexy name plates as Compaq and Packard Bell. Ironically, Gordon Campbell had made a fortune with Chips & Technologies selling just such commodity-grade 2D graphics chipsets. But 3Dfx was obstinately determined to fly above the OEM segment, determined to offer “premium” products only. “It doesn’t matter if 20 million people have one of our competitors’ chips,” said Scott Sellers in 1997. “How many of those people are hardcore gamers? How many of those people are buying games?” “I can guarantee that 100 percent of 3Dfx owners are buying games,” chimed in a self-satisfied-sounding Gary Tarolli.

The obvious question to ask in response was why it should matter to 3Dfx how many games — or what types of games — the users of their chips were buying, as long as they were buying gadgets that contained their chips. While 3Dfx basked in their status as the hardcore gamer’s favorite, other companies were selling many more 3D chips, admittedly at much less of a profit on a chip-per-chip basis, at the OEM end of the market. Among these was a firm known as NVIDIA, which had been founded on the back of a napkin in a Denny’s diner in 1993. NVIDIA’s first attempt to compete head to head with 3Dfx at the high end was underwhelming at best: released well after the Voodoo2 chipset, the RIVA TNT ran so hot that it required a noisy onboard cooling fan, and yet still couldn’t match the Voodoo2’s performance. By that time, however, NVIDIA was already building a lucrative business out of cheaper, simpler chips on the OEM side, even as they were gaining the wisdom they would need to mount a more credible assault on the hardcore-gamer market. In late 1998, 3Dfx finally seemed to be waking up to the fact that they would need to reach beyond the hardcore to continue their rise, when they released a new chipset called Voodoo Banshee which wasn’t quite as powerful as the Voodoo2 chips but could do conventional 2D as well as 3D graphics, meaning its owners would not be forced to buy a second video card just in order to use their computers.

But sadly, they followed this step forward with an absolutely disastrous mistake. You’ll remember that prior to this point 3Dfx had sold their chips only to other companies, who then incorporated them into add-on boards of their own design, in the same way that Intel sold microprocessors to computer makers rather than directly to consumers (aside from the build-your-own-rig hobbyists, that is). This business model had made sense for 3Dfx when they were cash-strapped and hadn’t a hope of building retail-distribution channels equal to those of the established board makers. Now, though, they were flush with cash, and enjoyed far better name recognition than the companies that made the boards which used their chips; even the likes of Creative Labs, who had long since dropped Rendition and were now selling plenty of 3Dfx boards, couldn’t touch them in terms of prestige. Why not cut out all these middlemen by manufacturing their own boards using their own chips and selling them directly to consumers with only the 3Dfx name on the box? They decided to do exactly that with their third state-of-the-art 3D chipset, the predictably named Voodoo3, which was ready in the spring of 1999.

Those famous last words apply: “It seemed like a good idea at the time.” With the benefit of hindsight, we can see all too clearly what a terrible decision it actually was. The move into the board market became, says Scott Sellers, the “anchor” that would drag down the whole company in a rather breathtakingly short span of time: “We started competing with what used to be our own customers” — i.e., the makers of all those earlier Voodoo boards. Then, too, 3Dfx found that the logistics of selling a polished consumer product at retail, from manufacturing to distribution to advertising, were much more complex than they had reckoned with.

Still, they might — just might — have been able to figure it all out and make it work, if only the Voodoo3 chipset had been a bit better. As it was, it was an upgrade to be sure, but not quite as much of one as everyone had been expecting. In fact, some began to point out now that even the Voodoo2 chips hadn’t been that great a leap: they too were better than their predecessors, yes, but that was more down to ever-falling memory prices and ever-improving chip-fabrication technologies than any groundbreaking innovations in their fundamental designs. It seemed that 3Dfx had started to grow complacent some time ago.

NVIDIA saw their opening and made the most of it. They introduced a new line of their own, called the TNT2, which outdid its 3Dfx competitor in at least one key metric: it could do 24-bit color, giving it almost 17 million shades of onscreen nuance, compared to just over 65,000 in the case of Voodoo3. For the first time, 3Dfx’s chips were not the unqualified, undisputed technological leaders. To make matters worse, NVIDIA had been working closely with Microsoft in exactly the way that 3Dfx had never found it in their hearts to do, ensuring that every last feature of their chips was well-supported by the increasingly dominant Direct3D libraries.

And then, as the final nail in the coffin, there were all those third-party board makers 3Dfx had so rudely jilted when they decided to take over that side of the business themselves. These had nowhere left to go but into NVIDIA’s welcoming arms. And needless to say, these business partners spurned were highly motivated to make 3Dfx pay for their betrayal.

NVIDIA was on a roll now. They soon came out with yet another new chipset, the GeForce 256, which had a “Transform & Lighting” (T&L) engine built in, a major conceptual advance. And again, the new technology was accessible right from the start through Direct3D, thanks to NVIDIA’s tight relationship with Microsoft. Meanwhile the 3Dfx chips still needed GLide to perform at their best. With those chips’ sales now plummeting, more and more game developers decided the oddball library just wasn’t worth the trouble anymore. By the end of 1999, a 3Dfx death spiral that absolutely no one had seen coming at the start of the year was already well along. NVIDIA was rapidly sewing up both the high end and the low end, leaving 3Dfx with nothing.

In 2000, NVIDIA continued to go from strength to strength. Their biggest challenger at the hardcore-gamer level that year was not 3Dfx, but rather ATI, who arrived on the scene with a new architecture known as Radeon. 3Dfx attempted to right the ship with a two-pronged approach: a Voodoo4 chipset aimed at the long-neglected budget market, and a Voodoo5 aimed at the high end. Both had potential, but the company was badly strapped for cash by now, and couldn’t afford to give them the launch they deserved. In December of 2000, 3Dfx announced that they had agreed to sell out to NVIDIA, who thought they had spotted some bits and bobs in their more recent chips that they might be able to make use of. And that, as they say, was that.

3Dfx was a brief-burning comet by any standard, a company which did everything right up to the instant when someone somewhere flipped a switch and it suddenly started doing everything wrong instead. But whatever regrets Gary Tarolli, Scott Sellers, and Ross Smith may have about the way it all turned out, they can rest secure in the knowledge that they changed not just gaming but computing in general forever. Their vanquisher NVIDIA had revenues of almost $27 billion last year, on the strength of GPUs which are as far beyond the original Voodoo chips as an F-35 is beyond the Wright Brothers’ flier, and which are at the forefront not just of 3D graphics but of a whole new trend toward “massively parallel” computing.

And yet even today, the 3Dfx name and logo can still send a little tingle of excitement running down the spines of gamers of a certain age, just as that of the Amiga can among some just slightly older. For a brief few years there, over the course of one of the most febrile, chaotic, and yet exciting periods in all of gaming history, having a Voodoo card in your computer meant that you had the best graphics money could buy. Most of us wouldn’t want to go back to the days of needing to constantly tinker with the innards of our computers, of dropping hundreds of dollars on the latest and the greatest and hoping that publishers would still be supporting it in six months, of poring over magazines trying to make sense of long lists of arcane bullet points that seemed like fragments of a particularly esoteric PhD thesis (largely because they originally were). No, we wouldn’t want to go back; those days were kind of ridiculous. But that doesn’t mean we can’t look back and smile at the extraordinary technological progression we were privileged to witness over such a disarmingly short period of time.


(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke. Computer Gaming World of November 1995, January 1996, July 1996, November 1996, December 1996, September 1997, October 1997, November 1997, and April 1998; Next Generation of October 1997 and January 1998; Atomic of June 2003; Game Developer of December 1996/January 1997 and February/March 1997. Online sources include “3Dfx and Voodoo Graphics — The Technologies Within” at The Overclocker, former 3Dfx CEO Greg Ballard’s lecture for Stanford’s Entrepreneurial Thought Leader series, the Computer History Museum’s “oral history” with the founders of 3Dfx, Fabian Sanglard’s reconstruction of the workings of the Vérité chipset and the Voodoo 1 chipset, “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site, and “A Fallen Titan’s Final Glory” by Joel Hruska at the long-defunct Sudhian Media. Also, the Usenet discussions that followed the release of the 3Dfx patch for Tomb Raider and Nicol Bolas’s crazily detailed reply to the Stack Exchange question “Why Do Game Developers Prefer Windows?”.)


The Next Generation in Graphics, Part 1: Three Dimensions in Software (or, Quake and Its Discontents)

“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.” The same might be said about the mathematical algorithms we choose to prioritize — especially in these modern times, when the right set of formulas can be worth many millions of dollars, can be trade secrets as jealously guarded as the recipes for Coca-Cola or McDonald’s Special Sauce.

We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

It is a rule in almost all kinds of engineering that, the more specialized a device is, the more efficiently it can perform the tasks that lie within its limited sphere. This rule holds true as much in computing as anywhere else. So, when software proved able to stretch only so far in the face of the limited general-purpose computing power of the day, some started to build their video codecs into specialized hardware add-ons.

Just a few years later, after the zeitgeist in games had shifted, the whole process repeated itself in a different context.

By the middle years of the decade, with the limitations of working with canned video clips becoming all too plain, interactive movies were beginning to look like a severe case of the emperor’s new clothes. The games industry therefore shifted its hopeful gaze to another approach, one that would prove a much more lasting transformation in the way games were made. This 3D Revolution did have one point of similarity with the mooted and then abandoned meeting of Silicon Valley and Hollywood: it too was driven by algorithms, implemented first in software and then in hardware.

It was different, however, in that the entire industry looked to one man to lead it into its algorithmic 3D future. That man’s name was John Carmack.



Whether they happen to be pixel art hand-drawn by human artists or video footage captured by cameras, 2D graphics already exist on disk before they appear on the monitor screen. And therein lies the source of their limitations. Clever programmers can manipulate them to some extent — pixel art generally more so than digitized video — but the possibilities are bounded by the fundamentally static nature of the source material. 3D graphics, however, are literally drawn by the computer. They can go anywhere and do just about anything. For, while 2D graphics are stored as a concrete grid of pixels, 3D graphics are described using only the abstract language of mathematics — a language able to describe not just a scene but an entire world, assuming you have a powerful enough computer running a good enough algorithm.

Like so many things that get really complicated really quickly, the basic concepts of 3D graphics are disarmingly simple. The process behind them can be divided into two phases: the modeling phase and the rendering, or rasterization, phase.

It all begins with simple two-dimensional shapes of the sort we all remember from middle-school geometry, each defined as a collection of points on a plane and straight lines connecting them together. By combining and arranging these two-dimensional shapes, or surfaces, together in three-dimensional space, we can make solids — or, in the language of computerized 3D graphics, objects.

Here we see how 3D objects can be made ever more complex by building them out of ever more surfaces. The trade-off is that more complex objects require more computing power to render in a timely fashion.
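In code, an object of this kind usually boils down to nothing more than a list of shared corner points, or vertices, plus a list of surfaces that reference them by index. A minimal sketch in Python (the layout is illustrative, not taken from any particular engine):

```python
# A cube modeled the way 3D engines typically store objects:
# eight shared vertices, and six surfaces that index into them.
vertices = [
    (-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),  # four back corners
    (-1, -1,  1), (1, -1,  1), (1, 1,  1), (-1, 1,  1),  # four front corners
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # back and front faces
    (0, 1, 5, 4), (3, 2, 6, 7),  # bottom and top faces
    (0, 3, 7, 4), (1, 2, 6, 5),  # left and right faces
]

# Sharing vertices keeps the model compact: listing every face's
# corners separately would take 24 points instead of 8.
```

Splitting each quadrilateral into two triangles, as real engines do, changes nothing conceptually; a triangle is simply the simplest surface that is guaranteed to be flat.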

Once we have a collection of objects, we can put them into a world space, wherever we like and at whatever angle of orientation we like. This world space is laid out as a three-dimensional grid, with its point of origin — i.e., the point where X, Y, and Z are all zero — wherever we wish it to be. In addition to our objects, we also place within it a camera — or, if you like, an observer in our world — at whatever position and angle of orientation we wish. At their simplest, 3D graphics require nothing more at the modeling phase.
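Placing an object, or the camera, into the world space amounts to applying a rotation and a translation to each of its points; looking through the camera amounts to undoing the camera’s own placement. A rough sketch of both steps in Python, restricted to rotation around the vertical axis for brevity (the function names are my own, not any engine’s):

```python
import math

def rotate_y(point, angle):
    """Spin a point around the vertical (Y) axis by an angle in radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, c * z - s * x)

def to_world(point, position, heading):
    """Place an object-space point into the world: orient it, then move it."""
    x, y, z = rotate_y(point, heading)
    px, py, pz = position
    return (x + px, y + py, z + pz)

def to_camera(point, cam_position, cam_heading):
    """Re-express a world-space point relative to the camera by undoing
    the camera's own placement: move back first, then un-rotate."""
    px, py, pz = cam_position
    x, y, z = point
    return rotate_y((x - px, y - py, z - pz), -cam_heading)
```

Real engines fold these steps into 4 × 4 matrix multiplications, which lets a whole chain of placements be collapsed into a single transform per object.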

We sometimes call the second phase the “rasterization” phase in reference to the orderly two-dimensional grid of pixels which make up the image seen on a monitor screen, which in computer-science parlance is known as a raster. The whole point of this rasterization phase, then, is to make our computer’s monitor a window into our imaginary world from the point of view of our imaginary camera. This entails converting said world’s three dimensions back into our two-dimensional raster of pixels, using the rules of perspective that have been understood by human artists since the Renaissance.

We can think of rasterizing as observing a scene through a window screen. Each square in the mesh is one pixel, which can be exactly one color. The whole process of 3D rendering ultimately comes down to figuring out what each of those colors should be.
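The heart of that conversion is the perspective divide: a point’s horizontal and vertical offsets from the camera simply shrink in proportion to its distance. A bare-bones sketch in Python (the resolution and scale factor here are arbitrary choices, not anyone’s standard):

```python
def project(point, width=640, height=480, scale=1.0):
    """Map a camera-space point (X right, Y up, Z into the screen)
    onto the pixel raster using the classic perspective divide."""
    x, y, z = point
    if z <= 0:
        return None  # behind the viewer; a real engine would clip instead
    # Similar triangles: apparent displacement shrinks with distance.
    sx = (x / z) * scale
    sy = (y / z) * scale
    # Shift from the range [-1, 1] onto the pixel grid, flipping Y
    # because raster rows are counted downward from the top.
    px = int((sx + 1) * 0.5 * width)
    py = int((1 - sy) * 0.5 * height)
    return (px, py)
```

A point dead ahead of the camera lands in the middle of the raster; everything else fans out around it, more distant points crowding closer to the center.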

The most basic of all 3D graphics are of the “wire-frame” stripe, which draw only the lines that form the edges of their objects’ surfaces. They were seen fairly frequently on microcomputers as far back as the early 1980s, the most iconic example undoubtedly being the classic 1984 space-trading game Elite.

Even in something as simple as Elite, we can begin to see how 3D graphics blur the lines between a purely presentation-level technology and a full-blown world simulation. When we have one enemy spaceship in our sights in Elite, there might be several others above, behind, or below us, which the 3D engine “knows” about but which we may not. Combined with a physics engine and some player and computer agency in the model world (taking here the form of lasers and thrusters), it provides the raw materials for a game. Small wonder that so many game developers came to see 3D graphics as such a natural fit.

But, for all that those wire frames in Elite might have had their novel charm in their day, programmers realized that the aesthetics of 3D graphics had to get better for them to become a viable proposition over the long haul. This realization touched off an algorithmic arms race that is still ongoing to this day. The obvious first step was to paint in the surfaces of each solid in single blocks of color, as the later versions of Elite that were written for 16-bit rather than 8-bit machines often did. It was an improvement in a way, but it still looked jarringly artificial, even against a spartan star field in outer space.

The next way station on the road to a semi-realistic-looking computer-generated world was light sources of varying strengths, positioned in the world with X, Y, and Z coordinates of their own, casting their illumination and shadows realistically on the objects to be found there.

A 3D scene with light sources.
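The simplest commonly used model for this is Lambertian diffuse shading, which scales a surface’s color by the cosine of the angle between its facing direction and the direction toward the light. A short sketch in Python, assuming both directions are already unit-length vectors:

```python
def shade(normal, to_light, color):
    """Lambertian diffuse shading: a surface facing a light head-on shows
    its full color, one angled away from it darkens, and one facing away
    from it entirely goes black."""
    nx, ny, nz = normal
    lx, ly, lz = to_light
    # The dot product of two unit vectors is the cosine of the angle
    # between them; clamp at zero so back-facing surfaces get no light.
    brightness = max(0.0, nx * lx + ny * ly + nz * lz)
    r, g, b = color
    return (int(r * brightness), int(g * brightness), int(b * brightness))
```

Summing this contribution over several light sources, each with its own position and strength, is what produces the layered illumination seen above.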

The final step was to add textures, small pictures that were painted onto surfaces in place of uniform blocks of color; think of the pitted paint job of a tired X-Wing fighter or the camouflage of a Sherman tank. Textures introduced an enormous degree of complication at the rasterization stage; it wasn’t easy for 3D engines to make them look believable from a multitude of different lines of sight. That said, believable lighting was almost as complicated. Textures and lighting alike were already fodder for many an academic thesis before microcomputers even existed.

A 3D scene with light sources and textures.
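Under the hood, texturing means giving each surface a pair of coordinates (conventionally u and v, each running from 0 to 1) and looking up the corresponding pixel of the texture, or “texel,” for every screen pixel the surface covers. The crudest possible lookup, nearest-neighbor sampling, can be sketched in a few lines of Python (the tiny texture here is invented purely for illustration):

```python
def sample(texture, u, v):
    """Nearest-neighbor texture lookup: map surface coordinates (u, v),
    each in the range [0, 1], onto a texel in a 2D grid of colors."""
    height, width = len(texture), len(texture[0])
    tx = min(int(u * width), width - 1)    # clamp so u == 1.0 stays in bounds
    ty = min(int(v * height), height - 1)
    return texture[ty][tx]

# A 2 x 2 checkerboard "texture" of (red, green, blue) colors.
checker = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]
```

The hard part, as noted above, is choosing which texel each screen pixel should get once perspective is involved; lookups that ignore depth are cheap but make textures appear to warp and swim, which is exactly the shortcut many early 3D games took.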

In the more results-focused milieu of commercial game development, where what was possible was determined largely by which types of microprocessors Intel and Motorola were selling the most of in any given year, programmers were forced to choose between compromised visions of the academic ideal. These broke down into two categories, neatly exemplified by the two most profitable computer games of the 1990s. Those games that followed in one or the other’s footsteps came to be known as the “Myst clones” and the “DOOM clones.” They could hardly have been more dissimilar in personality, yet they were both symbols of a burgeoning 3D revolution.

The Myst clones got their name from a game developed by Cyan Studios and published by Brøderbund in September of 1993, which went on to sell at least 6 million copies as a boxed retail product and quite likely millions more as a pack-in of one description or another. Myst and the many games that copied its approach tended to be, as even their most strident detractors had to admit, rather beautiful to look at. This was because they didn’t attempt to render their 3D imagery in real time; their rendering was instead done beforehand, often on beefy workstation-class machines, then captured as finished rasters of pixels on disk. Given that they worked with graphics that needed to be rendered only once and could be allowed to take hours to do so if necessary, the creators of games like this could pull out all the stops in terms of textures, lighting, and the sheer number and complexity of the 3D solids that made up their worlds.

These games’ disadvantage — a pretty darn massive one in the opinion of many players — was that their scope of interactive potential was as sharply limited in its way as that of all those interactive movies built around canned video clips that the industry was slowly giving up on. They could present their worlds to their players only as a collection of pre-rendered nodes to be jumped between, could do nothing on the fly. These limitations led most of their designers to build their gameplay around set-piece puzzles found in otherwise static, non-interactive environments, which most players soon started to find a bit boring. Although the genre had its contemplative pleasures and its dedicated aficionados who appreciated them, its appeal as anything other than a tech demo — the basis on which the original Myst was primarily sold — turned out to be the very definition of niche, as the publishers of Myst clones belatedly learned to their dismay. The harsh reality became undeniable once Riven, the much-anticipated, sumptuously beautiful sequel to Myst, “only” sold 1.5 million copies when it finally appeared four years after its hallowed predecessor. With the exception only of Titanic: Adventure out of Time, which owed its fluke success to a certain James Cameron movie with which it happened to share a name and a setting, no other game of this style ever cracked half a million in unit sales. The genre has been off the mainstream radar for decades now.

The DOOM clones, on the other hand, have proved a far more enduring fixture of mainstream gaming. They took their name, of course, from the landmark game of first-person carnage which the energetic young men of id Software released just a couple of months after Myst reached store shelves. John Carmack, the mastermind of the DOOM engine, managed to present a dynamic, seamless, apparently 3D world in place of the static nodes of Myst, and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal. And then, when even that wasn’t enough to get the job done, he threw out feature after feature that the academics whose papers he had pored over insisted was essential for any “real” 3D engine. His motto was, if you can’t get it done honestly, cheat, by hard-coding assumptions about the world into your algorithms and simply not letting the player — or the level designer — violate them. The end result was no Myst-like archetype of beauty in still screenshots. It pasted 2D sprites into its world whenever there wasn’t horsepower enough to do real modeling, had an understanding of light and its properties that is most kindly described as rudimentary, and couldn’t even handle sloping floors or ceilings, or walls that weren’t perfectly vertical. Heck, it didn’t even let you look up or down.

And absolutely none of that mattered. DOOM may have looked a bit crude in freeze-frame, but millions of gamers found it awe-inspiring to behold in motion. Indeed, many of them thought that Carmack’s engine, combined with John Romero and Sandy Petersen’s devious level designs, gave them the most fun they’d ever had sitting behind a computer. This was immersion of a level they’d barely imagined possible, the perfect demonstration of the real potential of 3D graphics — even if it actually was, as John Carmack would be the first to admit, only 2.5D at best. No matter; DOOM felt like real 3D, and that was enough.

A hit game will always attract imitators, and a massive hit will attract legions of them. Accordingly, the market was soon flooded with, if anything, even more DOOM clones than Myst clones, all running in similar 2.5D engines, the product of both intense reverse engineering of DOOM itself and Carmack’s habit of talking freely about how he made the magic happen to pretty much anyone who asked him, no matter how much his colleagues at id begged him not to. “Programming is not a zero-sum game,” he said. “Teaching something to a fellow programmer doesn’t take it away from you. I’m happy to share what I can because I’m in it for the love of programming.” Carmack was elevated to veritable godhood, the prophet on the 3D mountaintop passing down whatever scraps of wisdom he deigned to share with the lesser mortals below.

Seen in retrospect, the DOOM clones are, like the Myst clones, a fairly anonymous lot for the most part, doubling down on transgressive ultra-violence instead of majestic isolation, but equally failing to capture a certain ineffable something that lay beyond the nuts and bolts of their inspiration’s technology. The most important difference between the Myst and DOOM clones came down to the filthy lucre of dollar and unit sales: whereas Myst‘s coattails proved largely illusory, producing few other hits, DOOM‘s were anything but. Most people who had bought Myst, it seemed, were satisfied with that single purchase; people who bought DOOM were left wanting more first-person mayhem, even if it wasn’t quite up to the same standard.

The one DOOM clone that came closest to replacing DOOM itself in the hearts of gamers was known as Duke Nukem 3D. Perhaps that isn’t surprising, given its pedigree: it was a product of 3D Realms, the rebranded incarnation of Scott Miller’s Apogee Software. Whilst trading under the earlier name, Miller had pioneered the episodic shareware model of game distribution, a way of escaping the heavy-handed group-think of the major boxed-game publishers and their tediously high-concept interactive movies in favor of games that were exponentially cheaper to develop, but also rawer, more visceral, more in line with what the teenage and twenty-something males who still constituted the large majority of dedicated gamers were actually jonesing to play. Miller had discovered the young men of id when they were still working for a disk magazine in Shreveport, Louisiana. He had then convinced them to move to his own glossier, better-connected hometown of Dallas, Texas, and distributed their proto-DOOM shooter Wolfenstein 3D to great success. His protégés had elected to strike out on their own when the time came to release DOOM, but it’s fair to say that that game would probably never have come to exist at all if not for their shareware Svengali. And even if it had, it probably wouldn’t have made them so much money; Jay Wilbur, id’s own tireless guerilla marketer, learned most of his tricks from watching Scott Miller.

Still a man with a keen sense of what his customers really wanted, Miller re-branded Apogee as 3D Realms as a way of signifying its continuing relevance amidst the 3D revolution that took the games industry by storm after DOOM. Then he, his junior partner George Broussard, and 3D Realms’s technical mastermind Ken Silverman set about making a DOOM-like engine of their own, known as Build, which they could sell to other developers who wanted to get up and running quickly. And they used the same engine to make a game of their own, which would turn out to be the most memorable of all those built with Build.

Duke Nukem 3D‘s secret weapon was one of the few boxes in the rubric of mainstream gaming success that DOOM had failed to tick off: a memorable character to serve as both star and mascot. First conceived several years earlier for a pair of Apogee 2D platformers, Duke Nukem was Joseph Lieberman’s worst nightmare, an unrepentant gangster with equally insatiable appetites for bombs and boobies, a fellow who “thinks the Bureau of Alcohol, Tobacco, and Firearms is a convenience store,” as his advertising trumpeted. His latest game combined some of the best, tightest level design yet seen outside of DOOM with a festival of adolescent transgression, from toilet water that served as health potions to strippers who would flash their pixelated breasts at you for the price of a dollar bill. The whole thing was topped off with the truly over-the-top quips of Duke himself: “I’m gonna rip off your head and shit down your neck!”; “Your face? Your ass? What’s the difference?” It was an unbeatable combination, proof positive that Miller’s ability to read his market was undimmed. Released in January of 1996, relatively late in the day for this generation of 3D — or rather 2.5D — technology, Duke Nukem 3D became by some reports the best-selling single computer game of that entire year. It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this. And so the cycle of life continues…

In a porno shop, shooting it out with policemen who are literally pigs…

Duke Nukem 3D was a triumph of design and attitude rather than technology; in keeping with most of the DOOM clones, the Build engine’s technical innovations over its inspiration were fairly modest. John Carmack scoffed that his old friends’ creation looked like it was “held together with bubble gum.”

The game that did push the technology envelope farthest, albeit without quite managing to escape the ghetto of the DOOM clones, was also a sign in another way of how quickly DOOM was changing the industry: rather than stemming from scruffy veterans of the shareware scene like id and 3D Realms, it came from the heart of the industry’s old-money establishment — from no less respectable and well-financed an entity than George Lucas’s very own games studio.

LucasArts’s Dark Forces was a shooter set in the Star Wars universe, which disappointed everyone right out of the gate with the news that it was not going to let you fight with a light saber. The developers had taken a hard look at it, they said, but concluded in the end that it just wasn’t possible to pull off satisfactorily within the hardware specifications they had to meet. This failing was especially ironic in light of the fact that they had chosen to name their new 2.5D engine “Jedi.” But they partially atoned for it by making the Jedi engine capable of hosting unprecedentedly enormous levels — not just horizontally so, but vertically as well. Dark Forces was full of yawning drop-offs and cavernous open spaces, the likes of which you never saw in DOOM — or Duke Nukem 3D, for that matter, despite its release date of almost a year after Dark Forces. Even more importantly, Dark Forces felt like Star Wars, right from the moment that John Williams’s stirring theme played over stage-setting text which scrolled away into the frame rather than across it. Although they weren’t allowed to make any of the movies’ characters their game’s star, LucasArts created a serviceable if slightly generic stand-in named Kyle Katarn, then sent him off on vertigo-inducing chases through huge levels stuffed to the gills with storm troopers in urgent need of remedial gunnery training, just like in the movies. Although Dark Forces toned down the violence that so many other DOOM clones were making such a selling point out of — there was no blood whatsoever on display here, just as there had not been in the movies — it compensated by giving gamers the chance to live out some of their most treasured childhood media memories, at a time when there were no new non-interactive Star Wars experiences to be had.

Unfortunately, LucasArts’s design instincts weren’t quite on a par with their presentation and technology. Dark Forces‘s levels were horribly confusing, providing little guidance about what to do or where to go in spaces whose sheer three-dimensional size and scope made the two-dimensional auto-map all but useless. Almost everyone who goes back to play the game today tends to agree that it just isn’t as much fun as it ought to be. At the time, though, the Star Wars connection and its technical innovations were enough to make Dark Forces a hit almost the equal of DOOM and Duke Nukem 3D. Even John Carmack made a point of praising LucasArts for what they had managed to pull off on hardware not much better than that demanded by DOOM.

Yet everyone seemed to be waiting on Carmack himself, the industry’s anointed Master of 3D Algorithms, to initiate the real technological paradigm shift. It was obvious what that must entail: an actual, totally non-fake rendered-on-the-fly first-person 3D engine, without all of the compromises that had marked DOOM and its imitators. Such engines weren’t entirely unheard of; the Boston studio Looking Glass Technologies had been working with them for five years, employing them in such innovative, immersive games as Ultima Underworld and System Shock. But those games were qualitatively different from DOOM and its clones: slower, more complex, more cerebral. The mainstream wanted a game that played just as quickly and violently and viscerally as DOOM, but that did it in uncompromising real 3D. With computers getting faster every year and with a genius like John Carmack to hand, it ought to be possible.

And so Carmack duly went to work on just such an engine, for a game that was to be called Quake. His ever-excitable level designer John Romero, who had the looks and personality to be the rock star gaming had been craving for years, was all in with bells on. “The next game is going to blow DOOM all to hell,” he told his legions of adoring fans. “DOOM totally sucks in comparison to our next game! Quake is going to be a bigger step over DOOM than DOOM was over Wolf 3D.” Drunk on success and adulation, he said that Quake would be more than just a game: “It will be a movement.” (Whatever that meant!) The drumbeat of excitement building outside of id almost seemed to justify his hyperbole; from all the way across the Atlantic, the British magazine PC Zone declared that the upcoming Quake would be “the most important PC game ever made.” The soundtrack alone was to be a significant milestone in the incorporation of gaming into mainstream pop culture, being the work of Trent Reznor and his enormously popular industrial-rock band Nine Inch Nails. Such a collaboration would have been unthinkable just a few years earlier.

While Romero was enjoying life as gaming’s own preeminent rock star and waiting for Carmack to get far enough along on the Quake engine to give him something to do, Carmack was living like a monk, working from 4 PM to 4 AM every day. In another sign of just how quickly id had moved up in the world, he had found himself an unexpectedly well-credentialed programming partner. Michael Abrash was one of the establishment’s star programmers, who had written a ton of magazine articles and two highly regarded technical tomes on assembly-language and graphics programming and was now a part of Microsoft’s Windows NT team. When Carmack, who had cut his teeth on Abrash’s writings, invited him out of the blue to come to Dallas and do Quake with him, Bill Gates himself tried to dissuade his employee. “You might not like it down there,” he warned. Abrash was, after all, pushing 40, a staid sort with an almost academic demeanor, while id was a nest of hyperactive arrested adolescence on a permanent sugar high. But he went anyway, because he was pretty sure Carmack was a genius, and because Carmack seemed to Abrash a bit lonely, working all night every night with only his computer for company. Abrash thought he saw in Quake a first glimmer of a new form of virtual existence that companies like Meta are still chasing eagerly today: “a pretty complicated, online, networked universe,” all in glorious embodied 3D. “We do Quake, other companies do other games, people start building worlds with our format and engine and tools, and these worlds can be glommed together via doorways from one to another. To me this sounds like a recipe for the first real cyberspace, which I believe will happen the way a real space station or habitat probably would — by accretion.”

He might not have come down had he known precisely what he was getting into; he would later compare making Quake to “being strapped onto a rocket during takeoff in the middle of a hurricane.” The project proved a tumultuous, exhausting struggle that very nearly broke id as a cohesive company, even as the money from DOOM continued to roll in. (id’s annual revenues reached $15.6 million in 1995, a very impressive figure for what was still a relatively tiny company, with a staff numbering only a few dozen.)

Romero envisioned a game that would be as innovative in its gameplay as in its technology, built largely around sword-fighting and other forms of hand-to-hand combat rather than gunplay — the same style of combat that LucasArts had decided was too impractical for Dark Forces. Some of his early descriptions make Quake sound more like a full-fledged CRPG in the offing than another straightforward action game. But it just wouldn’t come together, because, according to some of Romero’s colleagues, he failed to communicate his expectations to them, leaving them to suspect that even he wasn’t quite sure what he was trying to make.

Carmack finally stepped in and ordered his design team to make Quake essentially a more graphically impressive DOOM. Romero accepted the decision outwardly, but seethed inwardly at this breach of longstanding id etiquette; Carmack had always made the engines, then given Romero free rein to turn them into games. Romero largely checked out, opening a door that ambitious newcomers like American McGee and Tim Willits, who had come up through the thriving DOOM modding community, didn’t hesitate to push through. The offices of id had always been as hyper-competitive as a DOOM deathmatch, but now the atmosphere was becoming a toxic stew of buried resentments.

In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling. Almost every item not directly related to the completion of Quake was gone. The only privacy to be found was between the padded earpieces of headphones.”

Wired magazine’s August 1996 cover, showing John Carmack flanked by John Romero and Adrian Carmack, marked the end of an era. By the time it appeared on newsstands, Romero had already been fired.

Needless to say, it didn’t have the effect Carmack had hoped for. In his book-length history of id’s early life and times, journalist David Kushner paints a jittery, unnerving picture of the final months of Quake‘s development: they “became a blur of silent and intense all-nighters, punctuated by the occasional crash of a keyboard against a wall. The construction crew had turned the office into a heap. The guys were taking their frustrations out by hurling computer parts into the drywall like knives.” Michael Abrash is more succinct: “A month before shipping, we were sick to death of working on Quake.” And level designer Sandy Petersen, the old man of the group, who did his best to keep his head down and stay out of the intra-office cold war, is even more so: “[Quake] was not fun to do.”

Quake was finally finished in June of 1996. It would prove a transitional game in more ways than one, caught between where games had recently been and where they were going. Still staying true to that odd spirit of hacker idealism that coexisted with his lust for ever faster Ferraris, Carmack insisted that Quake be made available as shareware, so that people could try it out before plunking down its full price. The game accordingly got a confusing, staggered release, much to the chagrin of its official publisher GT Interactive. To kick things off, the first eight levels went up online. Shortly after, there appeared in stores a $10 CD of the full game that had to be unlocked by paying id an additional $50 in order to play beyond the eighth level. Only after that, in August of 1996, did the game appear in a conventional retail edition.

Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches. Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.

So, the era of shareware as an underground pipeline of cutting-edge games came to an end with Quake. From now on, id would concentrate on boxed games selling for full price, as would all of their fellow survivors from that wild and woolly time. Gaming’s underground had become its establishment.

But its distribution model wasn’t the only sense in which Quake was as much a throwback as a step forward. It also held fast to Carmack’s indifference to the fictional context of id’s games, as illustrated by his famous claim that the story behind a game was no more important than the story behind a porn movie. It would be blatantly incorrect to claim that the DOOM clones which flooded the market between 1994 and 1996 represented some great explosion of the potential of interactive narrative, but they had begun to show some interest, if not precisely in the elaborate set-piece storytelling of adventure games, then at least in the appeal of setting and texture. Dark Forces had been a pioneer in this respect, what with its between-levels cut scenes, its relatively fleshed-out main character, and most of all its environments that really did look and feel like the Star Wars films, from their brutalist architecture to John Williams’s unmistakable score. Even Duke Nukem 3D had the character of Duke, plus a distinctively seedy, neon-soaked post-apocalyptic Los Angeles for him to run around in. No one would accuse it of being an overly mature aesthetic vision, but it certainly was a unified one.

Quake, on the other hand, displayed all the signs of its fractious process of creation, of half a dozen wayward designers all pulling in different directions. From a central hub, you took “slipgates” into alternate dimensions that contained a little bit of everything on the designers’ not-overly-discriminating pop-culture radar, from zombie flicks to Dungeons & Dragons, from Jaws to H.P. Lovecraft, from The Terminator to heavy-metal music, and the game thus wound up not making much of a distinct impression at all.

Most creative works are stamped with the mood of the people who created them, no matter how hard the project managers try to separate the art from the artists. With its color palette dominated by shocks of orange and red, DOOM had almost literally burst off the monitor screen with the edgy joie de vivre of a group of young men whom nobody had expected to amount to much of anything, who suddenly found themselves on the verge of remaking the business of games in their own unkempt image. Quake felt tired by contrast. Even its attempts to blow past the barriers of good taste seemed more obligatory than inspired; the Satanic symbolism, elaborate torture devices, severed heads, and other forms of gore were outdone by other games that were already pushing the envelope even further. This game felt almost somber — not an emotion anyone had ever before associated with id. Its levels were slower and emptier than those of DOOM, with a color palette full of mournful browns and other earth tones. Even the much-vaunted soundtrack wound up rather underwhelming. It was bereft of the melodic hooks that had made Nine Inch Nails’s previous output more palatable for radio listeners than that of most other “extreme” bands; it was more an exercise in sound design than music composition. One couldn’t help but suspect that Trent Reznor had held back all of his good material for his band’s next real record.

At its worst, Quake felt like a tech demo waiting for someone to turn it into an actual game, proving that John Carmack needed John Romero as badly as Romero needed him. But that once-fruitful relationship was never to be rehabilitated: Carmack fired Romero within days of finishing Quake. The two would never work together again.

It was truly the end of an era at id. Sandy Petersen was soon let go as well, Michael Abrash went back to the comfortable bosom of Microsoft, and Jay Wilbur quit for the best of all possible reasons: because his son asked him, “How come all the other daddies go to the baseball games and you never do?” All of them left as exhausted as Quake looks and feels.

Of course, there was nary a hint of Quake‘s infelicities to be found in the press coverage that greeted its release. Even more so than most media industries, the games industry has always run on enthusiasm, and it had no desire at this particular juncture to eat its own by pointing out the flaws in the most important PC game ever made. The coverage in the magazines was marked by a cloying fan-boy fawning that was becoming ever more sadly prominent in gamer culture. “We are not even worthy to lick your toenails free of grit and fluffy sock detritus,” PC Zone wrote in a public letter to id. “We genuflect deeply and offer our bare chests for you to stab with a pair of scissors.” (Eww! A sense of proportion is as badly lacking as a sense of self-respect…) Even the usually sober-minded (by gaming-journalism standards) Computer Gaming World got a little bit creepy: “Describing Quake is like talking about sex. It must be experienced to be fully appreciated.”

Still, I would be a poor historian indeed if I called all the hyperbole of 1996 entirely unjustified. The fact is that the passage of time has tended to emphasize Quake‘s weaknesses, which are mostly in the realm of design and aesthetics, whilst obscuring its contemporary strengths, which were in the realm of technology. Although not quite the first game to graft a true 3D engine onto ultra-fast-action gameplay — Interplay’s Descent beat it to the market by more than a year — it certainly did so more flexibly and credibly than anything else to date, even if Carmack still wasn’t above cheating a bit when push came to shove. (By no means is the Quake engine entirely free of tricksy 2D sprites in places where proper 3D models are just too expensive to render.)

Nevertheless, it’s difficult to fully convey today just how revolutionary the granular details of Quake seemed in 1996: the way you could look up and down and all around you with complete freedom; the way its physics engine made guns kick so that you could almost feel it in your mouse hand; the way you could dive into water and experience the visceral sensation of actually swimming; the way the wood paneling of its walls glinted realistically under the overhead lighting. Such things are commonplace today, but Quake paved the way. Most of the complaints I’ve raised about it could be mitigated by the simple expedient of not even bothering with the lackluster single-player campaign, of just playing it with your mates in deathmatch.

But even if you preferred to play alone, Quake was a sign of better things to come. “It goes beyond the game and more into the engine and the possibilities,” says Rob Smith, who watched the Quake mania come and go as the editor of PC Gamer magazine. “Quake presented options to countless designers. The game itself doesn’t make many ‘all-time’ lists, but its impact [was] as a game changer for 3D gaming, [an] engine that allowed other game makers to express themselves.” For with the industry’s Master of 3D Algorithms John Carmack having shown what was possible and talking as freely as ever about how he had achieved it, with Michael Abrash soon to write an entire book about how he and Carmack had made the magic happen, more games of this type, ready and able to harness the technology of true 3D to more exciting designs, couldn’t be far behind. “We’ve pretty much decided that our niche is in first-person futuristic action games,” said John Carmack. “We stumble when we get away from the techno stuff.” The industry was settling into a model that would remain in place for years to come: id would show what was possible with the technology of 3D graphics, then leave it to other developers to bend it in more interesting directions.

Soon enough, then, titles like Jedi Knight and Half-Life would push the genre once known as DOOM clones, now trading under the more sustainable name of the first-person shooter, in more sophisticated directions in terms of storytelling and atmosphere, without losing the essence of what made their progenitors so much fun. They will doubtless feature in future articles.

Next time, however, I want to continue to focus on the technology, as we turn to another way in which Quake was a rough draft for a better gaming future: months after its initial release, it became one of the first games to display the potential of hardware acceleration for 3D graphics, marking the beginning of a whole new segment of the microcomputer industry, one worth many billions of dollars today.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books Rocket Jump: Quake and the Golden Age of First-Person Shooters by David L. Craddock, The Graphics Programming Black Book by Michael Abrash, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Computer Graphics from Scratch: A Programmer’s Introduction by Gabriel Gambetta. PC Zone of May 1996; Computer Gaming World of July 1996 and October 1996; Wired of August 1996 and January 2010. Online sources include Michael Abrash’s “Ramblings in Realtime” for Blue’s News.

Quake is available as a digital purchase at GOG.com, as is Star Wars: Dark Forces. Duke Nukem 3D can be found on Steam.)

 
 


The Shareware Scene, Part 5: Narratives of DOOM

Let me begin today by restating the obvious: DOOM was very, very popular, probably the most popular computer game to date.

That “probably” has to stand there because DOOM‘s unusual distribution model makes quantifying its popularity frustratingly difficult. It’s been estimated that id sold 2 to 3 million copies of the shareware episodes of the original DOOM. The boxed-retail-only DOOM II may have sold a similar quantity; it reportedly became the third best-selling boxed computer game of the 1990s. But these numbers, impressive as they are in their own right, leave out not only the ever-present reality of piracy but also the free episode of DOOM, which was packaged and distributed in such an unprecedented variety of ways all over the world. Players of it likely numbered well into the eight digits.

Yet if the precise numbers associated with the game’s success are slippery, the cultural impact of the game is easier to get a grip on. The release of DOOM marks the biggest single sea change in the history of computer gaming. It didn’t change gaming instantly, mind you — a contemporaneous observer could be forgiven for assuming it was still largely business as usual a year or even two years after DOOM‘s release — but it did change it forever.

I should admit here and now that I’m not entirely comfortable with the changes DOOM brought to gaming. In fact, for a long time, when I was asked when I thought I might bring this historical project to a conclusion, I pointed to the arrival of DOOM as perhaps the most logical place to hang it up. I trust that most of you will be pleased to hear that I no longer feel so inclined, but I do recognize that my feelings about DOOM are, at best, conflicted. I can’t help but see it as at least partially responsible for a certain coarsening in the culture of gaming that followed it. I can muster respect for the id boys’ accomplishment, but no love. Hopefully the former will be enough to give the game its due.

As the title of this article suggests, there are many possible narratives to spin about DOOM‘s impact. Sometimes the threads are contradictory — sometimes even self-contradictory. Nevertheless, let’s take this opportunity to follow a few of them to wherever they lead us as we wrap up this series on the shareware movement and the monster it spawned.


3D 4EVA!

The least controversial, most incontrovertible aspect of DOOM‘s impact is its influence on the technology of games. It was nothing less than the coming-out party for 3D graphics as a near-universal tool — this despite the fact that 3D graphics had been around in some genres, most notably vehicular simulations, almost as long as microcomputer games themselves had been around, and despite the fact that DOOM itself was far from a complete implementation of a 3D environment. (John Carmack wouldn’t get all the way to that goal until 1996’s Quake, the id boys’ anointed successor to DOOM.) As we’ve seen already, Blue Sky Productions’s Ultima Underworld actually offered the complete 3D implementation which DOOM lacked twenty months before the latter’s arrival.

But as I also noted earlier, Ultima Underworld was complex, a little esoteric, hard to come to terms with at first sight. DOOM, on the other hand, took what the id boys had started with Wolfenstein 3D, added just enough additional complexity to make it into a more satisfying game over the long haul, topped it off with superb level design that took full advantage of all the new affordances, and rammed it down the throat of the gaming mainstream with all the force of one of its coveted rocket launchers. The industry never looked back. By the end of the decade, it would be hard to find a big boxed game that didn’t use 3D graphics.

Many if not all of these applications of 3D were more than warranted: the simple fact is that 3D lets you do things in games that aren’t possible any other way. Other forms of graphics consist at bottom of fixed, discrete patterns of colored pixels. These patterns can be moved about the screen — think of the sprites in a classic 2D videogame, such as Nintendo’s Super Mario Bros. or id’s Commander Keen — but their forms cannot be altered with any great degree of flexibility. And this in turn limits the degree to which the world of a game can become an embodied, living place of emergent interactions; it does no good to simulate something in the world model if you can’t represent it on the player’s screen.

3D graphics, on the other hand, are stored not as pixels but as a sort of architectural plan of an imaginary 3D space, expressed in the language of mathematics. The computer then extrapolates from said plan to render the individual pixels on the fly in response to the player’s actions. In other words, the world and the representation of the world are stored as one in the computer’s memory. This means that things can happen there which no artist ever anticipated. 3D allowed game makers to move beyond hand-crafted fictions and set-piece puzzles to begin building virtual realities in earnest. Not for nothing did many people refer to DOOM-like games in the time before the term “first-person shooter” was invented as “virtual-reality games.”
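The contrast between the two modes of representation can be sketched in a few lines of code. The snippet below is purely illustrative Python (no real engine of the era worked this simply, and every name in it is invented for the example): it stores a wall as 3D coordinates rather than pixels, then derives the pixels on the fly with a basic perspective projection, so that moving the viewpoint changes the picture without any artist redrawing anything.

```python
# A toy illustration of the idea above: the program stores geometry, not
# pixels, and extrapolates the pixels from it in response to the viewpoint.

def project(vertex, camera_z, focal=256, center=(160, 100)):
    """Perspective-project a 3D point onto a 2D screen position.

    Nearer objects get a larger scale factor, which is all
    "rendering on the fly" means at its most basic.
    """
    x, y, z = vertex
    depth = z - camera_z           # distance in front of the camera
    scale = focal / depth          # nearer means larger on screen
    return (round(center[0] + x * scale),
            round(center[1] - y * scale))

# One quad of a wall, stored once as coordinates (the "architectural plan")...
wall = [(-10, 10, 50), (10, 10, 50), (10, -10, 50), (-10, -10, 50)]

# ...can be drawn from any viewpoint, with no new artwork required.
far_view  = [project(v, camera_z=0)  for v in wall]   # seen from afar
near_view = [project(v, camera_z=20) for v in wall]   # same wall, closer up

print(far_view)
print(near_view)
```

The same four numbers in `wall` yield an endlessly varying stream of screen images as the camera moves, which is exactly why no fixed sprite sheet can anticipate them all.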

Ironically, others showed more interest than the id boys themselves in probing the frontiers of formal possibility thus opened. While id continued to focus purely on ballistics and virtual violence in their extended series of Quake games after making DOOM, Looking Glass Technologies — the studio which had previously been known as Blue Sky Productions — worked many of the innovations of Ultima Underworld and DOOM alike into more complex virtual worlds in games like System Shock and Thief. Nevertheless, DOOM was the proof of concept, the game which demonstrated indubitably to everyone that 3D graphics could provide amazing experiences which weren’t possible any other way.

From the standpoint of the people making the games, 3D graphics had another massive advantage: they were also cheaper than the alternative. When DOOM first appeared in December of 1993, the industry was facing a budgetary catch-22 with no obvious solution. Hiring armies of artists to hand-paint every screen in a game was expensive; renting or building a sound stage, then hiring directors and camera people and dozens of actors to provide hours of full-motion-video footage was even more so. Players expected ever bigger, richer, longer games, which was intensely problematic when every single element in their worlds had to be drawn or filmed by hand. Sales were increasing at a steady clip by 1993, but they weren’t increasing quickly enough to offset the spiraling costs of production. Even major publishers like Sierra were beginning to post ugly losses on their bottom lines despite their increasing gross revenues.

3D graphics had the potential to fix all that, practically at a stroke. A 3D world is, almost by definition, a collection of interchangeable parts. Consider a simple item of furniture, like, say, a desk. In a 2D world, every desk must be laboriously hand-drawn by an artist in the same way that a traditional carpenter planes and joins the wood for such a thing in a workshop. But in a 3D world, the data constituting the basic form of “desk” can be inserted in a matter of seconds; desks can now make their way into games with the same alacrity with which they roll off of an IKEA production line. But you say that you don’t want every desk in your world to look exactly the same? Very well; it takes just a few keystrokes to change the color or wood grain or even the size of your desk, or to add or take away a drawer. We can arrive at endless individual implementations of “desk” from our Platonic ideal with surprising speed. Small wonder that, when the established industry was done marveling at DOOM‘s achievements in terms of gameplay, the thing they kept coming back to over and over was its astronomical profit margins. 3D graphics provided a way to make games make money again.
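That "Platonic desk" idea can itself be sketched in a few lines of illustrative Python (all names here are invented for the example): the ideal is described once as a handful of parameters, and each concrete variant costs a keystroke apiece rather than an artist's hours.

```python
# An illustrative sketch of the parametric-content idea described above:
# one description of "desk", many concrete desks.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Desk:
    width: float = 1.5     # meters
    height: float = 0.75
    wood: str = "oak"
    drawers: int = 2

# The "Platonic ideal" is defined exactly once...
prototype = Desk()

# ...and variants roll off the line with a changed argument or two,
# no hand-drawing required.
office_desk = replace(prototype, wood="walnut", drawers=4)
child_desk  = replace(prototype, width=1.0, height=0.6)

print(office_desk)
print(child_desk)
```

In a real engine the parameters would feed a mesh generator rather than a dataclass, but the economics are the same: the marginal cost of the hundredth desk is close to zero.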

So, 3D offered worlds with vastly more emergent potential, made at a greatly reduced cost. There had to be a catch, right?

Alas, there was indeed. In many contexts, 3D graphics were right on the edge of what a typical computer could do at all in the mid-1990s, much less do with any sort of aesthetic appeal. Gamers would have to accept jagged edges, tearing textures, and a generalized visual crudity in 3D games for quite some time to come. A freeze-frame visual comparison with the games the industry had been making immediately before the 3D revolution did the new ones no favors: the games coming out of studios like Sierra and LucasArts had become genuinely beautiful by the early 1990s, thanks to those companies’ rooms full of dedicated pixel artists. It would take a considerable amount of time before 3D games would look anywhere near this nice. One can certainly argue that 3D was in some fairly fundamental sense necessary for the continuing evolution of game design, that this period of ugliness was one that the industry simply needed to plow through in order to emerge on the other side with a whole new universe of visual and emergent possibility to hand. Still, people mired in the middle of it could be forgiven for asking whether, from the evidence of screenshots alone, gaming technology wasn’t regressing rather than progressing.

But be that as it may, the 3D revolution ushered in by DOOM was here to stay. People would just have to get used to the visual crudity for the time being, and trust that eventually things would start to look better again.


Playing to the Base

There’s an eternal question in political and commercial marketing alike: do you play to the base, or do you try to reach out to a broader spectrum of people? The former may be safer, but raises the question of how many more followers you can collect from the same narrow slice of the population; the latter tempts you with the prospect of countless virgin souls waiting to embrace you, but is far riskier, with immense potential to backfire spectacularly if you don’t get the message and tone just right. This was the dichotomy confronting the boxed-games industry in the early 1990s.

By 1993, the conventional wisdom inside the industry had settled on the belief that outreach was the way forward. This dream of reaching a broader swath of people, of becoming as commonplace in living rooms as prime-time dramas and sitcoms, was inextricably bound up with the technology of CD-ROM, what with its potential to put footage of real human actors into games alongside spoken dialog and orchestral soundtracks. “What we think of today as a computer or a videogame system,” wrote Ken Williams of Sierra that year, “will someday assume a much broader role in our homes. I foresee a day when there is one home-entertainment device which combines the functions of a CD-audio player, VCR, videogame system, and computer.”

And then along came DOOM with its stereotypically adolescent-male orientation, along with sales numbers that threatened to turn the conventional wisdom about how well the industry could continue to feed off the same old demographic on its head. About six months after DOOM‘s release, when the powers that were were just beginning to grapple with its success and what it meant to each and every one of them, Alexander Antoniades, a founding editor of the new Game Developer magazine, more fully articulated the dream of outreach, as well as some of the doubts that were already beginning to plague it.

The potential of CD-ROM is tremendous because it is viewed as a superset not [a] subset of the existing computer-games industry. Everyone’s hoping that non-technical people who would never buy an Ultima, flight simulator, or DOOM will be willing to buy a CD-ROM game designed to appeal to a wider audience — changing the computer into [an] interactive VCR. If these technical neophytes’ first experience is a bad one, for $60 a disc, they’re not going to continue making the same mistake.

It will be this next year, as these consumers make their first CD-ROM purchases, that will determine the shape of the industry. If CD-ROM games are able to vary more in subject matter than traditional computer games, retain their platform independence, and capture new demographics, they will attain the status of a new platform [in themselves]. If not, they will just be another means to get product to market and will be just another label on the side of a box.

The next couple of years did indeed become a de-facto contest between these two ideas of gaming’s future. At first, the outreach camp could point to some notable successes on a scale similar to that of DOOM: The 7th Guest sold over 2 million copies, Myst sold an extraordinary 6 million or more. Yet the reality slowly dawned that most of those outside the traditional gaming demographic who purchased those games regarded them as little more than curiosities; most evidence would seem to indicate that they were never seriously played to a degree commensurate with their sales. Meanwhile the many similar titles which the industry rushed out in the wake of these success stories almost invariably became commercial disappointments.

The problems inherent in these multimedia-heavy “interactive movies” weren’t hard to see even at the time. In the same piece from which I quoted above, Alexander Antoniades noted that too many CD-ROM productions were “the equivalent of Pong games with captured video images of professional tennis players and CD-quality sounds of bouncing balls.” For various reasons — the limitations inherent in mixing and matching canned video clips; the core limitations of the software and hardware technology; perhaps simply a failure of imagination — the makers of too many of these extravaganzas never devised new modes of gameplay to complement their new modes of presentation. Instead they seemed to believe that the latter alone ought to be enough. Too often, these games fell back on rote set-piece puzzle-solving — an inherently niche activity even if done more creatively than we often saw in these games — for lack of any better ideas for making the “interactive” in interactive movies a reality. The proverbial everyday person firing up the computer-cum-stereo-cum-VCR at the end of a long workday wasn’t going to do so in order to watch a badly acted movie gated with frustrating logic puzzles.

While the multimedia came first with these productions, games of the DOOM school flipped that script. As the years went on and these games started to ship on the now-ubiquitous medium of CD-ROM, they too picked up cut scenes and spoken dialog, but they never suffered the identity crisis of their rivals; they knew that they were games first and foremost, and knew exactly what forms their interactivity should take. And most importantly from the point of view of the industry, these games sold. Post-1996 or so, high-concept interactive movies were out, as was most serious talk of outreach to new demographics. Visceral 3D action games were in, along with a doubling-down on the base.

To blame the industry’s retrenchment — its return to the demographically tried-and-true — entirely on DOOM is a stretch. Yet DOOM was a hugely important factor, standing as it did as a living proof of just how well the traditional core values of gaming could pay. The popularity of DOOM, combined with the exercise in diminishing commercial returns that interactive movies became, did much to push the industry down the path of retrenchment.

The minor tragedy in all this was not so much the end of interactive movies, given what intensely problematic endeavors they so clearly were, but rather that the latest games’ vision proved to be so circumscribed in terms of fiction, theme, and mechanics alike. By late in the decade, they had brought the boxed industry to a place of dismaying homogeneity; the values of the id boys had become the values of computer gaming writ large. Game fictions almost universally drew from the same shallow well of sci-fi action flicks and Dungeons & Dragons, with perhaps an occasional detour into military simulation. A shocking proportion of the new games being released fell into one of just two narrow gameplay genres: the first-person shooter and the real-time-strategy game.

These fictional and ludic genres are not, I hasten to note, illegitimate in themselves; I’ve enjoyed plenty of games in all of them. But one craves a little diversity, a more vibrant set of possibilities to choose from when wandering into one’s local software store. It would take a new outsider movement coupled with the rise of convenient digital distribution in the new millennium to finally make good on that early-1990s dream of making games for everyone. (How fitting that shaking loose the stranglehold of DOOM‘s progeny would require the exploitation of another alternative form of distribution, just as the id boys exploited the shareware model…)


The Murder Simulator

DOOM was mentioned occasionally in a vaguely disapproving way by mainstream media outlets immediately after its release, but largely escaped the ire of the politicians who were going after games like Night Trap and Mortal Kombat at the time; this was probably because its status as a computer rather than a console game led to its being played in bedrooms rather than living rooms, free from the prying eyes of concerned adults. It didn’t become the subject of a full-blown moral panic until weirdly late in its history.

On April 20, 1999, Eric Harris and Dylan Klebold, a pair of students at Columbine High School in the Colorado town of the same name, walked into their school armed to the teeth with knives, explosives, and firearms. They proceeded to kill 13 students and teachers and to injure 24 more before turning their guns on themselves. The day after the massacre, an Internet gaming news site called Blue’s News posted a message that “several readers have written in reporting having seen televised news reports showing the DOOM logo on something visible through clear bags containing materials said to be related to the suspected shooters. There is no word yet of what connection anyone is drawing between these materials and this case.” The word would come soon enough.

It turned out that Harris and Klebold had been great devotees of the game, not only as players but as creators of their own levels. “It’s going to be just like DOOM,” wrote Harris in his diary just before the massacre. “I must not be sidetracked by my feelings of sympathy. I will force myself to believe that everyone is just a monster from DOOM.” He chose his prize shotgun because it looked like one found in the game. On the surveillance tapes that recorded the horror in real time, the weapons-festooned boys pranced and preened as if they were consciously imitating the game they loved so much. Weapons experts noted that they seemed to have adopted their approach to shooting from what worked in DOOM. (In this case, of course, that was a wonderful thing, in that it kept them from killing anywhere close to the number of people they might otherwise have with the armaments at their disposal.)

There followed a storm of controversy over videogame content, with DOOM and the genre it had spawned squarely at its center. Journalists turned their attention to the FPS subculture for the first time, and discovered that more recent games like Duke Nukem 3D — the Columbine shooters’ other favorite game, a creation of Scott Miller’s old Apogee Software, now trading under the name of 3D Realms — made DOOM‘s blood and gore look downright tame. Senator Joseph Lieberman, a longstanding critic of videogames, beat the drum for legislation, and the name of DOOM even crossed the lips of President Bill Clinton. “My hope,” he said, “[is] to persuade the nation’s top cultural producers to call a cease-fire in the virtual arms race, to stop the release of ultra-violent videogames such as DOOM. Several of the school gunmen murderously mimicked [it] down to the choice of weapons and apparel.”

When one digs into the subject, one can’t help but note how the early life stories of John Carmack and John Romero bear some eerie similarities with those of Eric Harris and Dylan Klebold. The two Johns as well were angry kids who found it hard to fit in with their peers, who engaged in petty crime and found solace in action movies, heavy-metal music, and computer games. Indeed, a big part of the appeal of DOOM for its most committed fans was the sense that it had been made by people just like them, people who were coming from the same place. What caused Harris and Klebold, alone among the millions like them, to exorcise their anger and aggression in such a horrifying way? It’s a question that we can’t begin to answer. We can only say that, unfair though it may be, perceptions of DOOM outside the insular subculture of FPS fandom must always bear the taint of its connection with a mass murder.

And yet the public controversy over DOOM and its progeny resulted in little concrete change in the end. Lieberman’s proposed legislation died on the vine after the industry fecklessly promised to do a better job with content warnings, and the newspaper pundits moved on to other outrages. Forget talk of free speech; there was too much money in these types of games for them to go away. Just ten months after Columbine, Activision released Soldier of Fortune, which made a selling point of dismembered bodies and screams of pain so realistic that one reviewer claimed they left his dog a nervous wreck cowering in a corner. After the requisite wave of condemnation, the mainstream media forgot about it too.

Violence in games didn’t begin with DOOM or even Wolfenstein 3D, but it was certainly amplified and glorified by those games and the subculture they wrought. While a player may very well run up a huge body count in, say, a classic arcade game or an old-school CRPG, the violence there is so abstract as to be little more than a game mechanic. But in DOOM — and even more so in the games that followed it — experiential violence is a core part of the appeal. One revels in killing not just because of the new high score or character experience level one gets out of it, but for the thrill of killing itself, as depicted in such a visceral, embodied way. This does strike me as a fundamental qualitative shift from most of the games that came before.

Yet it’s very difficult to have a reasonable discussion on said violence’s implications, simply because opinions have become so hardened on the subject. To express concern on any level is to invite association with the likes of Joe Lieberman, a thoroughly conventional thinker with a knack for embracing the most flawed of all conventional wisdoms on every single issue, who apparently was never fortunate enough to have a social-science professor drill the fact that correlation isn’t causation into his head.

Make no mistake: the gamers who scoff at the politicians’ hand-wringing have a point. Harris and Klebold probably were drawn to games like DOOM and Duke Nukem 3D because they already had violent fantasies, rather than having said fantasies inculcated by the games they happened to play. In a best-case scenario, we can even imagine other potential mass murderers channeling their aggression into a game rather than taking it out on real people, in much the same way that easy access to pornography may be a cause of the dramatic decline in incidents of rape and sexual violence in most Western countries since the rise of the World Wide Web.

That said, I for one am also willing to entertain the notion that spending hours every day killing things in the most brutal, visceral manner imaginable inside an embodied virtual space may have some negative effects on some personalities. Something John Carmack said about the subject in a fairly recent interview strikes me as alarmingly fallacious:

In later games and later times, when games [came complete with] moral ambiguity or actual negativity about what you’re doing, I always felt good about the decision that in DOOM, you’re fighting demons. There’s no gray area here. It is black and white. You’re the good guys, they’re the bad guys, and everything that you’re doing to them is fully deserved.

In reality, though, the danger which games like DOOM may present, especially in the polarized societies many of us live in in our current troubled times, is not that they ask us to revel in our moral ambiguity, much less our pure evil. It’s rather the way they’re able to convince us that the Others whom we’re killing “fully deserve” the violence we visit upon them because “they’re the bad guys.” (Recall those chilling words from Eric Harris’s diary, about convincing himself that his teachers and classmates are really just monsters…) This tendency is arguably less insidious when the bad guys in question are ridiculously over-the-top demons from Hell than when they’re soldiers who just happen to be wearing a different uniform, one which they may quite possibly have had no other choice but to don. Nevertheless, DOOM started something which games like the interminable Call of Duty franchise were only too happy to run with.

I personally would like to see less violence rather than more in games, all things being equal, and would like to see more games about building things up rather than tearing them down, fun though the latter can be on occasion. It strikes me that the disturbing association of some strands of gamer culture with some of the more hateful political movements of our times may not be entirely accidental, and that some of the root causes may stretch all the way back to DOOM — which is not to say that it’s wrong for any given individual to play DOOM or even Call of Duty. It’s only to say that the likes of GamerGate may be yet another weirdly attenuated part of DOOM‘s endlessly multi-faceted legacy.


Creative Destruction?

In other ways, though, the DOOM community actually was — and is — a community of creation rather than destruction. (I did say these narratives of DOOM wouldn’t be cut-and-dried, didn’t I?)

John Carmack, by his own account alone among the id boys, was inspired rather than dismayed by the modding scene that sprang up around Wolfenstein 3D — so much so that, rather than taking steps to make such things more difficult in DOOM, he did just the opposite: he separated the level data from the game engine much more completely than had been the case with Wolfenstein 3D, thus making it possible to distribute new DOOM levels completely legally, and released documentation of the WAD format in which the levels were stored on the same day that id released the game itself.
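For the curious, the container format Carmack documented is simple enough to sketch in a few lines. The following Python snippet builds and then reads back a tiny PWAD; the lump names and payloads here are invented for the demo, while the field layout follows the widely circulated WAD specs: a twelve-byte header holding an "IWAD"/"PWAD" magic, a little-endian lump count, and a directory offset, followed by raw lump data and sixteen-byte directory entries.

```python
# Minimal sketch of the WAD container (lump names and payloads invented
# for the demo). Header: 4-byte magic, little-endian lump count,
# little-endian directory offset. Each directory entry: lump offset,
# lump size, and an 8-byte NUL-padded name.
import struct

def build_wad(lumps):
    """Pack (name, bytes) pairs into a one-off PWAD image."""
    data, directory, pos = b"", b"", 12   # lump data starts right after the header
    for name, payload in lumps:
        directory += struct.pack("<ii8s", pos, len(payload), name.encode())
        data += payload
        pos += len(payload)
    header = struct.pack("<4sii", b"PWAD", len(lumps), pos)
    return header + data + directory

def read_directory(wad):
    """Yield (name, offset, size) for each lump, as a level editor would."""
    magic, numlumps, diroffs = struct.unpack_from("<4sii", wad, 0)
    assert magic in (b"IWAD", b"PWAD")
    for i in range(numlumps):
        pos, size, name = struct.unpack_from("<ii8s", wad, diroffs + 16 * i)
        yield name.rstrip(b"\0").decode(), pos, size

wad = build_wad([("E1M1", b"level data"), ("THINGS", b"monsters")])
for name, pos, size in read_directory(wad):
    print(name, pos, size)
```

Everything a tool needs to know about a WAD lives in that little directory at the end of the file, which is what made it so easy to append, replace, and redistribute custom levels without touching the engine itself.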

The origins of his generosity hearken back once again to this idea that the people who made DOOM weren’t so very different from the people who played it. One of Carmack’s formative experiences as a hacker was his exploration of Ultima II on his first Apple II. Carmack:

To go ahead and hack things to turn trees into chests or modify my gold or whatever… I loved that. The ability to go several steps further and release actual source code, make it easy to modify things, to let future generations get what I wished I had had a decade earlier—I think that’s been a really good thing. To this day I run into people all the time that say, whether it was Doom, or maybe even more so Quake later on, that that openness and that ability to get into the guts of things was what got them into the industry or into technology. A lot of people who are really significant people in significant places still have good things to say about that.

Carmack speaks of “a decade-long fight inside id about how open we should be with the technology and the modifiability.” The others grew more skeptical than ever of his commitment to what he called “open gaming” when some companies started scooping up some of the thousands of fan-made levels, plopping them onto CDs, and selling them without paying a cent to id. But in the long run, the commitment to openness kept DOOM alive; rather than a mere computer game, it became a veritable cottage industry of its own. Plenty of people played literally nothing else for months or even years at a stretch.

The debate inside id raged more than ever in 1997, when Carmack insisted on releasing the complete original source code to DOOM. (He had done the same for the Wolfenstein 3D code two years before.) As he alludes above, the DOOM code became a touchstone for an up-and-coming generation of game programmers, even as many future game designers cut their teeth and made early names for themselves by creating custom levels to run within the engine. And, inevitably, the release of the source code led to a flurry of ports to every imaginable platform: “Everything that has a 32-bit [or better] processor has had DOOM run on it,” says Carmack with justifiable pride. Today you can play DOOM on digital cameras, printers, and even thermostats, and do so if you like in hobbyist-created levels that coax the engine into entirely new modes of play that the id boys never even began to conceive of.

This narrative of DOOM bears a distinct similarity to that of another community of creation with which I happen to be much better acquainted: the post-Infocom interactive-fiction community that arose at about the same time that the original DOOM was taking the world by storm. Like the DOOM people, the interactive-fiction people built upon a beloved company’s well-nigh timeless software engineering; like them, they eventually stretched that engine in all sorts of unanticipated directions, and are still doing it to this day. A comparison between the cerebral text adventures of Infocom and the frenetic shooters of id might seem incongruous at first blush, but there you are. Long may their separate communities of love and craft continue to thrive.



As you have doubtless gathered by now, the legacy of DOOM is a complicated one that’s almost uniquely resistant to simplification. Every statement has a qualifier; every yang has a yin. This can be frustrating for a writer; it’s in the nature of us as a breed to want straightforward causes and effects. The desire for them may lead one to make trends that were obscure at best to the people living through them seem more obvious than they really were. Therefore allow me to reiterate that the new gaming order which DOOM created wouldn’t become undeniable to everyone until fully three or four years after its release. A reader recently emailed me the argument that 1996 was actually the best year ever for adventure games, the genre which, according to some oversimplified histories, DOOM and games like it killed at a stroke — and darned if he didn’t make a pretty good case for it.

So, while I’m afraid I’ll never be much of a gibber and/or fragger, we should continue to have much to talk about. Onward, then, into the new order. I dare say that from the perspective of the boots on the ground it will continue to look much like the old one for quite some time to come. And after that? Well, we’ll take it as it comes. I won’t be mooting any more stopping dates.

(Sources: the books The Complete Wargames Handbook (2000 edition) by James F. Dunnigan, Masters of Doom by David Kushner, Game Engine Black Book: DOOM by Fabien Sanglard, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Columbine by Dave Cullen; Retro Gamer 75; Game Developer of June 1994; Chris Kohler’s interview with John Carmack for Wired. And a special thanks to Alex Sarosi, a.k.a. Lt. Nitpicker, for his valuable email correspondence on the legacy of DOOM, as well as to Josh Martin for pointing out in a timely comment to the last article the delightful fact that DOOM can now be run on a thermostat.)

 


The Shareware Scene, Part 4: DOOM

The full extent of Wolfenstein 3D‘s popularity during 1992 and 1993 is difficult to quantify with any precision due to the peculiarities of the shareware distribution model. But the one thing we can say for sure is that it was enormously popular by any standard. Apogee sold roughly 200,000 copies of the paid episodes, yet that number hardly begins to express the game’s real reach. Most people who acquired the free episode were content with it alone, or couldn’t afford to buy the other installments, or had friends who had bought them already and were happy to share. It thus seems reasonable to assume that the total number of Wolfenstein 3D players reached well into seven digits, putting the game’s exposure on a par with The 7th Guest, the boxed industry’s biggest hit of 1993, the game generally agreed to have put CD-ROM on the map. And yet Wolfenstein 3D‘s impact would prove even more earthshaking than that of The 7th Guest in the long run.

One telling sign of its influence — and of the way that it was just a fundamentally different type of game than The 7th Guest, that stately multimedia showpiece — is the modding scene that sprang up around it. The game’s levels were stored in a rather easily decipherable format, and enterprising hackers were soon writing and distributing their own level editors, along with custom levels. (The most popular of them all filled the corridors of the Nazi headquarters with facsimiles of the sickly sweet, thuddingly unclever, unbelievably grating children’s-television character Barney the Dinosaur and let you take out your frustrations with an automatic weapon.) The id boys debated fiercely among themselves whether they should crack down on the modders, but John Carmack, who had read Steven Levy’s landmark book Hackers at an impressionable age and thoroughly absorbed its heroes’ ethos of openness and transparency, insisted that people be allowed to do whatever they wished with his creation. And when Carmack put his foot down, he always got his way; at the end of the day, he was the one irreplaceable member of the id collective, and every one of the others knew it.

With Wolfenstein 3D‘s popularity soaring, the id boys started eyeing the territory of the boxed publishers greedily. They struck a deal with a company called FormGen to release a seventh, lengthier installment of the game exclusively as a boxed retail product; it appeared under the name of Spear of Destiny in September of 1992. Thus readers of magazines like Computer Gaming World could scratch their heads over two separate luridly violent full-page advertisements for Wolfenstein 3D games, each with a different publisher’s name at the bottom. Spear of Destiny sold at least 100,000 copies at retail, both to hardcore Wolfenstein 3D addicts who couldn’t get enough and to many others, isolated from the typical means of shareware distribution, who came upon the game for the first time in this form.

Even Nintendo came calling with hat in hand, just a couple of years after summarily rejecting id’s offer to make a version of Super Mario Bros. 3 that ran on computers. The id boys now heeded Nintendo’s plea to port Wolfenstein 3D to the new Super Nintendo Entertainment System, whilst also grudgingly agreeing to abide by the dictates of Nintendo’s infamously strict censors. They had no idea what they had signed up for. Before they were through, Nintendo demanded that they replace blood with sweat, guard dogs with mutant rats, and Adolf Hitler, the game’s inevitable final boss, with a generic villain named the “Staatmeister.” They hated this bowdlerization with a passion, but, having agreed to do the port, they duly saw it through, muttering “Never again!” to themselves all the while. And indeed, when they were finished they took a mutual vow never to work with Nintendo again. Who needed them? The world was id’s oyster.

By now, 1992 was drawing to a close, and they all felt it was high time that they moved on to the next new thing. For everyone at id, and most especially John Carmack, was beginning to look upon Wolfenstein 3D with a decidedly jaundiced eye.


The dirty little secret that was occluded by Wolfenstein 3D‘s immense success was that it wasn’t all that great a game once it was stripped of its novelty value. Its engine was just too basic to allow for compelling level design. You glided through its corridors as if you were on a branching tram line running past a series of fairground shooting galleries, trying to shoot the Nazis who popped up before they could shoot you. The lack of any sort of in-game map meant that you didn’t even know where you were most of the time; you just kept moving around shooting Nazis until you stumbled upon the elevator to the next level. Anyone who made it through seven episodes of this — and make no mistake, there were plenty of players who did — either had an awful lot of aggression to vent or really, really loved the unprecedented look and style of the game. The levels were even boring for their designers. John Romero:

Tom [Hall] and I [designed] levels [for Wolfenstein 3D] fast. Making those levels was the most boring shit ever because they were so simple. Tom was so bored; I kept on bugging him to do it. I told him about Scott Miller’s 300ZX and George Broussard’s Acura NSX. We needed cool cars too! Whenever he got distracted, I’d tell him, “Dude, NSX! NSX!”

Tom Hall had it doubly hard. The fact was, the ultra-violence of Wolfenstein 3D just wasn’t really his thing. He preferred worlds of candy-apple red, not bloody scarlet; of precocious kids and cuddly robots, not rabid vigilantes and sadistic Nazis. Still, he was nothing if not a team player. John Romero and Adrian Carmack had gone along with him for Commander Keen, so it was only fair that he humored them with Wolfenstein 3D. But now, he thought, all of that business was finally over, and they could all start thinking about making a third Commander Keen trilogy.

Poor Tom. It took a sweetly naïve nature like his to believe that the other id boys would be willing to go back to the innocent fun of their Nintendo pastiches. Wolfenstein 3D was a different beast entirely than Commander Keen. It wasn’t remarkable just for being as good as something someone else had already done; it was like nothing anyone had ever done before. And they owned this new thing, had it all to themselves. Hall’s third Commander Keen trilogy just wasn’t in the cards — not even when he offered to do it in 3D, using an updated version of the Wolfenstein 3D engine. Cute and whimsical was id’s yesterday; gritty and bloody was their today and, if they had anything to say about it, their tomorrow as well.

Digging into their less-than-bulging bag of pop-culture reference points, the id boys pulled out the Alien film franchise. What a 3D game those movies would make! Running through a labyrinth of claustrophobic corridors, shooting aliens… that would be amazing! On further reflection, though, no one wanted the hassle that would come with trying to live up to an official license, even assuming such a thing was possible; id was still an underground insurgency at heart, bereft of lawyers and Hollywood contacts. Their thinking moved toward creating a similar effect via a different story line.

The id boys had a long-running tabletop Dungeons & Dragons campaign involving demons who spilled over from their infernal plane of existence into the so-called “Prime Material Plane” of everyday fantasy. What if they did something like that, only in a science-fiction context? Demons in space! It would be perfect! It was actually John Carmack, normally the id boy least engaged by these sorts of discussions, who proposed the name. In a scene from the 1986 Martin Scorsese movie The Color of Money, a young pool shark played by Tom Cruise struts into a bar carrying what looks like a clarinet case. “What you got in there?” asks his eventual patsy with an intimidating scowl. As our hero opens the case to reveal his pool cue, he flashes a 100-kilowatt Tom Cruise smile and says a single word: “Doom.”

Once again, Tom Hall tried to be supportive and make the best of it. He still held the official role of world-builder for id’s fictions. So, he went to work for some weeks, emerging at last with the most comprehensive design document which anyone at id had ever written, appropriately entitled The DOOM Bible. It offered plenty of opportunity for gunplay, but it also told an earnest story, in which you, as an astronaut trapped aboard a space station under assault by mysterious aliens, gradually learned to your horror that they were literal demons out of Hell, escaping into our dimension through a rift in the fabric of space-time. It was full of goals to advance and problems to solve beyond that of mowing down hordes of monsters, with a plot that evolved as you played. The history of gaming would have been markedly different, at least in the short term, if the other id boys had been interested in pursuing Hall’s path of complex storytelling within a richly simulated embodied virtual reality.

As it was, though, Hall’s ambitions landed with a resounding thud. Granted, there were all sorts of valid practical reasons for his friends to be skeptical. It was true enough that to go from the pseudo-3D engine of Wolfenstein 3D to one capable of supporting the type of complex puzzles and situations envisioned by Hall, and to get it all to run at an acceptable speed on everyday hardware, might be an insurmountable challenge even for a wizard like John Carmack. And yet the fact remains that the problem was at least as much one of motivation as one of technology. The other id boys just didn’t care about the sort of things that had Tom Hall so juiced. It again came down to John Carmack, normally the least articulate member of the group, to articulate their objections. “Story in a game,” he said, “is like story in a porn movie. It’s expected to be there, but it’s not that important.”

Tom Hall held out for several more months, but he just couldn’t convince himself to get fully onboard with the game his friends wanted to make. His relationship with the others went from bad to worse, until finally, in August of 1993, the others asked him to leave: “Obviously this isn’t working out.” By that time, DOOM was easily the most hotly anticipated game in the world, and nobody cared that it wouldn’t have a complicated story. “DOOM means two things,” said John Carmack. “Demons and shotguns!” And most of its fans wouldn’t have it any other way, then or now.


Tom Hall doesn’t look very happy about working on DOOM. Note the computer he works with: a NeXT workstation rather than an MS-DOS machine. John Carmack switched virtually all development to these $10,000 machines in the wake of Wolfenstein 3D‘s success, despite their tiny market footprint. The fact that the DOOM code was thus designed to be cross-platform from the beginning was undoubtedly a factor in the plethora of ports that appeared during and after its commercial heyday — that in fact still continue to appear today any time a new platform reaches a critical mass.

Making DOOM wound up requiring more than three times as many man-hours as anything the id boys had ever done before. It absorbed their every waking hour from January of 1993 to December of that year. Early on in that period, they decided that they wouldn’t be publishing it through Apogee. Cracks in the relationship between the id boys and Scott Miller had started forming around the latter’s business practices, which were scrupulously honest but also chaotic in that way dismayingly typical of a fast-growing business helmed by a first-time entrepreneur. Reports kept reaching id of people who wanted to buy Wolfenstein 3D, but couldn’t get through on the phone, or who managed to give Apogee their order only to have it never fulfilled.

But those complaints were perhaps just a convenient excuse. The reality was that the id boys just didn’t feel that they needed Apogee anymore. They had huge name recognition of their own now and plenty of money coming in to spend on advertising and promotion, and they could upload their new game to the major online services just as easily as Scott Miller could. Why keep giving him half of their money? Miller, for his part, handled the loss of his cash cow with graceful aplomb. He saw it as just business, nothing personal. “I would have done the same thing in their shoes,” he would frequently say in later interviews. He even hired Tom Hall to work at Apogee after the id boys cast him adrift in the foreign environs of Dallas.

Jay Wilbur now stepped into Miller’s old role for id. He prowled the commercial online services, the major bulletin-board systems, and the early Internet for hours each day, stoking the flames of anticipation here, answering questions there.

And there were lots of questions, for DOOM was actually about a bit more than demons and shotguns: it was also about technology. Whatever else it might become, DOOM was to be a showcase for the latest engine from John Carmack, a young man who was swiftly making a name for himself as the best game programmer in the world. With DOOM, he allowed himself to set the floor considerably higher in terms of system requirements than he had for Wolfenstein 3D.

System requirements have always been a moving target for any game developer. Push too hard, and you may end up releasing a game that almost no one can play; stay too conservative, and you may release something that looks like yesterday’s news. Striking precisely the right point on this continuum requires knowing your customers. The Apogee shareware demographic didn’t typically have cutting-edge computers; they tended to be younger and a bit less affluent than those buying the big boxed games. Thus id had made it possible to run Wolfenstein 3D on a two-generations-behind 80286-based machine with just 640 K of memory. The marked limitations of its pseudo-3D engine sprang as much from the limitations of such hardware as from John Carmack’s philosophy that, any time it came down to a contest between fidelity to the real world and speed, the latter should win.

He still held to that philosophy as firmly as ever when he moved on to DOOM, but the slow progression of the market’s trailing edge did give him more to work with: he designed DOOM for at least an 80386-based computer — 80486 recommended — with at least 4 MB of memory. He was able to ignore that bane of a generation of programmers, MS-DOS’s inability to seamlessly address memory beyond 640 K, by using a relatively new piece of software technology called a “DOS extender,” which built upon Microsoft’s recent memory-management innovations for their MS-DOS-hosted versions of Windows. DOS/4GW was included in the latest versions of what had heretofore been something of an also-ran in the compiler sweepstakes: the C compiler made by a small Canadian company known as Watcom. Carmack chose the Watcom compiler because of DOS/4GW; DOOM would quite literally have been impossible without it. In the aftermath of DOOM‘s prominent use of it, Watcom’s would become the C compiler of choice for game development, right through the remaining years of the MS-DOS-gaming era.

Rational Systems, the makers of DOS/4GW, were clever enough to stipulate in their licensing terms that the blurb above must appear whenever a program using it was started. Thus DOOM served as a prominent advertisement for the new software technology as it exploded across the world of computing in 1994. Soon you would have to look far and wide to find a game that didn’t mention DOS/4GW at startup.

Thanks not only to these new affordances but also — most of all, really — to John Carmack’s continuing evolution as a programmer, the DOOM engine advanced beyond that of Wolfenstein 3D in several important ways. Ironically, his work on the detested censored version of Wolfenstein 3D for the Super NES, a platform designed with 2D sprite-based games in mind rather than 3D graphics, had led him to discover a lightning-fast new way of sorting through visible surfaces, known as binary space partitioning, in a doctoral thesis by one Bruce Naylor. It had a well-nigh revelatory effect on the new engine’s capabilities.
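The core trick of binary space partitioning can be shown in miniature. The sketch below is a toy 2D illustration of the idea, not anything resembling id’s actual code: walls become nodes in a tree that splits space along the line through one wall, and a single tree walk then yields the walls in back-to-front order for any viewpoint, with no per-frame sorting. All names here are invented for the example, and the build step is deliberately naive (a real builder chooses splitters carefully and splits walls that straddle the dividing line).

```python
from dataclasses import dataclass

@dataclass
class Wall:
    x1: float
    y1: float
    x2: float
    y2: float

def side(wall, x, y):
    # Which side of the wall's infinite line is point (x, y) on?
    cross = ((wall.x2 - wall.x1) * (y - wall.y1)
             - (wall.y2 - wall.y1) * (x - wall.x1))
    return 1 if cross > 0 else -1

@dataclass
class Node:
    splitter: Wall
    front: "Node | None" = None
    back: "Node | None" = None

def build(walls):
    # Naive build: the first wall becomes the splitter, and the rest are
    # classified by their first endpoint. Done once, when the level is made.
    if not walls:
        return None
    splitter, rest = walls[0], walls[1:]
    front = [w for w in rest if side(splitter, w.x1, w.y1) > 0]
    back = [w for w in rest if side(splitter, w.x1, w.y1) <= 0]
    return Node(splitter, build(front), build(back))

def back_to_front(node, vx, vy):
    # Painter's-algorithm ordering: recurse into the subtree on the far
    # side of the splitter from the viewer first, so distant walls are
    # emitted (and drawn) before the ones that occlude them.
    if node is None:
        return []
    if side(node.splitter, vx, vy) > 0:
        far, near = node.back, node.front
    else:
        far, near = node.front, node.back
    return (back_to_front(far, vx, vy)
            + [node.splitter]
            + back_to_front(near, vx, vy))
```

The expensive work happens once at level-build time; at play time the ordering costs only a tree traversal, which is what made the approach so attractive on a 386.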

That said, the new engine did remain caught, like its predecessor, in a liminal space between 2D and true 3D; it was just that it moved significantly further along the continuum toward the latter. No longer must everything and everyone exist on the same flat horizontal plane; you could now climb stairs and walk onto desks and daises. And walls must no longer all be at right angles to one another, meaning the world need no longer resemble one of those steel-ball mazes children used to play with.

The DOOM level editor was a much more complicated tool than its Wolfenstein 3D equivalent, reflecting the enhanced capabilities of John Carmack’s latest engine. Most notably, the designer now had variable height at his disposal.

On the other hand, walls must still all be exactly vertical, and floors and ceilings must all be exactly horizontal; DOOM allowed stairs but not hills or ramps. These restrictions made it possible to map textures onto the environment without the ugly discontinuities that had plagued Blue Sky Productions’s earlier but more “honest” 3D game Ultima Underworld. DOOM makes such a useful study in game engineering because it so vividly illustrates that faking it convincingly for the sake of the player is better than simulating things which delight only the programmer of the virtual world. Its engine is perfect for the game it wants to be.

In a telling sign of John Carmack’s march toward a more complete 3D engine, the monsters in DOOM were sculpted as three-dimensional physical models by Adrian Carmack and Greg Punchatz, an artist hired just for the task. (The former is shown above.) The id boys then took snapshots of the models from eight separate angles for insertion into the game.

The value of the simple addition of height to the equation was revealed subtly — admittedly not an adverb often associated with DOOM! — as soon as you started the game. Instead of gliding smoothly about like a tram, your view now bobbed with uncanny verisimilitude as you ran about. You might never consciously notice the effect, but it made a huge difference to your feeling of really being in the world; if you tried to go back to Wolfenstein 3D after playing DOOM, you immediately had the feeling that something was somehow off.

But the introduction of varying height was most important for what it meant in terms of the game’s tactical possibilities. Now monsters could stand on balconies and shoot fireballs down at you, or you could do the same to them. Instead of a straightforward shooting gallery, the world of DOOM became a devious place of traps and ambushes. Carmack’s latest engine also supported variable levels of lighting for the first time, which opened up a whole new realm of both dramatic and tactical possibility in itself; entering an unexplored pitch-dark room could be, to say the least, an intimidating prospect.
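The data behind those two new affordances is simple to sketch. DOOM’s maps divide the world into “sectors,” polygonal regions each carrying its own floor height, ceiling height, and light level; the field names below are paraphrased from the community’s documentation of the WAD format rather than taken from id’s source, and the step-height constant is likewise drawn from that documentation.

```python
from dataclasses import dataclass

@dataclass
class Sector:
    floor_height: int    # varies per sector: stairs, platforms, pits
    ceiling_height: int
    light_level: int     # 0 (pitch dark) to 255 (fully lit)

def can_step_up(player_floor: int, target: Sector, max_step: int = 24) -> bool:
    # DOOM lets the player walk up small ledges automatically; a rise of
    # more than about 24 map units blocks movement entirely, which is why
    # there are stairs everywhere but no jumping.
    return target.floor_height - player_floor <= max_step
```

A designer raising one sector’s floor a little makes a step; raising it a lot makes a balcony for monsters to snipe from; dropping its light level makes that pitch-dark room you hesitate to enter.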

This outdoor scene nicely showcases some of the engine’s capabilities. Note the fireball flying toward you. It’s implemented as a physical object in the world like any other.

In addition, the new engine dramatically improved upon the nearly non-existent degree of physics simulation in Wolfenstein 3D. Weight and momentum were implemented; even bullets were simulated as physical objects in the world. A stereo soundscape was implemented as well; in addition to being unnerving as all get-out, it could become another vital tactical tool. Meanwhile the artificial intelligence of the monsters, while still fairly rudimentary, advanced significantly over that of Wolfenstein 3D. It was even possible to lure two monsters into fighting each other instead of you.

John Carmack also added a modicum of support for doing things other than killing monsters, although to nowhere near the degree once envisioned by Tom Hall. The engine could be used to present simple set-piece interactions, such as locked doors and keys, switches and levers for manipulating parts of the environment: platforms could move up and down, bridges could extend and retract. And in recognition of this added level of complexity, which could suddenly make the details of the geography and your precise position within it truly relevant, the engine offered a well-done auto-map for keeping track of those things.


The DOOM automap, an impressive technical achievement in itself.

Of course, none of these new affordances would matter without level designs that took advantage of them. The original plan was for Tom Hall and John Romero to create the levels. But, as we’ve seen, Hall just couldn’t seem to hit the mark that the id boys were aiming for. After finally dismissing him, they realized that Romero still needed help to shoulder the design burden. It arrived from a most unlikely source — from a fellow far removed from the rest of the id boys in age, experience, and temperament.

Sandy Petersen was already a cult hero in certain circles for having created a tabletop RPG called Call of Cthulhu in 1981. Based on the works of the horror writer H.P. Lovecraft, it was the first RPG ever to convincingly transcend the kill-monsters-to-level-up-so-you-can-kill-bigger-monsters dynamic of Dungeons & Dragons. But Call of Cthulhu remained a cult game even when the tabletop-RPG boom was at its height, and by the early 1990s Petersen was serving as an in-house design consultant at the computer-game publisher MicroProse. Unhappy in this role, he sent his résumé to the upstart id.

The résumé was greeted with considerable skepticism. It’s doubtful whether any of the id boys fully grasped the significance of Petersen’s achievement with Call of Cthulhu; while they were hardcore tabletop-RPG players, they were perfectly happy with the traditional power-gaming approach of Dungeons & Dragons, thank you very much. Still, the résumé was more impressive than any other they had received, and they did urgently need a level designer… so they called him in for an interview.

Their initial skepticism wasn’t lessened by the man himself. Petersen was pudgy and balding, looking even older than his already ancient 38 years, coming across rather like a genial university professor. And he was a devout Mormon to boot, washed up among this tribe of atheists and nihilists. Surely it could never work out.

Nevertheless, they decided to grant him the favor of a test before they rejected him; he had, after all, flown all the way from Baltimore to Dallas just to meet with them. They gave him a brief introduction to the DOOM engine and its level editor, and asked him to throw something together for them. Within minutes, Petersen produced a cunningly dramatic trap room, featuring lights that suddenly winked out when the player entered and a demon waiting in ambush behind a hidden door. He was hired.

Romero and Petersen proved to complement each other very well, with individual design aesthetics that reflected their personalities. Romero favored straight-up carnage — the more demon blood the better — while Petersen evinced a subtler, more cerebral approach in levels that could almost have a puzzle-like feel, where charging in with shotgun blazing was usually not the best tactic. Together the two approaches gave the game a nice balance.

Indeed, superb level design became DOOM‘s secret weapon, one that has allowed it to remain relevant to this day, when its degree of gore and violence seems humdrum, its pixels look as big as houses, and the limitations of its engine seem downright absurd. (You can’t even look up or down, for Pete’s sake. Nor is there a “jump” command, meaning that your brawny super-soldier can be stopped in his tracks by an inconveniently high curb.)

It’s disarmingly easy to underestimate DOOM today on your first encounter with it, simply because its visual aesthetic seems so tossed-off, so hopelessly juvenile; it’s the same crude mixture of action movies, heavy-metal album covers, and affected adolescent nihilism that defined the underground game-cracking scene of the 1980s. And yet behind it all is a game design that oozes as much thought and care as it does blood. These levels were obsessed over by their designers, and then, just as importantly, extensively critiqued by the other id boys and their immediate hangers-on, who weren’t inclined to pull their punches. Whatever your opinion of DOOM as a whole and/or the changes it wrought to the culture of gaming — I for one have thoroughly mixed feelings at best on both of those subjects — one cannot deny that it’s a veritable clinic of clever level design. In this sense, it still offers lessons for today’s game developers, whether they happen to be working inside or outside of the genre it came to define.


Subtle DOOM isn’t…

DOOM‘s other, not-so-secret weapon went by the name of “deathmatch.”

There had been significant experimentation with networked gaming on personal computers in the past: the legendary designer Dani Bunten Berry had spent the last half-decade making action-strategy games that were primarily or exclusively intended to be played by two humans connected via modem; Peter Molyneux’s “god game” Populous and its sequels had also allowed two players to compete on linked computers, as had a fair number of others. But computer-to-computer multiplayer-only games never sold very well, and most games that had networked multiplayer as an option seldom saw it used. Most people in those days didn’t even own modems; most computers were islands unto themselves.

By 1993, however, the isolationist mode of computing was slowly being nibbled away at. Not only was the World Wide Web on the verge of bursting into the cultural consciousness, but many offices and campuses were already networked internally, mostly using the systems of a company known as Novell. In fact, the id boys had just such a system in their Dallas office. When John Carmack told John Romero many months into the development of DOOM that multiplayer was feasible, the latter’s level of excitement was noteworthy even for him: “If we can get this done, this is going to be the fucking coolest game that the planet Earth has ever fucking seen in its entire history.” And it turned out that they could get it done because John Carmack was a programming genius.

While Carmack also implemented support for a modem connection or a direct computer-to-computer cable, it was under Novell’s IPX networking protocol that multiplayer DOOM really shone. Here you had a connection that was rock-solid and lightning-fast — and, best of all, here you could have up to four players in the same world instead of just two. You could tackle the single-player game as a team if you wanted to, but the id boys all agreed that deathmatch — all-out player-versus-player anarchy — was where the real fun lived. It made DOOM into more of a sport than a conventional computer game, something you could literally play forever. Soon the corridors at id were echoing with cries of “Suck it down!” as everyone engaged in frenzied online free-for-alls. Deathmatch was, in the diction of the id boys, “awesome.” It wasn’t just an improvement on what Wolfenstein 3D had done; it was something fundamentally different from it, something that was genuinely new under the sun. “This is the shit!” chortled Romero, and for once it sounded like an understatement.



The excitement over DOOM had reached a fever pitch by the fall of 1993. Some people seemed on the verge of a complete emotional meltdown, and launched into overwrought tirades every time Jay Wilbur had to push the release date back a bit more; people wrote poetry about the big day soon to come (“The Night Before DOOM“), and rang id’s offices at all hours of the day and night like junkies begging for a fix.

Even fuddy-duddy old Computer Gaming World stopped by the id offices to write up a two-page preview. This time out, no reservations whatsoever about the violence were expressed, much less any of the full-fledged hand-wringing that had been seen earlier from editor Johnny Wilson. Far from giving in to the gaming establishment, the id boys were, slowly but surely, remaking it in their own image.

At last, id announced that the free first episode of DOOM would go up at the stroke of midnight on December 10, 1993, on, of all places, the file server belonging to the University of Wisconsin–Parkside. When the id boys tried to log on to do the upload, so many users were already online waiting for the file to appear that they couldn’t get in; they had to call the university’s system administrator and have him kick everyone else off. Then, once the file did appear, the server promptly crashed under the load of 10,000 people, all trying to get DOOM at once on a system that expected no more than 175 users at a time. The administrator rebooted it; it crashed again. They would have a hard go of things at the modest small-town university for quite some time to come.



Legend had it that when Don Woods first uploaded his and Will Crowther’s game Adventure in 1977, all work in the field of data processing stopped for a week while everyone tried to solve it. Now, not quite seventeen years later, something similar happened in the case of DOOM, arguably the most important computer game to appear since Adventure. The id boys had joked in an early press release that they expected DOOM to become “the number-one cause of decreased productivity in businesses around the world.” Even they were surprised by the extent to which that prediction came true.

Network administrators all over the world had to contend with this new phenomenon known as deathmatch. John Carmack had had no experience with network programming before DOOM, and in his naïveté had used a transmission method known as a broadcast packet that forced every computer on the network, whether it was running DOOM or not, to stop and analyze every packet which every DOOM-playing computer generated. As reports of the chaos that resulted poured in, Carmack scrambled to code an update which would use machine-to-machine packets instead.
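The difference between the two approaches is easy to see in miniature. DOOM actually spoke IPX rather than UDP, but the broadcast-versus-unicast distinction is the same; the sketch below uses Python’s socket module purely for illustration, and the port number and function names are made up for the example.

```python
import socket

GAME_PORT = 53531  # arbitrary port chosen for this example

def send_broadcast(payload: bytes, port: int = GAME_PORT):
    # The DOOM 1.0 approach: one packet addressed to everyone on the LAN.
    # Every machine's network stack must receive and inspect it, whether
    # or not that machine is running the game -- multiply by 35 packets
    # per second per player and the whole network grinds to a halt.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(payload, ("255.255.255.255", port))
    s.close()

def send_to_peers(payload: bytes, peers, port: int = GAME_PORT):
    # The patched approach: one addressed packet per player actually in
    # the game. Machines not playing never see any of the traffic.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for host in peers:
        s.sendto(payload, (host, port))
    s.close()
```

With four players, the unicast version puts a few small packets on the wire per update and bothers no one else; the broadcast version conscripts every idle workstation on the segment into processing game traffic.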

In the meantime, DOOM brought entire information-technology infrastructures to their knees. Intel banned the game; high-school and university computer labs hardly knew what had hit them. A sign posted at Carnegie Mellon University before the day of release was even over was typical: “Since today’s release of DOOM, we have discovered [that the game is] bringing the campus network to a halt. Computing Services asks that all DOOM players please do not play DOOM in network mode. Use of DOOM in network mode causes serious degradation of performance for the players’ network, and during this time of finals network use is already at its peak. We may be forced to disconnect the PCs of those who are playing the game in network mode. Again, please do not play DOOM in network mode.” One clever system administrator at the University of Louisville created a program to search the hard drives of all machines on the network for the game, and delete it wherever it was found. All to no avail: DOOM was unstoppable.

But in these final months of the mostly-unconnected era of personal computing — the World Wide Web would begin to hit big over the course of 1994 — a game still needed to reach those without modems or network cards in their computers in order to become a hit on the scale that id envisioned for DOOM. Jay Wilbur, displaying a wily marketing genius that went Scott Miller one better, decided that absolutely everyone should be allowed to distribute the first episode of DOOM on disk, charging whatever they could get for it: “We don’t care if you make money off this shareware demo. Move it! Move it in mass quantities.” For distribution, Wilbur realized, was the key to success. There are many ways to frame the story of DOOM, but certainly one of them is a story of guerrilla marketing at its finest.

The free episode of DOOM appeared in stores under many different imprints, but most, like this Australian edition, used the iconic cover id themselves provided. John Romero claims that he served as the artist’s model for the image.

The incentives for distribution were massive. If a little mom-and-pop operation in, say, far-off Australia could become the first to stick that episode onto disks, stick those disks in a box, and get the box onto store shelves, they could make a killing, free and clear. DOOM became omnipresent, inescapable all over the world. When you logged into CompuServe, there was DOOM; when you wandered into your local software store, there was DOOM again, possibly in several different forms of packaging; when you popped in the disk or CD that came with your favorite gaming magazine, there it was yet again. The traditional industry was utterly gobsmacked by this virulent weed of a game.

As with Wolfenstein 3D, a large majority of the people who acquired the first episode of DOOM in one way or another were perfectly satisfied with its eight big levels and unlimited deathmatch play; plenty of others doubtless never bothered to read the fine print, never even realized that more DOOM was on offer if they called 1-800-IDGAMES with their credit card in hand. And then, of course, there was the ever-present specter of piracy; nothing whatsoever stopped buyers of the paid episodes from sharing them with all of their DOOM-loving friends. By some estimates, the conversion rate from the free to the paid episodes was as low as 1 percent. Nevertheless, it was enough to make the id boys very, very rich young men.

Sometimes $100,000 worth of orders would roll in on a single day. John Carmack and John Romero each went out and bought a new Ferrari Testarossa; now it was the turn of Scott Miller and George Broussard to look on the id boys’ cars with envy. Glossy magazines, newspapers, and television news programs all begged to visit the id offices, where they marveled at the cars in the parking lot and the unkempt young men inside screaming the most horrid scatological and sexual insults at one another as they played deathmatch. If nothing else, the id boys were certainly a colorful story.

The id boys’ cars got almost as much magazine coverage as their games. Here we see John Carmack with his Ferrari, which he had modified to produce 800 horsepower: “I want dangerous acceleration.”

Indeed, the id story is as close as gaming ever came to fulfilling one of its most longstanding dreams: that of game developers as rock stars, as first articulated by Trip Hawkins in 1983 upon his founding of Electronic Arts. Yet if Hawkins’s initial stable of developers, so carefully posed in black and white in EA’s iconic early advertisements, resembled an artsy post-punk band — the interactive version of Talking Heads — the id boys were meat-and-potatoes heavy metal for the masses — Metallica at their Black Album peak. John Romero, the id boy who most looked the part of rock star, particularly reveled in the odd sort of obsequious hero worship that marks certain corners of gamer culture. He almost visibly swelled with pride every time a group of his minions started chanting “We’re not worthy!” and literally bowed down in his presence, and wore his “DOOM: Wrote It!” tee-shirt until the print peeled off.

The impact DOOM was having on the industry had become undeniable by the time of the Summer Consumer Electronics Show in June of 1994. Here everyone seemed to want in on id’s action. The phrase “first-person shooter” had yet to be invented, so the many soon-to-be-released games of the type were commonly referred to as “DOOM clones” — or, as Computer Gaming World preferred, “DOOM toos.” The same magazine, still seeming just a trifle ambivalent about it all, called it the “3D action fad.” But this was no fad; these games were here to stay. The boxed publishers who had scoffed at the shareware scene a year or two before were now all scrambling to follow id’s lead. LucasArts previewed a DOOM clone set in the Star Wars universe; SSI, previously known for their complicated strategic war games and licensed Dungeons & Dragons CRPGs, dipped a toe into these very different waters with something called CyClones.

And then, inevitably, there was id’s own DOOM II: Hell on Earth. As a piece of game design, it evinced no sign of the dreaded sophomore slump that afflicts so many rock groups — this even though it used the exact same engine as its predecessor, and even though John Romero, id’s rock-star-in-chief, was increasingly busy with extracurriculars and contributed only a handful of levels. His slack was largely taken up by one American McGee, the latest scruffy rebel to join the id boys, a 21-year-old former auto mechanic who had suffered through an even more hardscrabble upbringing than the two Johns. After beginning at id as a tester, he had gradually revealed an uncanny talent for making levels that combined the intricacy of Sandy Petersen’s with the gung-ho flair of John Romero’s. Now, he joined Petersen and, more intermittently, Romero to create a game that was if anything even more devious than its predecessor. The id boys had grown cockier than ever, but they could still back it up.

John Romero in 1994, doing something the other id boys wished he would do a bit more of: making a level for DOOM II.

They were approached by a New York City wheeler-and-dealer named Ron Chaimowitz who wanted to publish DOOM II exclusively to retail. His was not an established name in the gaming world; he had come of age in the music industry, where he had broken big acts like Gloria Estefan and Julio Iglesias during the previous decade, and he was now publishing Jane Fonda’s workout videos through a company called GoodTimes Entertainment. But he had distribution connections — and, as Jay Wilbur had so recently proved, distribution often means everything. GoodTimes sold millions of videotapes through Wal-Mart, the exploding epicenter of heartland retail, and Chaimowitz promised that the new software label he had in mind would be able to leverage those connections. He further promised to spend $2 million on advertising. He would prove as good as his word in both respects. The new GT Interactive manufactured an extraordinary 600,000 copies of DOOM II prior to its release, marking by far the largest initial production run in the history of computer gaming to date.

In marked contrast to the simple uploading of the first episode of the original DOOM, DOOM II was launched with all the pomp and circumstance that a $2 million promotional budget could provide. A party to commemorate the event took place on October 10, 1994, at a hip Gothic night club in New York City which had been re-decorated in a predictably gory manner. The party even came complete with protesters against the game’s violence, to add that delicious note of controversy that any group of rock stars worth their salt requires.

At the party, a fellow named Bob Huntley, owner of a small Houston software company, foisted a disk on John Romero containing “The Dial-Up Wide-Area Network Games Operation,” or “DWANGO.” Using it, you could dial into Huntley’s Houston server at any time to play a pick-up game of four-player DOOM deathmatch with strangers who might happen to be on the other side of the world. Romero expressed his love for the concept in his trademark profane logorrhea: “I like staying up late and I want to play people whenever the fuck I want to and I don’t want to have to wake up my buddy at three in the morning and go, ‘Hey, uh, you wanna get your skull cracked?’ This is the thing that you can dial into and just play!” He convinced the other id boys to give DWANGO their official endorsement, and the service went live within weeks. For just $8.96 per month, you could now deathmatch any time you wanted. And thus another indelible piece of modern gaming culture, as well as a milestone in the cultural history of the Internet, fell into place.

DOOM was becoming not just a way of gaming but a way of life, one that left little space in the hearts of its most committed adherents for anything else. Some say that gaming became better after DOOM, some that it became worse. One thing that everyone can agree on, however, is that it changed; it’s by no means unreasonable to divide the entire history of computer gaming into pre-DOOM and post-DOOM eras. Next time, then, in the concluding article of this series, we’ll do our best to come to terms with that seismic shift.

(Sources: the books Masters of Doom by David Kushner, Game Engine Black Book: Wolfenstein 3D and Game Engine Black Book: DOOM by Fabien Sanglard, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke; Retro Gamer 75; Game Developer premiere issue and issues of June 1994 and February/March 1995; Computer Gaming World of July 1993, March 1994, July 1994, August 1994, September 1994. Online sources include “Apogee: Where Wolfenstein Got Its Start” by Chris Plante at Polygon, “Rocket Jump: Quake and the Golden Era of First-Person Shooters” by David L. Craddock at Shack News, Benj Edwards’s interview with Scott Miller for Game Developer, Jeremy Peel’s interview with John Romero for PCGamesN, and Jay Wilbur’s old Usenet posts, which can now be accessed via Google Groups. And a special thanks to Alex Sarosi, better known in our comment threads as Lt. Nitpicker, for pointing out to me how important Jay Wilbur’s anything-goes approach to distribution of the free episode of DOOM was to the game’s success.

The original Doom episodes and Doom II are available as digital purchases on GOG.com.)
