The Rise of POMG, Part 1: It Takes a Village…

No one on their deathbed ever said, “I wish I had spent more time alone with my computer!”

— Dani Bunten Berry

If you ever want to feel old, just talk to the younger generation.

A few years ago now, I met the kids of a good friend of mine for the very first time: four boys between the ages of four and twelve, all more or less crazy about videogames. As someone who spends a lot of his time and earns a lot of his income writing about games, I arrived at their house with high expectations attached.

Alas, I’m afraid I proved a bit of a disappointment to them. The distance between the musty old games that I knew and the shiny modern ones that they played was just too far to bridge; shared frames of reference were tough to come up with. This was more or less what I had anticipated, given how painfully limited I already knew my knowledge of modern gaming to be. But one thing did genuinely surprise me: it was tough for these youngsters to wrap their heads around the very notion of a game that you played to completion by yourself and then put on the shelf, much as you might a book. The games they knew, from Roblox to Fortnite, were all social affairs that you played online with friends or strangers, that ended only when you got sick of them or your peer group moved on to something else. Games that you played alone, without at the very least leaderboards and achievements on hand to measure yourself against others, were utterly alien to them. It was quite a reality check for me.

So, I immediately started to wonder how we had gotten to this point — a point not necessarily better or worse than the sort of gaming that I knew growing up and am still most comfortable with, just very different. This series of articles should serve as the beginning of an answer to that complicated question. Its primary focus is not so much how computer games went multiplayer, nor even how they first went online; those things are in some ways the easy, obvious parts of the equation. It’s rather how games did those things persistently — i.e., permanently, so that each session became part of a larger meta-game, if you will, embedded in a virtual community. Or perhaps the virtual community is embedded in the game. It all depends on how you look at it, and which precise game you happen to be talking about. Whichever way, it has left folks like me, whose natural tendency is still to read games like books with distinct beginnings, middles, and ends, anachronistic iconoclasts in the eyes of the youthful mainstream.

Which, I hasten to add, is perfectly okay; I’ve always found the ditch more fun than the middle of the road anyway. Still, sometimes it’s good to know how the other 90 percent lives, especially if you claim to be a gaming historian…



“Persistent online multiplayer gaming” (POMG, shall we say?) is a mouthful to be sure, but it will have to do for lack of a better descriptor of the phenomenon that has created such a divide between myself and my friend’s children.  It’s actually older than you might expect, having first come to be in the 1970s on PLATO, a non-profit computer network run out of the University of Illinois but encompassing several other American educational institutions as well. Much has been written about this pioneering network, which uncannily presaged in so many of its particulars what the Internet would become for the world writ large two decades later. (I recommend Brian Dear’s The Friendly Orange Glow for a book-length treatment.) It should suffice for our purposes today to say that PLATO became host to, among other online communities of interest, an extraordinarily vibrant gaming culture. Thanks to the fact that PLATO games lived on a multi-user network rather than standalone single-user personal computers, they could do stuff that most gamers who were not lucky enough to be affiliated with a PLATO-connected university would have to wait many more years to experience.

The first recognizable single-player CRPGs were born on PLATO in the mid-1970s, inspired by the revolutionary new tabletop game known as Dungeons & Dragons. They were followed by the first multiplayer ones in amazingly short order. Already in 1975’s Moria (a completely different game from the 1983 single-player roguelike that bore the same name), players met up with their peers online to chat, brag, and sell or trade loot to one another. When they were ready to venture forth to kill monsters, they could do so in groups of up to ten, pooling their resources and sharing the rewards. A slightly later PLATO game called Oubliette implemented the same basic concept in an even more sophisticated way. The degree of persistence of these games was limited by a lack of storage capacity — the only data that was saved between sessions were the statistics and inventory of each player’s character, with the rest of the environment being generated randomly each time out — but they were miles ahead of anything available for the early personal computers that were beginning to appear at the same time. Indeed, Wizardry, the game that cemented the CRPG’s status as a staple genre on personal computers in 1981, was in many ways simply a scaled-down version of Oubliette, with the multiplayer party replaced by a party of characters that were all controlled by the same player.

Chester Bolingbroke, better known online as The CRPG Addict, plays Moria. Note the “Group Members” field at bottom right. Chester is alone here, but he could be adventuring with up to nine others.

A more comprehensive sort of persistence arrived with the first Multi-User Dungeon (MUD), developed by Roy Trubshaw and Richard Bartle, two students at the University of Essex in Britain, and first deployed there in a nascent form in late 1978 or 1979. A MUD borrowed the text-only interface and presentation of Will Crowther and Don Woods’s seminal game of Adventure, but the world it presented was a shared, fully persistent one between its periodic resets to a virgin state, chockablock with other real humans to interact with and perhaps fight. “The Land,” as Bartle dubbed his game’s environs, expanded to more than 600 rooms by the early 1980s, even as its ideas and a good portion of its code were used to set up other, similar environments at many more universities.

In the meanwhile, the first commercial online services were starting up in the United States. By 1984, you could, for the price of a substantial hourly fee, dial into the big mainframes of services like CompuServe using your home computer. Once logged in there, you could socialize, shop, bank, make travel reservations, read newspapers, and do much else that most people wouldn’t begin to do online until more than a decade later — including gaming. For example, CompuServe offered MegaWars, a persistent grand-strategy game of galactic conquest whose campaigns took groups of up to 100 players four to six weeks to complete. (Woe betide the ones who couldn’t log in for some reason of an evening in the midst of that marathon!) You could also find various MUDs, as well as Island of Kesmai, a multiplayer CRPG boasting most of the same features as PLATO’s Oubliette in a genuinely persistent world rather than a perpetually regenerated one. CompuServe’s competitor GEnie had Air Warrior, a multiplayer flight simulator with bitmapped 3D graphics and sound effects to rival any of the contemporaneous single-player simulators on personal computers. For the price of $11 per hour, you could participate in grand Air Warrior campaigns that lasted three weeks each and involved hundreds of other subscribers, organizing and flying bombing raids and defending against the enemy’s attacks on their own lines. In 1991, America Online put up Neverwinter Nights (not the same game as the 2002 BioWare CRPG of the same name), which did for the “Gold Box” line of licensed Dungeons & Dragons CRPGs what MUD had done for Adventure and Air Warrior had done for flight simulators, transporting the single-player game into a persistent multiplayer space.

All of this stuff was more or less incredible in the context of the times. At the same time, though, we mustn’t forget that it was strictly the purview of a privileged elite, made up of those with login credentials for institutional-computing networks or money in their pockets to pay fairly exorbitant hourly fees to feed their gaming habits. So, I’d like to back up now and tell a different story of POMG — one with more of a populist thrust, focusing on what was actually attainable by the majority of people out there, the ones who neither had access to a university’s mainframe nor could afford to spend hundreds of dollars per month on a hobby. Rest assured that the two narratives will meet before all is said and done.



POMG came to everyday digital gaming in the reverse order of the words that make up the acronym: first games were multiplayer, then they went online, and then these online games became persistent. Let’s try to unpack how that happened.

From the very start, many digital games were multiplayer, optionally if not unavoidably so. Spacewar!, the program generally considered the first fully developed graphical videogame, was exclusively multiplayer from its inception in the early 1960s. Ditto Pong, the game that launched Atari a decade later, and with it a slow-building popular craze for electronic games, first in public arcades and later in living rooms. Multiplayer here was not so much down to design intention as technological affordances. Pong was an elaborate analog state machine rather than a full-blown digital computer, relying on decentralized resistors and potentiometers and the like to do its “thinking.” It was more than hard enough just to get a couple of paddles and a ball moving around on the screen of a gadget like this; a computerized opponent was a bridge too far.

Very quickly, however, programmable microprocessors entered the field, changing everyone’s cost-benefit analyses. Building dual controls into an arcade cabinet was expensive, and the end result tended to take up a lot of space. The designers of arcade classics like Asteroids and Galaxian soon realized that they could replace the complications of a human opponent with hordes of computer-controlled enemies, flying in rudimentary, partially randomized patterns. Bulky multiplayer machines thus became rarer and rarer in arcades, replaced by slimmer, more standardized single-player cabinets. After all, if you wanted to compete with your friends in such games, there was still a way to do so: you could each play a round against the computerized enemies and compare your scores afterward.

While all of this was taking shape, the Trinity of 1977 — the Radio Shack TRS-80, Apple II, and Commodore PET — had ushered in the personal-computing era. The games these early microcomputers played were sometimes ports or clones of popular arcade hits, but just as often they were more cerebral, conceptually ambitious affairs where reflexes didn’t play as big — or any — role: flight simulations, adventure games, war and other strategy games. The last were often designed to be played optimally or even exclusively against another human, largely for the same reason Pong had been made that way: artificial intelligence was a hard thing to implement under any circumstances on an 8-bit computer with as little as 16 K of memory, and it only got harder when you were asking said artificial intelligence to formulate a strategy for Operation Barbarossa rather than to move a tennis racket around in front of a bouncing ball. Many strategy-game designers in these early days saw multiplayer options almost as a necessary evil, a stopgap until the computer could fully replace the human player, thus alleviating that eternal problem of the war-gaming hobby on the tabletop: the difficulty of finding other people in one’s neighborhood who were able and willing to play such weighty, complex games.

At least one designer, however, saw multiplayer as a positive advantage rather than a kludge — in fact, as the way the games of the future by all rights ought to be. “When I was a kid, the only times my family spent together that weren’t totally dysfunctional were when we were playing games,” remembered Dani Bunten Berry. From the beginning of her design career in 1979, when she made an auction game called Wheeler Dealers for the Apple II (Wheeler Dealers and all of her other games mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992), multiplayer was her priority. In fact, she was willing to go to extreme lengths to make it possible; in addition to a cassette tape containing the software, Wheeler Dealers shipped with a custom-made hardware add-on, the only method she could come up with to let four players bid at once. Such experiments culminated in M.U.L.E., one of the first four games ever published by Electronic Arts, a deeply, determinedly social game of economics and, yes, auctions for Atari and Commodore personal computers that many people, myself included, still consider her unimpeachable masterpiece.

A M.U.L.E. auction in progress.

And yet it was Seven Cities of Gold, her second game for Electronic Arts, that became a big hit. Ironically, it was also the first she had ever made with no multiplayer option whatsoever. She was learning to her chagrin that games meant to be played together on a single personal computer were a hard sell; such machines were typically found in offices and bedrooms, places where people went to isolate themselves, not in living rooms or other spaces where they went to be together. She decided to try another tack, thereby injecting the “online” part of POMG into our discussion.

In 1988, Electronic Arts published Berry’s Modem Wars, a game that seems almost eerily prescient in retrospect, anticipating the ludic zeitgeist of more than a decade later with remarkable accuracy. It was a strategy game played in real time (although not quite a real-time strategy of the resource-gathering and army-building stripe that would later be invented by Dune II and popularized by Warcraft and Command & Conquer). And it was intended to be played online against another human sitting at another computer, connected to yours by the gossamer thread of a peer-to-peer modem hookup over an ordinary telephone line. Like most of Berry’s games, it didn’t sell all that well, being a little too far out in front of the state of her nation’s telecommunications infrastructure.

Nevertheless, she continued to push her agenda of computer games as ways of being entertained together rather than alone over the years that followed. She never did achieve the breakout hit she craved, but she inspired countless other designers with her passion. She died far too young in 1998, just as the world was on the cusp of embracing her vision on a scale that even she could scarcely have imagined. “It is no exaggeration to characterize her as the world’s foremost authority on multiplayer computer games,” said Brian Moriarty when he presented Dani Bunten Berry with the first ever Game Developers Conference Lifetime Achievement Award two months before her death. “Nobody has worked harder to demonstrate how technology can be used to realize one of the noblest of human endeavors: bringing people together. Historians of electronic gaming will find in these eleven boxes [representing her eleven published games] the prototypes of the defining art form of the 21st century.” Let this article and the ones that will follow it, written well into said century, serve as partial proof of the truth of his words.

Danielle Bunten Berry, 1949-1998.

For by the time Moriarty spoke them, other designers had been following the trails she had blazed for quite some time, often with much more commercial success. A good early example is Populous, Peter Molyneux’s strategy game in real time (although, again, not quite a real-time strategy) that was for most of its development cycle strictly a peer-to-peer online multiplayer game, its offline single-player mode being added only during the last few months. An even better, slightly later one is DOOM, John Carmack and John Romero’s game of first-person 3D mayhem, whose star attraction, even more so than its sadistic single-player levels, was the “deathmatch” over a local-area network. Granted, these testosterone-fueled, relentlessly zero-sum contests weren’t quite the same as what Berry was envisioning for gaming’s multiplayer future near the end of her life; she wished passionately for games with a “people orientation,” directed toward “the more mainstream, casual players who are currently coming into the PC market.” Still, as the saying goes, you have to start somewhere.

But there is once more a caveat to state here about access, or rather the lack thereof. Being built for local networks only — i.e., networks that lived entirely within a single building or at most a small complex of them — DOOM deathmatches were out of reach on a day-to-day basis for those who didn’t happen to be students or employees at institutions with well-developed data-processing departments and permissive or oblivious authority figures. Outside of those ivory towers, this was the era of the “LAN party,” when groups of gamers would all lug their computers over to someone’s house, wire them together, and go at it over the course of a day or a weekend. These occasions went on to become treasured memories for many of their participants, but they achieved that status precisely because they were so sporadic and therefore special.

And yet DOOM’s rise corresponded with the transformation of the Internet from an esoteric tool for the technological elite to the most flexible medium of communication ever placed at the disposal of the great unwashed, thanks to a little invention out of Switzerland called the World Wide Web. What if there was a way to move DOOM and other games like it from a local network onto this one, the mother of all wide-area networks? Instead of deathmatching only with your buddy in the next cubicle, you would be able to play against somebody on another continent if you liked. Now wouldn’t that be cool?

The problem was that local-area networks ran over a protocol known as IPX, while the Internet ran on a completely different one called TCP/IP. Whoever could bridge that gap in a reasonably reliable, user-friendly way stood to become a hero to gamers all over the world.
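To make the trick concrete: a bridge of this kind captures each IPX frame the game emits, wraps it in a small header, carries it across the Internet inside a TCP/IP (or UDP) packet, and unwraps it on the far side. The sketch below is a toy illustration of that encapsulation step only; the header format and all names are invented for the example, and none of it is based on TCPSetup’s actual code.

```python
import socket
import struct

MAGIC = b"IPXT"  # hypothetical marker identifying a tunneled frame

def encapsulate(ipx_frame: bytes) -> bytes:
    """Prefix an opaque IPX frame with a magic tag and its length."""
    return MAGIC + struct.pack("!H", len(ipx_frame)) + ipx_frame

def decapsulate(datagram: bytes) -> bytes:
    """Strip the tunnel header and return the original IPX frame."""
    assert datagram[:4] == MAGIC, "not a tunneled frame"
    (length,) = struct.unpack("!H", datagram[4:6])
    return datagram[6:6 + length]

# Two UDP sockets on loopback stand in for two machines on the Internet.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

frame = b"\xff\xff...DOOM network payload..."  # stand-in for a real IPX frame
send.sendto(encapsulate(frame), recv.getsockname())
data, _ = recv.recvfrom(2048)
print(decapsulate(data) == frame)  # True: the frame survived the round trip
send.close()
recv.close()
```

The real job was of course far messier than this, which is why TCPSetup was so fiddly: the bridge also had to intercept the game’s network calls in the first place and cope with latency that LAN-era games never expected.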



Jay Cotton discovered DOOM in the same way as many another data-processing professional: when it brought down his network. He was employed at the University of Georgia at the time, and was assigned to figure out why the university’s network kept buckling under unprecedented amounts of spurious traffic. He tracked the cause down to DOOM, the game that half the students on campus seemed to be playing more than half the time. More specifically, the problem was caused by a bug, which was patched out of existence by John Carmack as soon as he was informed. Problem solved. But Cotton stuck around to play, the warden seduced by the inmates of the asylum.

He was soon so much better at the game than anyone else on campus that he was getting a bit bored. Looking for worthier opponents, he stumbled across a program called TCPSetup, written by one Jake Page, which was designed to translate IPX packets into TCP/IP ones and vice versa on the fly, “tricking” DOOM into communicating across the vast Internet. It was cumbersome to use and extremely unreliable, but on a good day it would let you play DOOM over the Internet for brief periods of time at least, an amazing feat by any standard. Cotton would meet other players on an Internet chat channel dedicated to the game, they’d exchange IP addresses, and then they’d have at it — or try to, depending on the whims of the Technology Gods that day.

On August 22, 1994, Cotton received an email from a fellow out of the University of Illinois — yes, PLATO’s old home — whom he’d met and played in this way (and beaten, he was always careful to add). His name was Scott Coleman. “I have some ideas for hacking TCPSetup to make it a little easier. Care to do some testing later?” Coleman wrote. “I’ve already emailed Jake [Page] on this, but he hasn’t responded (might be on vacation or something). If he approves, I’m hoping some of these ideas might make it into the next release of TCPSetup. In the meantime, I want to do some experimenting to see what’s feasible.”

Jake Page never did respond to their queries, so Cotton and Coleman just kept beavering away on their own, eventually rewriting TCPSetup entirely to create iDOOM, a more reliable and far less fiddly implementation of the same concept, with support for three- or four-player deathmatches instead of just one-on-one duels. It took off like a rocket; the pair were bombarded with feature requests, most notably to make iDOOM work with other IPX-only games as well. In January of 1995, they added support for Heretic, one of the most popular of the first wave of so-called “DOOM clones.” They changed their program’s name to “iFrag” to reflect the fact that it was now about more than just DOOM.

Having come this far, Cotton and Coleman soon made the conceptual leap that would transform their software from a useful tool into, for a time, a way of life for many thousands of gamers. Why not add support for more games, they asked themselves, not in a bespoke way as they had been doing to date, but in a more sustainable one, by turning their program into a general-purpose IPX-to-TCP/IP bridge, suitable for use with the dozens of other multiplayer games out there that supported only local-area networks out of the box? And why not make their tool into a community while they were at it, by adding an integrated chat service? In addition to its other functions, the program could offer a list of “servers” hosting games, which you could join at the click of a button; no more trolling for opponents elsewhere on the Internet, then laboriously exchanging IP addresses and meeting times and hoping the other guy followed through. This would be instant-gratification online gaming. It would also provide a foretaste at least of persistent online multiplayer gaming; as people won matches, they would become known commodities in the community, setting up a meta-game, a sporting culture of heroes and zeroes where folks kept track of win-loss records and where everybody clamored to hear the results when two big wheels faced off against one another.
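The “list of servers” idea amounts to a matchmaking directory with a scoreboard bolted on. The little registry below is purely illustrative (none of these names come from Kali’s real internals), but it shows the shape of the instant-gratification model: hosts announce games, players browse and join with one call, and a win-loss ledger feeds the meta-game of heroes and zeroes.

```python
from dataclasses import dataclass, field

@dataclass
class GameServer:
    host: str                     # who is hosting the session
    game: str                     # e.g. "DOOM", "Heretic"
    players: list = field(default_factory=list)

class Lobby:
    """In-memory stand-in for a matchmaking service's directory."""
    def __init__(self):
        self.servers: list[GameServer] = []
        self.records: dict[str, list[int]] = {}  # name -> [wins, losses]

    def announce(self, host: str, game: str) -> GameServer:
        server = GameServer(host, game, [host])
        self.servers.append(server)
        return server

    def browse(self, game: str) -> list[GameServer]:
        return [s for s in self.servers if s.game == game]

    def join(self, server: GameServer, player: str) -> None:
        server.players.append(player)

    def report(self, winner: str, loser: str) -> None:
        self.records.setdefault(winner, [0, 0])[0] += 1
        self.records.setdefault(loser, [0, 0])[1] += 1

lobby = Lobby()
s = lobby.announce("jay", "DOOM")
lobby.join(s, "scott")
lobby.report("scott", "jay")   # Scott beat Jay, as he was careful to note
print(lobby.records["scott"])  # [1, 0]
```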

Cotton and Coleman renamed their software for the third time in less than nine months, calling it Kali, a name suggested by Coleman’s Indian-American girlfriend (later his wife). “The Kali avatar is usually depicted with swords in her hands and a necklace of skulls from those she has killed,” says Coleman, “which seemed appropriate for a deathmatch game.” Largely at the behest of Cotton, always the more commercially-minded of the pair, they decided to make Kali shareware, just like DOOM itself: multiplayer sessions would be limited to fifteen minutes at a time until you coughed up a $20 registration fee. Cotton went through the logistics of setting up and running a business in Georgia while Coleman did most of the coding in Illinois. (Rather astonishingly, Cotton and Coleman had still never met one another face to face in 2013, when gaming historian David L. Craddock conducted an interview with them that has been an invaluable source of quotes and information for this article.)

Kali certainly wasn’t the only solution in this space; a commercial service called DWANGO had existed since December of 1994, with the direct backing of John Carmack and John Romero, whose company id Software collected 20 percent of its revenue in return for the endorsement. But DWANGO ran over old-fashioned direct-dial-up connections rather than the Internet, meaning you had to pay long-distance charges to use it if you weren’t lucky enough to live close to one of its host computers. On top of that, it charged $9 for just five hours of access per month, with the fees escalating from there. Kali, by contrast, was available to you forever for as many hours per month as you liked after you plunked down your one-time fee of $20.

So, Kali was popular right from its first release on April 26, 1995. Yet it was still an awkward piece of software for the casual user despite the duo’s best efforts, being tied to MS-DOS, whose support for TCP/IP relied on a creaky edifice of third-party tools. The arrival of Windows 95 was a godsend for Kali, as it was for computer gaming in general, making the hobby accessible in a way it had never been before. The so-called “Kali95” was available by early 1996, and things exploded from there. Kali struck countless gamers with all the force of a revelation; who would have dreamed that it could be so easy to play against another human online? Lloyd Case, for example, wrote in Computer Gaming World magazine that using Kali for the first time was “one of the most profound gaming experiences I’ve had in a long time.” Reminiscing seventeen years later, David L. Craddock described how “using Kali for the first time was like magic. Jumping into a game and playing with other people. It blew my fourteen-year-old mind.” In late 1996, the number of registered Kali users ticked past 50,000, even as quite possibly just as many or more were playing with cracked versions that bypassed the simplistic serial-number-registration process. First-person-shooter deathmatches abounded, but you could also play real-time strategies like Command & Conquer and Warcraft, or even the Links golf simulation. Computer Gaming World gave Kali a special year-end award for “Online-Enabling Technology.”

Kali for Windows 95.

Competitors were rushing in at a breakneck pace by this time, some of them far more conventionally “professional” than Kali, whose origin story was, as we’ve seen, as underground and organic as that of DOOM itself. The most prominent of the venture-capital-funded startups were MPlayer (co-founded by Brian Moriarty of Infocom and LucasArts fame, and employing Dani Bunten Berry as a consultant during the last months of her life) and the Total Entertainment Network, better known as simply TEN. In contrast to Kali’s one-time fee, they, like DWANGO before them, relied on subscription billing: $20 per month for MPlayer, $15 per month for TEN. Despite slick advertising and countless other advantages that Kali lacked, neither would ever come close to overtaking its scruffy older rival, which had price as well as oodles of grass-roots goodwill on its side. Jay Cotton:

It was always my belief that Kali would continue to be successful as long as I never got greedy. I wanted everyone to be so happy with their purchase that they would never hesitate to recommend it to a friend. [I would] never charge more than someone would be readily willing to pay. It also became a selling point that Kali only charged a one-time fee, with free upgrades forever. People really liked this, and it prevented newcomers (TEN, Heat [a service launched in 1997 by Sega of America], MPlayer, etc.) from being able to charge enough to pay for their expensive overheads.

Kali was able to compete with TEN, MPlayer, and Heat because it already had a large established user base (more users equals more fun) and because it was much, much cheaper. These new services wanted to charge a subscription fee, but didn’t provide enough added benefit to justify the added expense.

It was a heady rush indeed, although it would also prove a short-lived one; Kali’s competitors would all be out of business within a year or so of the turn of the millennium. Kali itself stuck around after that, but as a shadow of what it had been, strictly a place for old-timers to reminisce and play the old hits. “I keep it running just out of habit,” said Jay Cotton in 2013. “I make just enough money on website ads to pay for the server.” It still exists today, presumably as a result of the same force of habit.

One half of what Kali and its peers offered was all too obviously ephemeral from the start: as the Internet went mainstream, developers inevitably began building TCP/IP support right into their games, eliminating the need for an external IPX-to-TCP/IP bridge. (For example, Quake, id Software’s much-anticipated follow-up to DOOM, did just this when it finally arrived in 1996.) But the other half of what they offered was community, which may have seemed a more durable sort of benefit. As it happened, though, one clever studio did an end-run around them here as well.



The folks at Blizzard Entertainment, the small studio and publisher that was fast coming to rival id Software for the title of the hottest name in gaming, were enthusiastic supporters of Kali in the beginning, to the point of hand-tweaking Warcraft II, their mega-hit real-time strategy, to run optimally over the service. They were rewarded by seeing it surpass even DOOM to become the most popular game there of all. But as they were polishing their new action-CRPG Diablo for release in 1996, Mike O’Brien, a Blizzard programmer, suggested that they launch their own service that would do everything Kali did in terms of community, albeit for Blizzard’s games alone. And then he additionally suggested that they make it free, gambling that knowledge of its existence would sell enough games for them at retail to offset its maintenance costs. Blizzard’s unofficial motto had long been “Let’s be awesome,” reflecting their determination to sell exactly the games that real hardcore gamers were craving, honed to a perfect finish, and to always give them that little bit extra. What better way to be awesome than by letting their customers effortlessly play and socialize online, and to do so for free?

The idea was given an extra dollop of urgency by the fact that Westwood Studios, the maker of Warcraft’s chief competitor Command & Conquer, had introduced a service called Westwood Chat that could launch people directly into a licensed version of Monopoly. (Shades of Dani Bunten Berry’s cherished childhood memories…) At the moment it supported only Monopoly, a title that appealed to a very different demographic from the hardcore crowd who favored Blizzard’s games, but who knew how long that would last? (Westwood Chat would indeed eventually evolve into Westwood Online, with full support for Command & Conquer, but only after Blizzard had rolled out their own service.)

So, when Diablo shipped in the last week of 1996, it included something called Battle.net, a one-click chat and matchmaking service and multiplayer facilitator. Battle.net made everything easier than it had ever been before. It would even automatically patch your copy of the game to the latest version when you logged on, pioneering the “software as a service” model in gaming that has become everyday life in our current age of Steam. “It was so natural,” says Blizzard executive Max Schaefer. “You didn’t think about the fact that you were playing with a dude in Korea and a guy in Israel. It’s really a remarkable thing when you think about it. How often are people casually matched up in different parts of the world?” The answer to that question, of course, was “not very often” in the context of 1997. Today, it’s as normal as computers themselves, thanks to groundbreaking initiatives like this one. Blizzard programmer Jeff Strain:

We believed that in order for it [Battle.net] to really be embraced and adopted, that accessibility had to be there. The real catch for Battle.net was that it was inside-out rather than outside-in. You jumped right into the game. You connected players from within the game experience. You did not alt-tab off into a Web browser to set up your games and have the Web browser try to pass off information or something like that. It was a service designed from Day One to be built into actual games.
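The automatic patching that Battle.net performed at login can be pictured as a simple version handshake. Everything below is hypothetical — the names, versions, and protocol are invented for illustration, not taken from Blizzard’s actual service — but the flow is the one described above: the client reports its version, and the service answers either “up to date” or with the patch to apply before play.

```python
# A hypothetical login-time patch handshake, not Blizzard's actual protocol.
LATEST_VERSION = "1.07"
PATCHES = {
    # client version -> (resulting version, patch payload)
    "1.00": ("1.07", b"<cumulative patch 1.00 -> 1.07>"),
    "1.05": ("1.07", b"<patch 1.05 -> 1.07>"),
}

def login(client_version: str):
    """Return (status, payload): play immediately, patch first, or refuse."""
    if client_version == LATEST_VERSION:
        return ("ok", None)
    if client_version in PATCHES:
        target, blob = PATCHES[client_version]
        return ("patch", (target, blob))
    return ("unsupported", None)

status, payload = login("1.05")
print(status)  # "patch": the client applies the blob, then logs in again
```

The important design point is the one Strain makes: the check happens inside the game at connection time, so the player never has to hunt down a patch by hand.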

The combination of Diablo and Battle.net brought a new, more palpable sort of persistence to online gaming. Players of DOOM or Warcraft II might become known as hotshots on services like Kali, but their reputation conferred no tangible benefit once they entered a game session. A DOOM deathmatch or a Warcraft II battle was a one-and-done event, which everyone entered on an equal footing and exited again within an hour or so, with nothing but memories and perhaps bragging rights to show for what had transpired.

Diablo, however, was different. Although less narratively and systemically ambitious than many of its recent brethren, it was nevertheless a CRPG, a genre all about building up a character over many gaming sessions. Multiplayer Diablo retained this aspect: the first time you went online, you had to pick one of the three pre-made first-level characters to play, but after that you could keep bringing the same character back to session after session, with all of the skills and loot she had already collected. Suddenly the link between the real people in the chat rooms and their avatars that lived in the game proper was much more concrete. Many found it incredibly compelling. People started to assume the roles of their characters even when they were just hanging out in the chat rooms, started in some very real sense to live the game.

But it wasn’t all sunshine and roses. Battle.net became a breeding ground of the toxic behaviors that have continued to dog online gaming to this day, a social laboratory demonstrating what happens when you take a bunch of hyper-competitive, rambunctious young men and give them carte blanche to have at it any way they wish with virtual swords and spells. The service was soon awash with “griefers,” players who would join others on their adventures, ostensibly as their allies in the dungeon, then literally stab them in the back when they least expected it, killing their characters and running off with all of their hard-won loot. The experience could be downright traumatizing for the victims, who had thought they were joining up with friendly strangers simply to have fun together in a cool new game. “Going online and getting killed was so scarring,” acknowledges David Brevik, Diablo‘s original creator. “Those players are still feeling a little bit apprehensive.”

To make matters worse, many of the griefers were also cheaters. Diablo had been born and bred a single-player game; multiplayer had been a very late addition. This had major ramifications. Diablo stored all the information about the character you played online on your local hard drive rather than the Battle.net server. Learn how to modify this file, and you could create a veritable god for yourself in about ten minutes, instead of the dozens of hours it would take playing the honest way. “Trainers” — programs that could automatically do the necessary hacking for you — spread like wildfire across the Internet. Other folks learned to hack the game’s executable files themselves. Most infamously, they figured out ways to attack other players while they were still in the game’s above-ground town, supposedly a safe space reserved for shopping and healing. Battle.net as a whole took on a siege mentality, as people who wanted to play honorably and honestly learned to lock the masses out with passwords that they exchanged only with trusted friends. This worked after a fashion, but it was also a betrayal of the core premise and advantage of Battle.net, the ability to find a quick pick-up game anytime you wanted one. Yet there was nothing Blizzard could do about it without rewriting the whole game from the ground up. They would eventually do this — but they would call the end result Diablo II. In the meanwhile, it was a case of player beware.

It’s important to understand that, for all that it resembled what would come later all too much from a sociological perspective, multiplayer Diablo was still no more persistent than Moria and Oubliette had been on the old PLATO network: each player’s character was retained from session to session, but nothing about the state of the world. Each world, or instance of the game, could contain a maximum of four human players, and disappeared as soon as the last player left it, leaving as its legacy only the experience points and items its inhabitants had collected from it while it existed. Players could and did kill the demon Diablo, the sole goal of the single-player game, one that usually required ten hours or more of questing to achieve, over and over again in the online version. In this sense, multiplayer Diablo was a completely different game from single-player Diablo, replacing the simple quest narrative of the latter with a social meta-game of character-building and player-versus-player combat.

For lots and lots of people, this was lots and lots of fun; Diablo was hugely popular despite all of the exploits it permitted — indeed, for some players perchance, because of them. It became one of the biggest computer games of the 1990s, bringing online gaming to the masses in a way that even Kali had never managed. Yet there was still a ways to go to reach total persistence, to bring a permanent virtual world to life. Next time, then, we’ll see how mainstream commercial games of the 1990s sought to achieve a degree of persistence that the first MUD could boast of already in 1979. These latest virtual worlds, however, would attempt to do so with all the bells and whistles and audiovisual niceties that a new generation of gamers raised on multimedia and 3D graphics demanded. An old dog in the CRPG space was about to learn a new trick, creating in the process a new gaming acronym that’s even more of a mouthful than POMG.





Sources: the books Stay Awhile and Listen Volumes 1 and 2 by David L. Craddock, Masters of Doom by David Kushner, and The Friendly Orange Glow by Brian Dear; Retro Gamer 43, 90, and 103; Computer Gaming World of September 1996 and May 1997; Next Generation of March 1997. Online sources include “The Story of Battle.net” by Wes Fenlon at PC Gamer, Dan Griliopoulos’s collection of interviews about Command & Conquer, Brian Moriarty’s speech honoring Dani Bunten Berry from the 1998 Game Developers Conference, and Jay Cotton’s history of Kali on the DOOM II fan site. Plus some posts on The CRPG Addict, to which I’ve linked in the article proper.

Footnotes
1 The PLATO Moria was a completely different game from the 1983 single-player roguelike that bore the same name.
2 Not the same game as the 2002 Bioware CRPG of the same name.
3 Wheeler Dealers and all of her other games that are mentioned in this article were credited to Dan Bunten, the name under which she lived until 1992.
4 Westwood Chat would indeed evolve eventually into Westwood Online, with full support for Command & Conquer, but that would happen only after Blizzard had rolled out their own service.

The Next Generation in Graphics, Part 3: Software Meets Hardware

The first finished devices to ship with the 3Dfx Voodoo chipset inside them were not add-on boards for personal computers, but rather standup arcade machines. That venerable segment of the videogames industry was enjoying its last lease on life in the mid-1990s; this was the last era when the graphics of the arcade machines were sufficiently better than those which home computers and consoles could generate as to make it worth getting up off the couch, driving into town, and dropping a quarter or two into a slot to see them. The Voodoo chips now became part and parcel of that, ironically just before they would do much to destroy the arcade market by bringing equally high-quality 3D graphics into homes. For now, though, they wowed players of arcade games like San Francisco Rush: Extreme Racing, Wayne Gretzky’s 3D Hockey, and NFL Blitz.

Still, Gary Tarolli, Scott Sellers, and Ross Smith were most excited by the potential of the add-on-board market. All too well aware of how the chicken-or-the-egg deadlock between game makers and players had doomed their earlier efforts with Pellucid and Media Vision, they launched an all-out charm offensive among game developers long before they had any actual hardware to show them. Smith goes so far as to call “connecting with the developers early on and evangelizing them” the “single most important thing we ever did” — more important, that is to say, than designing the Voodoo chips themselves, impressive as they were. Throughout 1995, somebody from 3Dfx was guaranteed to be present wherever developers got together to talk among themselves. While these evangelizers had no hardware as yet, they did have software simulations running on SGI workstations — simulations which, they promised, duplicated exactly the capabilities the real chips would have when they started arriving in quantity from Taiwan.

Our core trio realized early on that their task must involve software as much as hardware in another, more enduring sense: they had to make it as easy as possible to support the Voodoo chipset. In my previous article, I mentioned how their old employer SGI had created an open-source software library for 3D graphics, known as OpenGL. A team of programmers from 3Dfx now took this as the starting point of a slimmed-down, ultra-optimized MS-DOS library they called GLide; whereas OpenGL sported well over 300 individual function calls, GLide had fewer than 100. It was fast, it was lightweight, and it was easy to program. They had good reason to be proud of it. Its only drawback was that it would only work with the Voodoo chips — which was not necessarily a drawback at all in the eyes of its creators, given that they hoped and planned to dominate a thriving future market for hardware-accelerated 3D graphics on personal computers.

Yet that domination was by no means assured, for they were far from the only ones developing consumer-oriented 3D chipsets. One other company in particular gave every indication of being on the inside track to widespread acceptance. That company was Rendition, another small, venture-capital-funded startup that was doing all of the same things 3Dfx was doing — only Rendition had gotten started even earlier. It had actually been Rendition who announced a 3D chipset first, and they had been evangelizing it ever since every bit as tirelessly as 3Dfx.

The Voodoo chipset was technologically baroque in comparison to Rendition’s chips, which went under the name of Vérité. This meant that Voodoo should easily outperform them — eventually, once all of the logistics of East Asian chip fabricating had been dealt with and deals had been signed with board makers. In June of 1996, when the first Vérité-powered boards shipped, the Voodoo chipset quite literally didn’t exist as far as consumers were concerned. Those first Vérité boards were made by none other than Creative Labs, the 800-pound gorilla of the home-computer add-on market, maker of the ubiquitous Sound Blaster sound cards and many a “multimedia upgrade kit.” Such a partner must be counted as yet another early coup for Rendition.

The Vérité cards were followed by a flood of others whose slickly aggressive names belied their somewhat workmanlike designs: 3D Labs Permedia, S3 Virge, ATI 3D Rage, Matrox Mystique. And still Voodoo was nowhere.

What was everywhere was confusion; it was all but impossible for the poor, benighted gamer to make heads or tails of the situation. None of these chipsets were compatible with one another at the hardware level in the way that 2D graphics cards were; there were no hardware standards for 3D graphics akin to VGA, that last legacy of IBM’s era of dominance, much less the various SVGA standards defined by the Video Electronics Standards Association (VESA). Given that most action-oriented computer games still ran on MS-DOS, this was a serious problem.

For, being more of a collection of basic function calls than a proper operating system, MS-DOS was not known for its hardware agnosticism. Most of the folks making 3D chips did provide an MS-DOS software package for steering them, similar in concept to 3Dfx’s GLide, if seldom as optimized and elegant. But, just like GLide, such libraries worked only with the chipset for which they had been created. What was sorely needed was an intermediate layer of software to sit between games and the chipset-manufacturer-provided libraries, to automatically translate generic function calls into forms suitable for whatever particular chipset happened to exist on that particular computer. This alone could make it possible for one build of one game to run on multiple 3D chipsets. Yet such a level of hardware abstraction was far beyond the capabilities of bare-bones MS-DOS.

Absent a more reasonable solution, the only choice was to make separate versions of games for each of the various 3D chipsets. And so began the brief-lived, unlamented era of the 3D pack-in game. All of the 3D-hardware manufacturers courted the developers and publishers of popular software-rendered 3D games, dangling before them all sorts of enticements to create special versions that took advantage of their cards, more often than not to be included right in the box with them. Activision’s hugely successful giant-robot-fighting game MechWarrior 2 became the king of the pack-ins, with at least half a dozen different chipset-specific versions floating around, all paid for upfront by the board makers in cold, hard cash. (Whatever else can be said about him, Bobby Kotick has always been able to spot the seams in the gaming market where gold is waiting to be mined.)

It was an absurd, untenable situation; the game or games that came in the box were the only ones that the purchasers of some of the also-ran 3D contenders ever got a chance to play with their new toys. Gamers and chipset makers alike could only hope that, once Windows replaced MS-DOS as the gaming standard, their pain would go away.

In the meanwhile, the games studio that everyone with an interest in the 3D-acceleration sweepstakes was courting most of all was id Software — more specifically, id’s founder and tech guru, gaming’s anointed Master of 3D Algorithms, John Carmack. They all begged him for a version of Quake for their chipset.

And once again, it was Rendition that scored the early coup here. Carmack actually shared some of the Quake source code with them well before either the finished game or the finished Vérité chipset was available for purchase. Programmed by a pair of Rendition’s own staffers working with the advice and support of Carmack and Michael Abrash, the Vérité-rendered version of the game, commonly known as vQuake, came out very shortly after the software-rendered version. Carmack called it “the premier platform for Quake” — truly marketing copy to die for. Gamers too agreed that 3D acceleration made the original’s amazing graphics that much more amazing, while the makers of other 3D chipsets gnashed their teeth and seethed.

Quake with software rendering.

vQuake

Among these, of course, was the tardy 3Dfx. The first Voodoo cards appeared late, seemingly hopelessly so: well into the fall of 1996. Nor did they have the prestige and distribution muscle of a partner like Creative Labs behind them: the first two Voodoo boards rather came from smaller firms by the names of Diamond and Orchid. They sold for $300, putting them well up at the pricey end of the market —  and, unlike all of the competition’s cards, these required you to have another, 2D-graphics card in your computer as well. For all of these reasons, they seemed easy enough to dismiss as overpriced white elephants at first blush. But that impression lasted only until you got a look at them in action. The Voodoo cards came complete with a list of features that none of the competition could come close to matching in the aggregate: bilinear filtering, trilinear MIP-mapping, alpha blending, fog effects, accelerated light sources. If you don’t know what those terms mean, rest assured that they made games look better and play faster than anything else on the market. This was amply demonstrated by those first Voodoo boards’ pack-in title, an otherwise rather undistinguished, typical-of-its-time shooter called Hellbender. In its new incarnation, it suddenly looked stunning.

The Orchid Righteous 3D card, one of the first two to use the Voodoo chipset. (The only consumer category as fond of bro-dude phraseology like “extreme” and “righteous” as the makers of 3D cards was men’s razors.)

The battle lines were drawn between Rendition and 3Dfx. But sadly for the former, it quickly emerged that their chipset had one especially devastating weakness in comparison to its rival: its Z-buffering support left much to be desired. And what, you ask, is Z-buffering? Read on!

One of the non-obvious problems that 3D-graphics systems must solve is the need for objects in the foreground of a scene to realistically obscure those behind them. If, at the rendering stage, we were to simply draw the objects in whatever random order they came to us, we would wind up with a dog’s breakfast of overlapping shapes. We need to have a way of depth-sorting the objects if we want to end up with a coherent, correctly rendered scene.

The most straightforward way of depth-sorting is called the Painter’s Algorithm, because it duplicates the process a human artist usually goes through to paint a picture. Let’s say our artist wants to paint a still life of an apple sitting in front of a basket of other fruits. First she will paint the basket to her satisfaction, then paint the apple right over the top of it. Similarly, when we use a Painter’s Algorithm on the computer, we first sort the whole collection of objects into a hierarchy that begins with those that are farthest from our virtual camera and ends with those closest to it. Only after this has been done do we set about the task of actually drawing them to the screen, in our sorted order from the farthest away to the closest. And so we end up with a correctly rendered image.

But, as so often happens in matters like this, the most logically straightforward way is far from the most efficient way of depth-sorting a 3D scene. When the number of objects involved is few, the Painter’s Algorithm works reasonably well. When the numbers get into the hundreds or thousands, however, it results in much wasted effort, as the computer ends up drawing objects that are completely obscured by other objects in front of them — i.e., objects that don’t really need to be drawn at all. Even more importantly, the process of sorting all of the objects by depth beforehand is painfully time-consuming, a speed bump that stops the rendering process dead until it is completed. Even in the 1990s, when their technology was in a laughably primitive stage compared to today, GPUs tended to emphasize parallel processing — i.e., staying constantly busy with multiple tasks at the same time. The necessity of sorting every object in a scene by depth before even getting properly started on rendering it rather threw all that out the window.

Enter the Z-buffer. Under this approach, every object is rendered right away as soon as it comes down the pipeline, used to build the appropriate part of the raster of colored pixels that, once completed, will be sent to the monitor screen as a single frame. But there comes an additional wrinkle in the form of the Z-buffer itself: a separate, parallel raster containing not the color of each pixel but its distance from the camera. Before the GPU adds an entry to the raster of pixel colors, it compares the distance of that pixel from the camera with the number in that location in the Z-buffer. If the current distance is less than the one already found there, it knows that the pixel in question should be overwritten in the main raster and that the Z-buffer raster should be updated with that pixel’s new distance from the camera. Ditto if the Z-buffer contains a null value, indicating no object has yet been drawn at that pixel. But if the current distance is larger than the (non-null) number already found there, the GPU simply moves on without doing anything more, confident in the knowledge that what it had wanted to draw should actually be hidden by what it has already drawn.

There are plenty of occasions when the same pixel is drawn over twice — or many times — before reaching the screen even under this scheme, but it is nevertheless still vastly more efficient than the Painter’s Algorithm, because it keeps objects flowing through the pipeline steadily, with no hiccups caused by lengthy sorting operations. Z-buffering support was reportedly a last-minute addition to the Vérité chipset, and it showed. Turning depth-sorting on for 100-percent realistic rendering on these chips cut their throughput almost in half; the Voodoo chipset, by contrast, just said, “No worries!,” and kept right on trucking. This was an advantage of titanic proportions. It eventually emerged that the programmers at Rendition had been able to get Quake running acceptably on the Vérité chips only by kludging together their own depth-sorting algorithms in software. With Voodoo, programmers wouldn’t have to waste time with stuff like that.

But surprisingly, the game that blew open the doors for the Voodoo chipset wasn’t Quake or anything else from id. It was rather a little something called Tomb Raider, from the British studio Core Design, a game which used a behind-the-back third-person perspective rather than the more typical first-person view — the better to appreciate its protagonist, the buxom and acrobatic female archaeologist Lara Croft. In addition to Lara’s considerable assets, Tomb Raider attracted gamers with its unprecedentedly huge and wide-open 3D environments. (It will be the subject of my next article, for those interested in reading more about its massive commercial profile and somewhat controversial legacy.)

In November of 1996, when Tomb Raider had been out for less than a month, Core put a Voodoo patch for it up on their website. Gamers were blown away. “It’s a totally new game!” gushed one on Usenet. “It was playable but a little jerky without the patch, but silky smooth to play and beautiful to look at with the patch.” “The level of detail you get with the Voodoo chip is amazing!” enthused another. Or how about this for a ringing testimonial?

I had been playing the regular Tomb Raider on my PC for about two weeks before I got the patch, with about ten people seeing the game, and not really saying anything regarding how amazing it was. When I got the accelerated patch, after about four days, every single person who has seen the game has been in awe watching the graphics and how smooth [and] lifelike the movement is. The feel is different, you can see things much more clearly, it’s just a more enjoyable game now.

Tomb Raider became the biggest hit of the 1996 holiday season, and tens if not hundreds of thousands of Voodoo-based 3D cards joined it under Christmas trees.

Tomb Raider with software rendering.

Tomb Raider with a Voodoo card.

In January of 1997, id released GLQuake, a new version of that game that supported the Voodoo chipset. In telling contrast to the Vérité-powered vQuake, which had been coded by Rendition’s programmers, GLQuake had been taken on by John Carmack as a personal project. The proof was in the pudding; this Quake ran faster and looked better than either of the previous ones. Running on a machine with a 200 MHz Intel Pentium processor and a Voodoo card, GLQuake could manage 70 frames per second, compared to 41 frames for the software-rendered version, whilst appearing much more realistic and less pixelated.

GLQuake

One last stroke of luck put the finishing touch on 3Dfx’s destiny of world domination: the price of memory dropped precipitously, thanks to a number of new RAM-chip factories that came online all at once in East Asia. (The factories had been built largely to feed the memory demands of Windows 95, the straw that was stirring the drink of the entire computer industry.) The Voodoo chipset required 4 MB of memory to operate effectively — an appreciable quantity in those days, and a big reason why the cards that used it tended to cost almost twice as much as those based on the Vérité chips, despite lacking the added complications and expense of 2D support. But with the drop in memory prices, it suddenly became practical to sell a Voodoo card for under $200. Rendition could also lower their prices somewhat thanks to the memory windfall, of course, but at these lower price points the dollar difference wasn’t as damaging to 3Dfx. After all, the Voodoo cards were universally acknowledged to be the class of the industry. They were surely worth paying a little bit of a premium for. By the middle of 1997, the Voodoo chipset was everywhere, the Vérité one left dead at the side of the road. “If you want full support for a gamut of games, you need to get a 3Dfx card,” wrote Computer Gaming World.

These were heady times at 3Dfx, which had become almost overnight the most hallowed name in hardcore action gaming outside of id Software, all whilst making an order of magnitude more money than id, whose business model under John Carmack was hardly fine-tuned to maximize revenues. In a comment he left recently on this site, reader Captain Kal said that, when it comes to 3D gaming in the late 1990s, “one company springs to my mind without even thinking: 3Dfx. Yes, we also had 3D solutions from ATI, NVIDIA, or even S3, but Voodoo cards created the kind of dedication that I hadn’t seen since the Amiga days.” The comparison strikes me as thoroughly apropos.

3Dfx brought in a high-profile CEO named Greg Ballard, formerly of Warner Music and the videogame giant Capcom, to oversee a smashingly successful initial public offering in June of 1997. He and the three thirty-something founders were the oldest people at the company. “Most of the software engineers were [in their] early twenties, gamers through and through, loved games,” says Scott Sellers. “Would code during the day and play games at night. It was a culture of fun.” Their offices stood at the eighth hole of a golf course in Sunnyvale, California. “We’d sit out there and drink beer,” says Ross Smith. “And you’d have to dodge incoming golf balls a bit. But the culture was great.” Every time he came down for a visit, says their investing angel Gordon Campbell,

they’d show you something new, a new demo, a new mapping technique. There was always something. It was a very creative environment. The work hard and play hard thing, that to me kind of was Silicon Valley. You went out and socialized with your crew and had beer fests and did all that kind of stuff. And a friendly environment where everybody knew everybody and everybody was not in a hierarchy so much as part of the group or the team.

I think the thing that was added here was, it’s the gaming industry. And that was a whole new twist on it. I mean, if you go to the trade shows, you’d have guys that would show up at our booth with Dracula capes and pointed teeth. I mean, it was just crazy.

Gary Tarolli, Scott Sellers, and Greg Ballard do battle with a dangerous houseplant. The 1990s were wild and crazy times, kids…

While the folks at 3Dfx were working hard and playing hard, an enormously consequential advancement in the field of software was on the verge of transforming the computer-games industry. As I noted previously, in 1996 most hardcore action games were still being released for MS-DOS. In 1997, however, that changed in a big way. With the exception of only a few straggling Luddites, game developers switched over to Windows 95 en masse. Quake had been an MS-DOS game; Quake II, which would ship at the end of 1997, ran under Windows. The same held true for the original Tomb Raider and its 1997 sequel, as it did for countless others.

Gaming was made possible on Windows 95 by Microsoft’s DirectX libraries, which finally let programmers do everything in Windows that they had once done in MS-DOS, with only a slight speed penalty if any, all while giving them the welcome luxury of hardware independence. That is to say, all of the fiddly details of disparate video and sound cards and all the rest were abstracted away into Windows device drivers that communicated automatically with DirectX to do the needful. It was an enormous burden lifted off of developers’ shoulders. Ditto gamers, who no longer had to futz about for hours with cryptic “autoexec.bat” and “config.sys” files, searching out the exact combination of arcane incantations that would allow each game they bought to run optimally on their precise machine. One no longer needed to be a tech-head simply to install a game.

In its original release of September 1995, the full DirectX suite consisted of DirectDraw for 2D pixel graphics, DirectSound for sound and music, DirectInput for managing joysticks and other game-centric input devices, and DirectPlay for networked multiplayer gaming. It provided no support for doing 3D graphics. But never fear, Microsoft said: 3D support was coming. Already in February of 1995, they had purchased a British company called RenderMorphics, the creator of Reality Lab, a hardware-agnostic 3D library. As promised, Microsoft added Direct3D to the DirectX collection with the latter’s 2.0 release, in June of 1996.

But, as the noted computer scientist Andrew Tanenbaum once said, “the nice thing about standards is that you have so many to choose from.” For the next several years, Direct3D would compete with another library serving the same purpose: a complete, hardware-agnostic Windows port of SGI’s OpenGL, whose most prominent booster was no less leading a light than John Carmack. Direct3D would largely win out in the end among game developers despite Carmack’s endorsement of its rival, but we need not concern ourselves overmuch with the details of that tempest in a teacup here. Suffice to say that even the most bitter partisans on one side of the divide or the other could usually agree that both Direct3D and OpenGL were vastly preferable to the bad old days of chipset-specific 3D games.

Unfortunately for them, 3Dfx, rather feeling their oats after all of their success, made in response to these developments the first of a series of bad decisions that would cause their time at the top of the 3D-graphics heap to be a relatively short one.

Like all of the others, the Voodoo chipset could be used under Windows with either Direct3D or OpenGL. But there were some features on the Voodoo chips that the current implementations of those libraries didn’t support. 3Dfx was worried, reasonably enough on the face of it, about a “least-common-denominator effect” which would cancel out the very real advantages of their 3D chipset and make one example of the breed more or less as good as any other. However, instead of working with the folks behind Direct3D and OpenGL to get support for the Voodoo chips’ special features into those libraries, they opted to release a Windows version of GLide, and to strongly encourage game developers to keep working with it instead of either of the more hardware-agnostic alternatives. “You don’t want to just have a title 80 percent as good as it could be because your competitors are all going to be at 100 percent,” they said pointedly. They went so far as to start speaking of Voodoo-equipped machines as a whole new platform unto themselves, separate from more plebeian personal computers.

It was the talk and actions of a company that had begun to take its own press releases a bit too much to heart. But for a time 3Dfx got away with it. Developers coded for GLide in addition to or instead of Direct3D or OpenGL, because you really could do a lot more with it and because the cachet of the “certified” 3Dfx logo that using GLide allowed them to put on their boxes really was huge.

In March of 1998, the first cards with a new 3Dfx chipset, known as Voodoo2, began to appear. Voodoo2 boasted twice the overall throughput of its predecessor, and could handle a screen resolution of 800 × 600 instead of just 640 × 480; you could even join two of the new cards together for still better performance and higher resolutions. This latest chipset only seemed to cement 3Dfx’s position as the class of their field.

The bottom line reflected this. 3Dfx was, in the words of their new CEO Greg Ballard, “a rocket ship.” In 1995, they earned $4 million in revenue; in 1996, $44 million; in 1997, $210 million; and in 1998, their peak year, $450 million. And yet their laser focus on selling the Ferraris of 3D acceleration was blinding Ballard and his colleagues to the potential of 3D Toyotas, where the biggest money of all was waiting to be made.

Over the course of the second half of the 1990s, 3D GPUs went from being exotic pieces of kit known only to hardcore gamers to being just another piece of commodity hardware found in almost all computers. 3Dfx had nothing to do with this significant shift. Instead they all but ignored this so-called “OEM” (“Original Equipment Manufacturer”) side of the GPU equation: chipsets that weren’t the hottest or the sexiest on the market, but that were cheap and easy to solder right onto the motherboards of low-end and mid-range machines bearing such unsexy name plates as Compaq and Packard Bell. Ironically, Gordon Campbell had made a fortune with Chips & Technologies selling just such commodity-grade 2D graphics chipsets. But 3Dfx obstinately flew above the OEM segment, insisting on offering “premium” products only. “It doesn’t matter if 20 million people have one of our competitors’ chips,” said Scott Sellers in 1997. “How many of those people are hardcore gamers? How many of those people are buying games?” “I can guarantee that 100 percent of 3Dfx owners are buying games,” chimed in a self-satisfied-sounding Gary Tarolli.

The obvious question to ask in response was why it should matter to 3Dfx how many games — or what types of games — the users of their chips were buying, as long as they were buying gadgets that contained their chips. While 3Dfx basked in their status as the hardcore gamer’s favorite, other companies were selling many more 3D chips, admittedly at much less of a profit on a chip-per-chip basis, at the OEM end of the market. Among these was a firm known as NVIDIA, which had been founded on the back of a napkin in a Denny’s diner in 1993. NVIDIA’s first attempt to compete head to head with 3Dfx at the high end was underwhelming at best: released well after the Voodoo2 chipset, the RIVA TNT ran so hot that it required a noisy onboard cooling fan, and yet still couldn’t match the Voodoo2’s performance. By that time, however, NVIDIA was already building a lucrative business out of cheaper, simpler chips on the OEM side, even as they were gaining the wisdom they would need to mount a more credible assault on the hardcore-gamer market. In late 1998, 3Dfx finally seemed to be waking up to the fact that they would need to reach beyond the hardcore to continue their rise, when they released a new chipset called Voodoo Banshee which wasn’t quite as powerful as the Voodoo2 chips but could do conventional 2D as well as 3D graphics, meaning its owners would not be forced to buy a second video card just in order to use their computers.

But sadly, they followed this step forward with an absolutely disastrous mistake. You’ll remember that prior to this point 3Dfx had sold their chips only to other companies, who then incorporated them into add-on boards of their own design, in the same way that Intel sold microprocessors to computer makers rather than directly to consumers (aside from the build-your-own-rig hobbyists, that is). This business model had made sense for 3Dfx when they were cash-strapped and hadn’t a hope of building retail-distribution channels equal to those of the established board makers. Now, though, they were flush with cash, and enjoyed far better name recognition than the companies that made the boards which used their chips; even the likes of Creative Labs, who had long since dropped Rendition and were now selling plenty of 3Dfx boards, couldn’t touch them in terms of prestige. Why not cut out all these middlemen by manufacturing their own boards using their own chips and selling them directly to consumers with only the 3Dfx name on the box? They decided to do exactly that with their third state-of-the-art 3D chipset, the predictably named Voodoo3, which was ready in the spring of 1999.

Those famous last words apply: “It seemed like a good idea at the time.” With the benefit of hindsight, we can see all too clearly what a terrible decision it actually was. The move into the board market became, says Scott Sellers, the “anchor” that would drag down the whole company in a rather breathtakingly short span of time: “We started competing with what used to be our own customers” — i.e., the makers of all those earlier Voodoo boards. Then, too, 3Dfx found that the logistics of selling a polished consumer product at retail, from manufacturing to distribution to advertising, were much more complex than they had reckoned with.

Still, they might — just might — have been able to figure it all out and make it work, if only the Voodoo3 chipset had been a bit better. As it was, Voodoo3 was an upgrade to be sure, but not quite as much of one as everyone had been expecting. In fact, some began to point out now that even the Voodoo2 chips hadn’t been that great a leap: they too were better than their predecessors, yes, but that was more down to ever-falling memory prices and ever-improving chip-fabrication technologies than to any groundbreaking innovations in their fundamental designs. It seemed that 3Dfx had started to grow complacent some time ago.

NVIDIA saw their opening and made the most of it. They introduced a new line of their own, called the TNT2, which outdid its 3Dfx competitor in at least one key metric: it could do 24-bit color, giving it almost 17 million shades of onscreen nuance, compared to just over 65,000 in the case of Voodoo3. For the first time, 3Dfx’s chips were not the unqualified, undisputed technological leaders. To make matters worse, NVIDIA had been working closely with Microsoft in exactly the way that 3Dfx had never found it in their hearts to do, ensuring that every last feature of their chips was well-supported by the increasingly dominant Direct3D libraries.

And then, as the final nail in the coffin, there were all those third-party board makers 3Dfx had so rudely jilted when they decided to take over that side of the business themselves. These had nowhere left to go but into NVIDIA’s welcoming arms. And needless to say, these business partners spurned were highly motivated to make 3Dfx pay for their betrayal.

NVIDIA was on a roll now. They soon came out with yet another new chipset, the GeForce 256, which had a “Transform & Lighting” (T&L) engine built in, a major conceptual advance. And again, the new technology was accessible right from the start through Direct3D, thanks to NVIDIA’s tight relationship with Microsoft. Meanwhile the 3Dfx chips still needed GLide to perform at their best. With those chips’ sales now plummeting, more and more game developers decided the oddball library just wasn’t worth the trouble anymore. By the end of 1999, a 3Dfx death spiral that absolutely no one had seen coming at the start of the year was already well along. NVIDIA was rapidly sewing up both the high end and the low end, leaving 3Dfx with nothing.

In 2000, NVIDIA continued to go from strength to strength. Their biggest challenger at the hardcore-gamer level that year was not 3Dfx, but rather ATI, who arrived on the scene with a new architecture known as Radeon. 3Dfx attempted to right the ship with a two-pronged approach: a Voodoo4 chipset aimed at the long-neglected budget market, and a Voodoo5 aimed at the high end. Both had potential, but the company was badly strapped for cash by now, and couldn’t afford to give them the launch they deserved. In December of 2000, 3Dfx announced that they had agreed to sell out to NVIDIA, who thought they had spotted some bits and bobs in their more recent chips that they might be able to make use of. And that, as they say, was that.

3Dfx was a brief-burning comet by any standard, a company which did everything right up to the instant when someone somewhere flipped a switch and it suddenly started doing everything wrong instead. But whatever regrets Gary Tarolli, Scott Sellers, and Ross Smith may have about the way it all turned out, they can rest secure in the knowledge that they changed not just gaming but computing in general forever. Their vanquisher NVIDIA had revenues of almost $27 billion last year, on the strength of GPUs which are as far beyond the original Voodoo chips as an F-35 is beyond the Wright Brothers’ flier, and which are at the forefront not just of 3D graphics but of a whole new trend toward “massively parallel” computing.

And yet even today, the 3Dfx name and logo can still send a little tingle of excitement running down the spines of gamers of a certain age, just as that of the Amiga can among some just slightly older. For a brief few years there, over the course of one of the most febrile, chaotic, and yet exciting periods in all of gaming history, having a Voodoo card in your computer meant that you had the best graphics money could buy. Most of us wouldn’t want to go back to the days of needing to constantly tinker with the innards of our computers, of dropping hundreds of dollars on the latest and the greatest and hoping that publishers would still be supporting it in six months, of poring over magazines trying to make sense of long lists of arcane bullet points that seemed like fragments of a particularly esoteric PhD thesis (largely because they originally were). No, we wouldn’t want to go back; those days were kind of ridiculous. But that doesn’t mean we can’t look back and smile at the extraordinary technological progression we were privileged to witness over such a disarmingly short period of time.






(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke. Computer Gaming World of November 1995, January 1996, July 1996, November 1996, December 1996, September 1997, October 1997, November 1997, and April 1998; Next Generation of October 1997 and January 1998; Atomic of June 2003; Game Developer of December 1996/January 1997 and February/March 1997. Online sources include “3Dfx and Voodoo Graphics — The Technologies Within” at The Overclocker, former 3Dfx CEO Greg Ballard’s lecture for Stanford’s Entrepreneurial Thought Leader series, the Computer History Museum’s “oral history” with the founders of 3Dfx, Fabian Sanglard’s reconstruction of the workings of the Vérité chipset and the Voodoo 1 chipset, “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site, and “A Fallen Titan’s Final Glory” by Joel Hruska at the long-defunct Sudhian Media. Also, the Usenet discussions that followed the release of the 3Dfx patch for Tomb Raider and Nicol Bolas’s crazily detailed reply to the Stack Exchange question “Why Do Game Developers Prefer Windows?”.)

 


The Next Generation in Graphics, Part 1: Three Dimensions in Software (or, Quake and Its Discontents)

“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.” The same might be said about the mathematical algorithms we choose to prioritize — especially in these modern times, when the right set of formulas can be worth many millions of dollars, can be trade secrets as jealously guarded as the recipes for Coca-Cola or McDonald’s Special Sauce.

We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

It is a rule in almost all kinds of engineering that, the more specialized a device is, the more efficiently it can perform the tasks that lie within its limited sphere. This rule holds true as much in computing as anywhere else. So, when software proved able to stretch only so far in the face of the limited general-purpose computing power of the day, some started to build their video codecs into specialized hardware add-ons.

Just a few years later, after the zeitgeist in games had shifted, the whole process repeated itself in a different context.

By the middle years of the decade, with the limitations of working with canned video clips becoming all too plain, interactive movies were beginning to look like a severe case of the emperor’s new clothes. The games industry therefore shifted its hopeful gaze to another approach, one that would prove a much more lasting transformation in the way games were made. This 3D Revolution did have one point of similarity with the mooted and then abandoned meeting of Silicon Valley and Hollywood: it too was driven by algorithms, implemented first in software and then in hardware.

It was different, however, in that the entire industry looked to one man to lead it into its algorithmic 3D future. That man’s name was John Carmack.



Whether they happen to be pixel art hand-drawn by human artists or video footage captured by cameras, 2D graphics already exist on disk before they appear on the monitor screen. And therein lies the source of their limitations. Clever programmers can manipulate them to some extent — pixel art generally more so than digitized video — but the possibilities are bounded by the fundamentally static nature of the source material. 3D graphics, however, are literally drawn by the computer. They can go anywhere and do just about anything. For, while 2D graphics are stored as a concrete grid of pixels, 3D graphics are described using only the abstract language of mathematics — a language able to describe not just a scene but an entire world, assuming you have a powerful enough computer running a good enough algorithm.

Like so many things that get really complicated really quickly, the basic concepts of 3D graphics are disarmingly simple. The process behind them can be divided into two phases: the modeling phase and the rendering, or rasterization, phase.

It all begins with simple two-dimensional shapes of the sort we all remember from middle-school geometry, each defined as a collection of points on a plane and straight lines connecting them together. By combining and arranging these two-dimensional shapes, or surfaces, together in three-dimensional space, we can make solids — or, in the language of computerized 3D graphics, objects.

Here we see how 3D objects can be made ever more complex by building them out of ever more surfaces. The trade-off is that more complex objects require more computing power to render in a timely fashion.
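To make the modeling phase concrete, here is a minimal sketch in Python of an “object” in the sense just described: a unit cube defined as a list of points in space and a list of flat surfaces connecting them. The structure and names are purely illustrative assumptions on my part; no particular engine or library is implied.

```python
# Each vertex is an (x, y, z) coordinate in the object's own space.
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # four back corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # four front corners
]

# Each surface is a list of indices into the vertex list above.
# Six four-sided surfaces are enough to close up a cube.
cube_faces = [
    [0, 1, 2, 3],  # back
    [4, 5, 6, 7],  # front
    [0, 1, 5, 4],  # bottom
    [2, 3, 7, 6],  # top
    [1, 2, 6, 5],  # right
    [0, 3, 7, 4],  # left
]
```

A more complex object simply means more entries in these two lists, which is exactly the trade-off the caption above describes: every additional surface is more work for the renderer.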

Once we have a collection of objects, we can put them into a world space, wherever we like and at whatever angle of orientation we like. This world space is laid out as a three-dimensional grid, with its point of origin — i.e., the point where X, Y, and Z are all zero — wherever we wish it to be. In addition to our objects, we also place within it a camera — or, if you like, an observer in our world — at whatever position and angle of orientation we wish. At their simplest, 3D graphics require nothing more at the modeling phase.

We sometimes call the second phase the “rasterization” phase in reference to the orderly two-dimensional grid of pixels which make up the image seen on a monitor screen, which in computer-science parlance is known as a raster. The whole point of this rasterization phase, then, is to make our computer’s monitor a window into our imaginary world from the point of view of our imaginary camera. This entails converting said world’s three dimensions back into our two-dimensional raster of pixels, using the rules of perspective that have been understood by human artists since the Renaissance.

We can think of rasterizing as observing a scene through a window screen. Each square in the mesh is one pixel, which can be exactly one color. The whole process of 3D rendering ultimately comes down to figuring out what each of those colors should be.
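The perspective arithmetic at the heart of that process can be sketched in a few lines. The function below is a deliberately simplified illustration under assumptions of my own choosing: a camera fixed on the Z axis, no rotation, no clipping, no hidden-surface removal. It maps a point in world space onto a pixel grid, with more distant points landing closer to the center of the screen, just as the rules of perspective demand.

```python
def project(point, camera_z=-5.0, focal=2.0, width=640, height=480):
    """Project a 3D world point onto a 2D raster of pixels using the
    classic perspective divide: apparent size shrinks with distance."""
    x, y, z = point
    depth = z - camera_z             # distance from the camera along the view axis
    sx = focal * x / depth           # the perspective divide itself
    sy = focal * y / depth
    # Map from view coordinates to pixel coordinates, origin at screen center,
    # with Y flipped because rasters count rows downward from the top.
    px = int(width / 2 + sx * (width / 2))
    py = int(height / 2 - sy * (height / 2))
    return px, py
```

A point on the camera's axis lands dead center, while two points with the same X offset project to different screen positions depending on their depth; that difference is all it takes for a flat raster to read as a window into a three-dimensional world.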

The most basic of all 3D graphics are of the “wire-frame” stripe, which attempt to draw only the lines that form the edges of their surfaces. They were seen fairly frequently on microcomputers as far back as the early 1980s, the most iconic example undoubtedly being the classic 1984 space-trading game Elite.

Even in something as simple as Elite, we can begin to see how 3D graphics blur the lines between a purely presentation-level technology and a full-blown world simulation. When we have one enemy spaceship in our sights in Elite, there might be several others above, behind, or below us, which the 3D engine “knows” about but which we may not. Combined with a physics engine and some player and computer agency in the model world (taking here the form of lasers and thrusters), it provides the raw materials for a game. Small wonder that so many game developers came to see 3D graphics as such a natural fit.

But, for all that those wire frames in Elite might have had their novel charm in their day, programmers realized that the aesthetics of 3D graphics had to get better for them to become a viable proposition over the long haul. This realization touched off an algorithmic arms race that is still ongoing to this day. The obvious first step was to paint in the surfaces of each solid in single blocks of color, as the later versions of Elite that were written for 16-bit rather than 8-bit machines often did. It was an improvement in a way, but it still looked jarringly artificial, even against a spartan star field in outer space.

The next way station on the road to a semi-realistic-looking computer-generated world was light sources of varying strengths, positioned in the world with X, Y, and Z coordinates of their own, casting their illumination and shadows realistically on the objects to be found there.

A 3D scene with light sources.
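The simplest of those lighting calculations is Lambert's cosine law, understood long before computers existed: a surface's diffuse brightness is proportional to the cosine of the angle between the direction the surface faces and the direction of the incoming light. A minimal sketch, with illustrative names of my own:

```python
import math

def diffuse_brightness(normal, light_pos, point):
    """Lambert's cosine law: a surface is brightest when it faces a
    light head-on, and unlit when the light is behind it."""
    # Direction from the surface point toward the light source.
    to_light = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(c * c for c in to_light))
    to_light = [c / length for c in to_light]
    # The dot product of two unit vectors is the cosine of the angle
    # between them; clamp at zero so back-facing light contributes nothing.
    cos_angle = sum(n * c for n, c in zip(normal, to_light))
    return max(0.0, cos_angle)
```

A light directly overhead yields full brightness, one at a 45-degree angle about 71 percent, and one behind the surface none at all; a renderer runs a calculation like this for every surface (or every pixel) against every light source, which hints at why believable lighting was such a computational burden.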

The final step was to add textures, small pictures that were painted onto surfaces in place of uniform blocks of color; think of the pitted paint job of a tired X-Wing fighter or the camouflage of a Sherman tank. Textures introduced an enormous degree of complication at the rasterization stage; it wasn’t easy for 3D engines to make them look believable from a multitude of different lines of sight. That said, believable lighting was almost as complicated. Textures or lighting, or both, were already the fodder for many an academic thesis before microcomputers even existed.

A 3D scene with light sources and textures.

In the more results-focused milieu of commercial game development, where what was possible was determined largely by which types of microprocessors Intel and Motorola were selling the most of in any given year, programmers were forced to choose between compromised visions of the academic ideal. These broke down into two categories, neatly exemplified by the two most profitable computer games of the 1990s. Those games that followed in one or the other’s footsteps came to be known as the “Myst clones” and the “DOOM clones.” They could hardly have been more dissimilar in personality, yet they were both symbols of a burgeoning 3D revolution.

The Myst clones got their name from a game developed by Cyan Studios and published by Brøderbund in September of 1993, which went on to sell at least 6 million copies as a boxed retail product and quite likely millions more as a pack-in of one description or another. Myst and the many games that copied its approach tended to be, as even their most strident detractors had to admit, rather beautiful to look at. This was because they didn’t attempt to render their 3D imagery in real time; their rendering was instead done beforehand, often on beefy workstation-class machines, then captured as finished rasters of pixels on disk. Given that they worked with graphics that needed to be rendered only once and could be allowed to take hours to do so if necessary, the creators of games like this could pull out all the stops in terms of textures, lighting, and the sheer number and complexity of the 3D solids that made up their worlds.

These games’ disadvantage — a pretty darn massive one in the opinion of many players — was that their scope of interactive potential was as sharply limited in its way as that of all those interactive movies built around canned video clips that the industry was slowly giving up on. They could present their worlds to their players only as a collection of pre-rendered nodes to be jumped between, could do nothing on the fly. These limitations led most of their designers to build their gameplay around set-piece puzzles found in otherwise static, non-interactive environments, which most players soon started to find a bit boring. Although the genre had its contemplative pleasures and its dedicated aficionados who appreciated them, its appeal as anything other than a tech demo — the basis on which the original Myst was primarily sold — turned out to be the very definition of niche, as the publishers of Myst clones belatedly learned to their dismay. The harsh reality became undeniable once Riven, the much-anticipated, sumptuously beautiful sequel to Myst, under-performed expectations by “only” selling 1 million copies when it finally appeared four years after its hallowed predecessor. With the exception only of Titanic: Adventure out of Time, which owed its fluke success to a certain James Cameron movie with which it happened to share a name and a setting, no other game of this style ever cracked half a million in unit sales. The genre has been off the mainstream radar for decades now.

The DOOM clones, on the other hand, have proved a far more enduring fixture of mainstream gaming. They took their name, of course, from the landmark game of first-person carnage which the energetic young men of id Software released just a couple of months after Myst reached store shelves. John Carmack, the mastermind of the DOOM engine, managed to present a dynamic, seamless, apparently 3D world in place of the static nodes of Myst, and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal. And then, when even that wasn’t enough to get the job done, he threw out feature after feature that the academics whose papers he had pored over insisted was essential for any “real” 3D engine. His motto was, if you can’t get it done honestly, cheat, by hard-coding assumptions about the world into your algorithms and simply not letting the player — or the level designer — violate them. The end result was no Myst-like archetype of beauty in still screenshots. It pasted 2D sprites into its world whenever there wasn’t horsepower enough to do real modeling, had an understanding of light and its properties that is most kindly described as rudimentary, and couldn’t even handle sloping floors or ceilings, or walls that weren’t perfectly vertical. Heck, it didn’t even let you look up or down.

And absolutely none of that mattered. DOOM may have looked a bit crude in freeze-frame, but millions of gamers found it awe-inspiring to behold in motion. Indeed, many of them thought that Carmack’s engine, combined with John Romero and Sandy Petersen’s devious level designs, gave them the most fun they’d ever had sitting behind a computer. This was immersion of a level they’d barely imagined possible, the perfect demonstration of the real potential of 3D graphics — even if it actually was, as John Carmack would be the first to admit, only 2.5D at best. No matter; DOOM felt like real 3D, and that was enough.

A hit game will always attract imitators, and a massive hit will attract legions of them. Accordingly, the market was soon flooded with, if anything, even more DOOM clones than Myst clones, all running in similar 2.5D engines, the product of both intense reverse engineering of DOOM itself and Carmack’s habit of talking freely about how he made the magic happen to pretty much anyone who asked him, no matter how much his colleagues at id begged him not to. “Programming is not a zero-sum game,” he said. “Teaching something to a fellow programmer doesn’t take it away from you. I’m happy to share what I can because I’m in it for the love of programming.” Carmack was elevated to veritable godhood, the prophet on the 3D mountaintop passing down whatever scraps of wisdom he deigned to share with the lesser mortals below.

Seen in retrospect, the DOOM clones are, like the Myst clones, a fairly anonymous lot for the most part, doubling down on transgressive ultra-violence instead of majestic isolation, but equally failing to capture a certain ineffable something that lay beyond the nuts and bolts of their inspiration’s technology. The most important difference between the Myst and DOOM clones came down to the filthy lucre of dollar and unit sales: whereas Myst‘s coattails proved largely illusory, producing few other hits, DOOM‘s were anything but. Most people who had bought Myst, it seemed, were satisfied with that single purchase; people who bought DOOM were left wanting more first-person mayhem, even if it wasn’t quite up to the same standard.

The one DOOM clone that came closest to replacing DOOM itself in the hearts of gamers was known as Duke Nukem 3D. Perhaps that isn’t surprising, given its pedigree: it was a product of 3D Realms, the rebranded incarnation of Scott Miller’s Apogee Software. Whilst trading under the earlier name, Miller had pioneered the episodic shareware model of game distribution, a way of escaping the heavy-handed group-think of the major boxed-game publishers and their tediously high-concept interactive movies in favor of games that were exponentially cheaper to develop, but also rawer, more visceral, more in line with what the teenage and twenty-something males who still constituted the large majority of dedicated gamers were actually jonesing to play. Miller had discovered the young men of id when they were still working for a disk magazine in Shreveport, Louisiana. He had then convinced them to move to his own glossier, better-connected hometown of Dallas, Texas, and distributed their proto-DOOM shooter Wolfenstein 3D to great success. His protégés had elected to strike out on their own when the time came to release DOOM, but it’s fair to say that that game would probably never have come to exist at all if not for their shareware Svengali. And even if it had, it probably wouldn’t have made them so much money; Jay Wilbur, id’s own tireless guerilla marketer, learned most of his tricks from watching Scott Miller.

Still a man with a keen sense of what his customers really wanted, Miller re-branded Apogee as 3D Realms as a way of signifying its continuing relevance amidst the 3D revolution that took the games industry by storm after DOOM. Then he, his junior partner George Broussard, and 3D Realms’s technical mastermind Ken Silverman set about making a DOOM-like engine of their own, known as Build, which they could sell to other developers who wanted to get up and running quickly. And they used the same engine to make a game of their own, which would turn out to be the most memorable of all those built with Build.

Duke Nukem 3D‘s secret weapon was one of the few boxes in the rubric of mainstream gaming success that DOOM had failed to tick off: a memorable character to serve as both star and mascot. First conceived several years earlier for a pair of Apogee 2D platformers, Duke Nukem was Joseph Lieberman’s worst nightmare, an unrepentant gangster with equally insatiable appetites for bombs and boobies, a fellow who “thinks the Bureau of Alcohol, Tobacco, and Firearms is a convenience store,” as his advertising trumpeted. His latest game combined some of the best, tightest level design yet seen outside of DOOM with a festival of adolescent transgression, from toilet water that served as health potions to strippers who would flash their pixelated breasts at you for the price of a dollar bill. The whole thing was topped off with the truly over-the-top quips of Duke himself: “I’m gonna rip off your head and shit down your neck!”; “Your face? Your ass? What’s the difference?” It was an unbeatable combination, proof positive that Miller’s ability to read his market was undimmed. Released in January of 1996, relatively late in the day for this generation of 3D — or rather 2.5D — technology, Duke Nukem 3D became by some reports the best-selling single computer game of that entire year. It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this. And so the cycle of life continues…

In a porno shop, shooting it out with policemen who are literally pigs…

Duke Nukem 3D was a triumph of design and attitude rather than technology; in keeping with most of the DOOM clones, the Build engine’s technical innovations over its inspiration were fairly modest. John Carmack scoffed that his old friends’ creation looked like it was “held together with bubble gum.”

The game that did push the technology envelope farthest, albeit without quite managing to escape the ghetto of the DOOM clones, was also a sign in another way of how quickly DOOM was changing the industry: rather than stemming from scruffy veterans of the shareware scene like id and 3D Realms, it came from the heart of the industry’s old-money establishment — from no less respectable and well-financed an entity than George Lucas’s very own games studio.

LucasArts’s Dark Forces was a shooter set in the Star Wars universe, which disappointed everyone right out of the gate with the news that it was not going to let you fight with a light saber. The developers had taken a hard look at it, they said, but concluded in the end that it just wasn’t possible to pull off satisfactorily within the hardware specifications they had to meet. This failing was especially ironic in light of the fact that they had chosen to name their new 2.5D engine “Jedi.” But they partially atoned for it by making the Jedi engine capable of hosting unprecedentedly enormous levels — not just horizontally so, but vertically as well. Dark Forces was full of yawning drop-offs and cavernous open spaces, the likes of which you never saw in DOOM — or Duke Nukem 3D, for that matter, despite its release date of almost a year after Dark Forces. Even more importantly, Dark Forces felt like Star Wars, right from the moment that John Williams’s stirring theme song played over stage-setting text which scrolled away into the frame rather than across it. Although they weren’t allowed to make any of the movies’ characters their game’s star, LucasArts created a serviceable if slightly generic stand-in named Kyle Katarn, then sent him off on vertigo-inducing chases through huge levels stuffed to the gills with storm troopers in urgent need of remedial gunnery training, just like in the movies. Although Dark Forces toned down the violence that so many other DOOM clones were making such a selling point out of — there was no blood whatsoever on display here, just as there had not been in the movies — it compensated by giving gamers the chance to live out some of their most treasured childhood media memories, at a time when there were no new non-interactive Star Wars experiences to be had.

Unfortunately, LucasArts’s design instincts weren’t quite on a par with their presentation and technology. Dark Forces’s levels were horribly confusing, providing little guidance about what to do or where to go in spaces whose sheer three-dimensional size and scope made the two-dimensional auto-map all but useless. Almost everyone who goes back to play the game today tends to agree that it just isn’t as much fun as it ought to be. At the time, though, the Star Wars connection and its technical innovations were enough to make Dark Forces a hit almost the equal of DOOM and Duke Nukem 3D. Even John Carmack made a point of praising LucasArts for what they had managed to pull off on hardware not much better than that demanded by DOOM.

Yet everyone seemed to be waiting on Carmack himself, the industry’s anointed Master of 3D Algorithms, to initiate the real technological paradigm shift. It was obvious what that must entail: an actual, totally non-fake rendered-on-the-fly first-person 3D engine, without all of the compromises that had marked DOOM and its imitators. Such engines weren’t entirely unheard of; the Boston studio Looking Glass Technologies had been working with them for five years, employing them in such innovative, immersive games as Ultima Underworld and System Shock. But those games were qualitatively different from DOOM and its clones: slower, more complex, more cerebral. The mainstream wanted a game that played just as quickly and violently and viscerally as DOOM, but that did it in uncompromising real 3D. With computers getting faster every year and with a genius like John Carmack to hand, it ought to be possible.

And so Carmack duly went to work on just such an engine, for a game that was to be called Quake. His ever-excitable level designer John Romero, who had the looks and personality to be the rock star gaming had been craving for years, was all in with bells on. “The next game is going to blow DOOM all to hell,” he told his legions of adoring fans. “DOOM totally sucks in comparison to our next game! Quake is going to be a bigger step over DOOM than DOOM was over Wolf 3D.” Drunk on success and adulation, he said that Quake would be more than just a game: “It will be a movement.” (Whatever that meant!) The drumbeat of excitement building outside of id almost seemed to justify his hyperbole; from all the way across the Atlantic, the British magazine PC Zone declared that the upcoming Quake would be “the most important PC game ever made.” The soundtrack alone was to be a significant milestone in the incorporation of gaming into mainstream pop culture, being the work of Trent Reznor and his enormously popular industrial-rock band Nine Inch Nails. Such a collaboration would have been unthinkable just a few years earlier.

While Romero was enjoying life as gaming’s own preeminent rock star and waiting for Carmack to get far enough along on the Quake engine to give him something to do, Carmack was living like a monk, working from 4 PM to 4 AM every day. In another sign of just how quickly id had moved up in the world, he had found himself an unexpectedly well-credentialed programming partner. Michael Abrash was one of the establishment’s star programmers, who had written a ton of magazine articles and two highly regarded technical tomes on assembly-language and graphics programming and was now a part of Microsoft’s Windows NT team. When Carmack, who had cut his teeth on Abrash’s writings, invited him out of the blue to come to Dallas and do Quake with him, Bill Gates himself tried to dissuade his employee. “You might not like it down there,” he warned. Abrash was, after all, pushing 40, a staid sort with an almost academic demeanor, while id was a nest of hyperactive arrested adolescence on a permanent sugar high. But he went anyway, because he was pretty sure Carmack was a genius, and because Carmack seemed to Abrash a bit lonely, working all night every night with only his computer for company. Abrash thought he saw in Quake a first glimmer of a new form of virtual existence that companies like Meta are still chasing eagerly today: “a pretty complicated, online, networked universe,” all in glorious embodied 3D. “We do Quake, other companies do other games, people start building worlds with our format and engine and tools, and these worlds can be glommed together via doorways from one to another. To me this sounds like a recipe for the first real cyberspace, which I believe will happen the way a real space station or habitat probably would — by accretion.”

He might not have come down if he had known precisely what he was getting into; he would later compare making Quake to “being strapped onto a rocket during takeoff in the middle of a hurricane.” The project proved a tumultuous, exhausting struggle that very nearly broke id as a cohesive company, even as the money from DOOM was continuing to roll in. (id’s annual revenues reached $15.6 million in 1995, a very impressive figure for what was still a relatively tiny company, with a staff numbering only a few dozen.)

Romero envisioned a game that would be as innovative in terms of gameplay as of technology, one built largely around sword-fighting and other forms of hand-to-hand combat rather than gunplay — the same style of combat that LucasArts had decided was too impractical for Dark Forces. Some of his early descriptions make Quake sound more like a full-fledged CRPG in the offing than another straightforward action game. But it just wouldn’t come together, according to some of Romero’s colleagues because he failed to communicate his expectations to them, leaving them to suspect that even he wasn’t quite sure what he was trying to make.

Carmack finally stepped in and ordered his design team to make Quake essentially a more graphically impressive DOOM. Romero accepted the decision outwardly, but seethed inwardly at this breach of longstanding id etiquette; Carmack had always made the engines, then given Romero free rein to turn them into games. Romero largely checked out, opening a door that ambitious newcomers like American McGee and Tim Willits, who had come up through the thriving DOOM modding community, didn’t hesitate to push through. The offices of id had always been as hyper-competitive as a DOOM deathmatch, but now the atmosphere was becoming a toxic stew of buried resentments.

In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling. Almost every item not directly related to the completion of Quake was gone. The only privacy to be found was between the padded earpieces of headphones.”

Wired magazine’s August 1996 cover, showing John Carmack flanked by John Romero and Adrian Carmack, marked the end of an era. By the time it appeared on newsstands, Romero had already been fired.

Needless to say, it didn’t have the effect Carmack had hoped for. In his book-length history of id’s early life and times, journalist David Kushner paints a jittery, unnerving picture of the final months of Quake’s development: they “became a blur of silent and intense all-nighters, punctuated by the occasional crash of a keyboard against a wall. The construction crew had turned the office into a heap. The guys were taking their frustrations out by hurling computer parts into the drywall like knives.” Michael Abrash is more succinct: “A month before shipping, we were sick to death of working on Quake.” And level designer Sandy Petersen, the old man of the group, who did his best to keep his head down and stay out of the intra-office cold war, is even more so: “[Quake] was not fun to do.”

Quake was finally finished in June of 1996. It would prove a transitional game in more ways than one, caught between where games had recently been and where they were going. Still staying true to that odd spirit of hacker idealism that coexisted with his lust for ever faster Ferraris, Carmack insisted that Quake be made available as shareware, so that people could try it out before plunking down its full price. The game accordingly got a confusing, staggered release, much to the chagrin of its official publisher GT Interactive. To kick things off, the first eight levels went up online. Shortly after, there appeared in stores a $10 CD of the full game that had to be unlocked by paying id an additional $50 in order to play beyond the eighth level. Only after that, in August of 1996, did the game appear in a conventional retail edition.

Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches. Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.

So, the era of shareware as an underground pipeline of cutting-edge games came to an end with Quake. From now on, id would concentrate on boxed games selling for full price, as would all of their fellow survivors from that wild and woolly time. Gaming’s underground had become its establishment.

But its distribution model wasn’t the only sense in which Quake was as much a throwback as a step forward. It held fast as well to Carmack’s disinterest in the fictional context of id’s games, as illustrated by his famous claim that the story behind a game was no more important than the story behind a porn movie. It would be blatantly incorrect to claim that the DOOM clones which flooded the market between 1994 and 1996 represented some great explosion of the potential of interactive narrative, but they had begun to show some interest, if not precisely in elaborate set-piece storytelling in the way of adventure games, at least in the appeal of setting and texture. Dark Forces had been a pioneer in this respect, what with its between-levels cut scenes, its relatively fleshed-out main character, and most of all its environments that really did look and feel like the Star Wars films, from their brutalist architecture to John Williams’s unmistakable score. Even Duke Nukem 3D had the character of Duke, plus a distinctively seedy, neon-soaked post-apocalyptic Los Angeles for him to run around in. No one would accuse it of being an overly mature aesthetic vision, but it certainly was a unified one.

Quake, on the other hand, displayed all the signs of its fractious process of creation, of half a dozen wayward designers all pulling in different directions. From a central hub, you took “slipgates” into alternate dimensions that contained a little bit of everything on the designers’ not-overly-discriminating pop-culture radar, from zombie flicks to Dungeons & Dragons, from Jaws to H.P. Lovecraft, from The Terminator to heavy-metal music, and so wound up not making much of a distinct impression at all.

Most creative works are stamped with the mood of the people who created them, no matter how hard the project managers try to separate the art from the artists. With its color palette dominated by shocks of orange and red, DOOM had almost literally burst off the monitor screen with the edgy joie de vivre of a group of young men whom nobody had expected to amount to much of anything, who suddenly found themselves on the verge of remaking the business of games in their own unkempt image. Quake felt tired by contrast. Even its attempts to blow past the barriers of good taste seemed more obligatory than inspired; the Satanic symbolism, elaborate torture devices, severed heads, and other forms of gore were outdone by other games that were already pushing the envelope even further. This game felt almost somber — not an emotion anyone had ever before associated with id. Its levels were slower and emptier than those of DOOM, with a color palette full of mournful browns and other earth tones. Even the much-vaunted soundtrack wound up rather underwhelming. It was bereft of the melodic hooks that had made Nine Inch Nails’s previous output more palatable for radio listeners than that of most other “extreme” bands; it was more an exercise in sound design than music composition. One couldn’t help but suspect that Trent Reznor had held back all of his good material for his band’s next real record.

At its worst, Quake felt like a tech demo waiting for someone to turn it into an actual game, proving that John Carmack needed John Romero as badly as Romero needed him. But that once-fruitful relationship was never to be rehabilitated: Carmack fired Romero within days of finishing Quake. The two would never work together again.

It was truly the end of an era at id. Sandy Petersen was soon let go as well, Michael Abrash went back to the comfortable bosom of Microsoft, and Jay Wilbur quit for the best of all possible reasons: because his son asked him, “How come all the other daddies go to the baseball games and you never do?” All of them left as exhausted as Quake looks and feels.

Of course, there was nary a hint of Quake’s infelicities to be found in the press coverage that greeted its release. Even more so than most media industries, the games industry has always run on enthusiasm, and it had no desire at this particular juncture to eat its own by pointing out the flaws in the most important PC game ever made. The coverage in the magazines was marked by a cloying fan-boy fawning that was becoming ever more sadly prominent in gamer culture. “We are not even worthy to lick your toenails free of grit and fluffy sock detritus,” PC Zone wrote in an open letter to id. “We genuflect deeply and offer our bare chests for you to stab with a pair of scissors.” (Eww! A sense of proportion is as badly lacking as a sense of self-respect…) Even the usually sober-minded (by gaming-journalism standards) Computer Gaming World got a little bit creepy: “Describing Quake is like talking about sex. It must be experienced to be fully appreciated.”

Still, I would be a poor historian indeed if I called all the hyperbole of 1996 entirely unjustified. The fact is that the passage of time has tended to emphasize Quake’s weaknesses, which are mostly in the realm of design and aesthetics, whilst obscuring its contemporary strengths, which were in the realm of technology. Although not quite the first game to graft a true 3D engine onto ultra-fast-action gameplay — Interplay’s Descent beat it to the market by more than a year — it certainly did so more flexibly and credibly than anything else to date, even if Carmack still wasn’t above cheating a bit when push came to shove. (By no means is the Quake engine entirely free of tricksy 2D sprites in places where proper 3D models are just too expensive to render.)

Nevertheless, it’s difficult to fully convey today just how revolutionary the granular details of Quake seemed in 1996: the way you could look up and down and all around you with complete freedom; the way its physics engine made guns kick so that you could almost feel it in your mouse hand; the way you could dive into water and experience the visceral sensation of actually swimming; the way the wood paneling of its walls glinted realistically under the overhead lighting. Such things are commonplace today, but Quake paved the way. Most of the complaints I’ve raised about it could be mitigated by the simple expedient of not even bothering with the lackluster single-player campaign, of just playing it with your mates in deathmatch.

But even if you preferred to play alone, Quake was a sign of better things to come. “It goes beyond the game and more into the engine and the possibilities,” says Rob Smith, who watched the Quake mania come and go as the editor of PC Gamer magazine. “Quake presented options to countless designers. The game itself doesn’t make many ‘all-time’ lists, but its impact [was] as a game changer for 3D gaming, [an] engine that allowed other game makers to express themselves.” For with the industry’s Master of 3D Algorithms John Carmack having shown what was possible and talking as freely as ever about how he had achieved it, with Michael Abrash soon to write an entire book about how he and Carmack had made the magic happen, more games of this type, ready and able to harness the technology of true 3D to more exciting designs, couldn’t be far behind. “We’ve pretty much decided that our niche is in first-person futuristic action games,” said John Carmack. “We stumble when we get away from the techno stuff.” The industry was settling into a model that would remain in place for years to come: id would show what was possible with the technology of 3D graphics, then leave it to other developers to bend it in more interesting directions.

Soon enough, then, titles like Jedi Knight and Half-Life would push the genre once known as DOOM clones, now trading under the more sustainable name of the first-person shooter, in more sophisticated directions in terms of storytelling and atmosphere, without losing the essence of what made their progenitors so much fun. They will doubtless feature in future articles.

Next time, however, I want to continue to focus on the technology, as we turn to another way in which Quake was a rough draft for a better gaming future: months after its initial release, it became one of the first games to display the potential of hardware acceleration for 3D graphics, marking the beginning of a whole new segment of the microcomputer industry, one worth many billions of dollars today.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books Rocket Jump: Quake and the Golden Age of First-Person Shooters by David L. Craddock, The Graphics Programming Black Book by Michael Abrash, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Computer Graphics from Scratch: A Programmer’s Introduction by Gabriel Gambetta. PC Zone of May 1996; Computer Gaming World of July 1996 and October 1996; Wired of August 1996 and January 2010. Online sources include Michael Abrash’s “Ramblings in Realtime” for Blue’s News.

Quake is available as a digital purchase at GOG.com, as is Star Wars: Dark Forces. Duke Nukem 3D can be found on Steam.)


The Ratings Game, Part 3: Dueling Standards

When Sega, Nintendo, and the Software Publishers Association (SPA) announced just before the Senate hearing of December 9, 1993, that they had agreed in principle to create a standardized rating system for videogames, the timing alone marked it as an obvious ploy to deflect some of the heat that was bound to come their way later that day. At the same time, though, it was also more than a ploy: it was in fact the culmination of an effort that had been underway in some quarters of the industry for months already, one which had begun well before the good Senators Lieberman and Kohl discovered the horrors of videogame violence and sex. As Bill White of Sega was at pains to point out throughout the hearing, Sega had been seriously engaged with the question of a rating system for quite some time, and had managed to secure promises of support from a considerable portion of the industry. But the one entity that had absolutely rejected the notion was the very one whose buy-in was most essential for any overarching initiative of this sort: Nintendo. “Howard [Lincoln] was not going to be part of any group created by Sega,” laughs Dr. Arthur Pober, one of the experts the latter consulted.

So, Sega decided to go it alone. Again as described by Bill White at the hearing, they rolled out a thoroughly worked-out rating system for any and all games on their platforms just in time for Mortal Kombat in September of 1993. It divided games into three categories: GA for general audiences, MA-13 for those age thirteen or older, and MA-17 for those age seventeen or older. An independent board of experts was drafted to assign each new game its rating without interference from Sega’s corporate headquarters; its chairman was the aforementioned Arthur Pober, a distinguished educational psychologist with decades of research experience about the role of media in children’s lives on his CV. Under his stewardship, Mortal Kombat wound up with an MA-13 rating; Night Trap, which had already been in stores for the better part of a year by that point, was retroactively assigned a rating of MA-17.

Although one might certainly quibble that these ratings reflected the American media establishment’s terror of sex and relatively blasé attitude toward violence, Sega’s rating system bore all the outward signs of being a good-faith exercise. At the very least it was, as White repeatedly stated at the hearing, a good first step, one that was taken before any of the real controversy even began.

The second step was of course Nintendo’s grudging acquiescence to the concept of a universal rating system on the day of the hearing — a capitulation whose significance should not be underestimated in light of the company’s usual attitude toward intra-industry cooperation, which might be aptly summarized as “our way or the highway.” And the third step came less than a month later, at the 1994 Winter Consumer Electronics Show, which in accordance with long tradition took place over the first week of the new year in Las Vegas.

Anyone wandering the floor at this latest edition of CES would have seen a digital-games industry that was more fiercely competitive than ever. Sega, celebrating a recent report that gave them for the first time a slight edge over Nintendo in overall market share, had several attention-grabbing new products on offer, including the latest of their hugely popular Sonic the Hedgehog games; the Activator, an early attempt at a virtual-reality controller; the CDX, a portable CD player that could also be used as a game console; and, most presciently of all, a partnership with AT&T to bring online multiplayer gaming, including voice communication, to the Genesis. Meanwhile Nintendo gave the first hints about what would see the light of day some 30 months later as the Nintendo 64. And other companies were still trying to muscle their way into the bifurcated milieu of the living-room consoles. Among them were Atari, looking for a second shot at videogame glory with their Jaguar console; Philips, still flogging the dead horse known as CD-I; and a well-financed new company known as 3DO, with a console that bore the same name. Many traditional makers of business-oriented computers were suddenly trying to reach many of the same consumers, through products like Compaq’s new home-oriented Presario line; even stodgy old WordPerfect was introducing a line of entertainment and educational software. Little spirit of cooperation was in evidence amidst any of this. With “multimedia” the buzzword of the zeitgeist, the World Wide Web looming on the near horizon, and no clarity whatsoever about what direction digital technology in the home was likely to take over the next few years, the competition in the space was as cutthroat as it had ever been.

And yet in a far less glitzy back room of the conference center, all of these folks and more met to discuss the biggest cooperative initiative ever proposed for their industry, prompted by the ultimatum they had so recently been given by Senators Lieberman and Kohl: “Come up with a rating system for yourself, or we’ll do it for you.” The meeting was organized by the SPA, which had the virtue of not being any of the arch-rival console makers, and was thus presumably able to evince a degree of impartiality. “Companies such as 3DO, Atari, Acclaim, id Software, and Apogee already have rating systems,” said Ken Wasch, the longstanding head of the SPA, to open the proceedings. “But a proliferation of rating systems is confusing to retailers and consumers alike. Even before this became an issue in the halls of Congress or in the media, there was a growing belief that we needed a single, easily recognizable system to rate and label our products.”

But the SPA lost control of the meeting almost from the moment Wasch stepped down from the podium. The industry was extremely fortunate that neither Senator Lieberman nor Kohl took said organization up on an invitation to attend in person. One participant remembers the meeting consisting mostly of “people sitting around a table screaming and carrying on.” Cries of “Censorship!” and “Screw ’em! We’ll make the games we want to make!” dominated for long stretches. Many regarded the very notion of a rating system as an unacceptable intrusion by holier-than-thou bureaucrats; they wanted to call what they insisted was the senators’ bluff, to force them to put up actual government legislation — legislation whose constitutionality would be highly questionable — or to shut up about it.

Yet such advocates of the principle of free speech over all other concerns weren’t the sum total of the problem. Even many of those who felt that a rating system was probably necessary were thoroughly unimpressed with the hosts of the meeting, and not much disposed to fall meekly into line behind them.

The hard reality was that the SPA had never been viewed as a terribly effectual organization. Formed to be the voice of the computer-software industry in 1984 — i.e., just after the Great Videogame Crash — it had occupied itself mostly with anti-piracy campaigns and an annual awards banquet in the years since. The return of a viable console marketplace in the form of the Nintendo Entertainment System and later the Sega Genesis had left it in an odd position. Most of the publishers of computer games who began moving some or all of their output to the consoles were members of the SPA, and through them the SPA itself got pulled into this brave new world. But there were certainly grounds to question whether the organization’s remit really ought to involve the console marketplace at all. Were the likes of Acclaim, the publisher of console-based videogames like Mortal Kombat, truly in the same business as such other SPA members as the business-software titans Microsoft and WordPerfect? Nintendo had always pointedly ignored the SPA; Sega had joined as a gesture of goodwill to their outside publishers who were also members, but hardly regarded it as a major part of their corporate strategy. In addition to being judged slow, bureaucratic, and uncreative, the SPA was regarded by everyone involved with the consoles as being much more invested in computer software of all stripes than console-based videogames. And what with computer games representing in the best case fifteen percent of the overall digital-games market, that alone struck them as a disqualifier for spearheading an initiative like this one.

Electronic Arts, the largest of all of the American game publishers, was in an interesting position here. Founded in 1983 to publish games exclusively for computers, EA had begun moving onto consoles in a big way at the dawn of the 1990s, scoring hits there with such games as the first installments in the evergreen John Madden Football series. By the beginning of 1994, console games made up over two-thirds of their total business.

A senior vice president at EA by the name of Jack Heistand felt that an industry-wide rating system was “the right thing to do. I really believed in my heart that we needed to communicate to parents what the content was inside games.” Yet he also felt convinced from long experience that the SPA was hopelessly ill-equipped for a project of this magnitude, and the disheartening meeting which the SPA tried to lead at CES only cemented that belief. So, immediately after the meeting was over, he approached EA’s CEO Larry Probst with a proposal: “Let’s get all the [other] CEOs together to form an industry association. I will chair it.” Probst readily agreed.

Jack Heistand

The SPA was not included in this other, secret meeting, even though it convened at that same CES. Its participants rather included a representative from each of the five manufacturers of currently or potentially viable consoles: Sega, Nintendo, Atari, Philips, and 3DO. Rounding out their numbers were two videogame-software publishers: Acclaim Entertainment of Mortal Kombat fame and of course Electronic Arts. With none of the console makers willing to accept one of their rivals as chairman of the new steering committee, they soon voted to bestow the role upon Jack Heistand, just as he had planned it.

Sega, convinced of the worthiness of their own rating system, would have happily brought the entirety of the industry under its broad tent and been done with it, but this Nintendo’s pride would never allow. It became clear as soon as talks began, if it hadn’t been already, that whatever came next would have to be built from scratch. With Senators Lieberman and Kohl breathing down their necks, they would all have to find a way to come together, and they would have to do so quickly. The conspirators agreed upon an audacious timetable indeed: they wanted to have a rating system in place for all games that shipped after October 31, 1994 — just in time, in other words, for the next Christmas buying season. It was a tall order, but they knew that they would be able to force wayward game publishers to comply if they could only get their own house in order, thanks to the fact that all of the console makers in the group employed the walled-garden approach to software: all required licenses to publish on their platforms, meaning they could dictate which games would and would not appear there. They could thus force a rating system to become a ubiquitous reality simply by pledging not to allow any games on their consoles which didn’t include a rating.

On February 3, 1994, Senator Lieberman introduced the “Video Game Rating Act” to the United States Senate, stipulating that an “Interactive Entertainment Rating Commission” should be established, with five members appointed by President Bill Clinton himself; this temporary commission would be tasked with founding a new permanent governmental body to do what the industry had so far not been willing to do for itself. Shortly thereafter, Representative Tom Lantos, a Democrat from California, introduced parallel legislation in the House. Everyone involved made it clear, however, that they would be willing to scrap their legislation if the industry could demonstrate to their satisfaction that it was now addressing the problem itself. Lieberman, Kohl, and Lantos were all pleased when Sega dropped Night Trap from their product line as a sort of gesture of good faith; the controversial game had never been a particularly big seller, and had now become far more trouble than it was worth. (Mortal Kombat, on the other hand, was still posting sales that made it worth the controversy…)

On March 4, 1994, three representatives of the videogame industry appeared before Lieberman, Kohl, and Lantos at a hearing that was billed as a “progress report.” The only participant in the fractious hearing of three months before who returned for this one was Howard Lincoln of Nintendo, who had established something of a rapport with Senator Lieberman on that earlier occasion. Sega kept Bill White, who most definitely had not, well away, sending instead a white-haired senior vice president named Edward Volkwein. But most of the talking was done by the industry’s third representative, Jack Heistand. His overriding goal was to convince the lawmakers that he and his colleagues were moving as rapidly as possible toward a consistent industry-wide rating system, and should be allowed the balance of the year to complete their work before any legislation went forward. He accordingly emphasized over and over that ratings would appear on the boxes of all new videogames released after October 31.

The shift in tone from the one hearing to the next was striking; this one was a much more relaxed, even collegial affair than last time out. Lieberman, Kohl, and Lantos all praised the industry’s efforts so far, and kept the “think of the children!” rhetoric to a minimum in favor of asking practical questions about how the rating system would be implemented. “I don’t need to get into that argument again,” said Senator Lieberman when disagreements over the probability of a linkage between videogame violence and real-world aggression briefly threatened to ruin the good vibe in the room.

“I think you’re doing great,” said Senator Kohl at the end of the hearing. “It’s a wonderful start. I really am very pleased.” Mission accomplished: Heistand had bought himself enough time to either succeed or fail before the heavy hand of government came back on the scene.



Heistand’s remit was rapidly growing into something much more all-encompassing than just a content-rating board. To view his progress was to witness nothing less than an industry waking up to its shared potential and its shared problems. As I’ve already noted, the videogame industry as a whole had long been dissatisfied with its degree of representation in the SPA, as well as with the latter’s overall competence as a trade organization. This, it suddenly realized, was a chance to remedy that. Why not harness the spirit of cooperation that was in the air to create an alternative to the SPA that would focus solely on the needs of videogame makers? Once that was done, this new trade organization could tackle the issue of a rating system as just the first of many missions.

The International Digital Software Association (IDSA) was officially founded in April of 1994. Its initial members included Acclaim, Atari, Capcom, Crystal Dynamics, Electronic Arts, Konami, Nintendo, Philips, Sega, Sony, Viacom, and Virgin, companies whose combined sales made up no less than 60 percent of the whole videogame industry. Its founding chairman was Jack Heistand, and its first assigned task was the creation of an independent Entertainment Software Rating Board (ESRB).

Heistand managed to convince Nintendo and the others to accept the man who had chaired Sega’s ratings board for the same role in the industry-wide system. Arthur Pober had a reputation for being, as Heistand puts it, “very honorable. A man of integrity.” “Arthur was the perfect guy,” says Tom Kalinske, then the president and CEO of Sega of America. “He had good relationships inside of the education world, inside of the child-development world, and knew the proper child psychologists and sociologists. Plus, we knew he could do it — because he had already done it for us!”

Neutral parties like Pober helped to ease some of the tension that inevitably sprang up any time so many fierce competitors were in the room together. Heistand extracted a promise from everyone not to talk publicly about their work here — a necessary measure given that Howard Lincoln and Tom Kalinske normally used each and every occasion that offered itself to advance their own company and disparage their rival. (Witness Lincoln’s performance at the hearing of December 9…)

Over the course of the next several months, the board hammered out a rating system that was more granular and detailed than the one Sega had been using. It divided games into five rather than three categories: “Early Childhood” (EC) for children as young as age three; “Kids to Adults” (K-A) for anyone six years of age or older; “Teen” (T) for those thirteen or older; “Mature” (M) for those seventeen or older; and “Adults Only” (AO) for those eighteen or older. It was not a coincidence that these ratings corresponded fairly closely to the movie industry’s ratings of G, PG, PG-13, R, and NC-17. A team of graphic artists came up with easily recognizable icons for each of the categories — icons which proved so well-designed for their purpose that most of them are still used to this day.

The original slate of ESRB icons. Since 1994, remarkably few changes have been made: the “Kids to Adults” category has been renamed “Everyone,” and a sixth category of games suitable for those ten years and older, known in the rating system’s nomenclature as “Everyone 10+,” has been added.

The ESRB itself was founded as a New York-based non-profit. Each game would be submitted to it in the form of a videotape of 30 to 40 minutes in length, which had to contain the game’s most “extreme” content. The board would then assign the game to one of its teams of three reviewers, all of whom were trained and overseen by the ESRB under the close scrutiny of Arthur Pober. The reviewers were allowed to have no financial or personal ties to the videogame industry, and were hired with an eye to demographic diversity: an example which Heistand gave of an ideal panel consisted of a retired black male elementary-school principal, a 35-year-old white full-time mother of two, and a 22-year-old white male law student. A measure of checks and balances was built into the process: publishers would have the chance to appeal ratings with which they disagreed, and all rated games would have to pass a final audit a week before release to ensure that the videotape which had been submitted had been sufficiently representative of the overall experience. The ESRB aimed to begin accepting videotapes on September 1, 1994, in keeping with the promise that all games released after October 31 would have a rating on the box. Everything was coming together with impressive speed.

But as Heistand prepared to return to Washington to report all of this latest progress on July 29, 1994, there remained one part of the games industry which had not fallen into line. The SPA was not at all pleased by the creation of a competing trade association, nor by having the rug pulled out from under its own rating initiative. And the computer-game makers among its members didn’t face the same compulsion to accept the ESRB’s system, given that they published on open platforms with no gatekeepers.



The relationship between computer games and their console-based brethren had always been more complicated than outsiders such as Senators Lieberman and Kohl were wont to assume. While the degree of crossover between the two had always been considerable, computer gaming had been in many ways a distinct form of media in its own right since the late 1970s. Computer-game makers claimed that their works were more sophisticated forms of entertainment, with more variety in terms of theme and subject matter and, in many cases, more complex and cerebral forms of gameplay on offer. They had watched the resurrection of the console marketplace with as much dismay as joy, being unimpressed by what many of them saw as the dumbed-down “kiddie aesthetic” of Nintendo and the stultifying effect which the consoles’ walled gardens had on creativity; there was a real feeling that the success of Nintendo and its ilk had come at the cost of a more diverse and interesting future for interactive entertainment as a whole. Perhaps most of all, computer-game makers and their older-skewing demographic of players profoundly resented the wider culture’s view of digital games of any stripe as essentially children’s toys, to be regulated in the same way that one regulated Barbie dolls and Hot Wheels cars. These resentments had not disappeared even as many of the larger traditional computer-game publishers, such as EA, had been tempted by the booming market for console-based videogames into making products for those systems as well.

Johnny L. Wilson, the editor-in-chief of Computer Gaming World magazine, voiced in an editorial the objections which many who made or played computer games had to the ESRB:

[The ESRB rating system] has been developed by videogame manufacturers and videogame publishers without significant input by computer-based publishers. The lone exception to this rule is Electronic Arts, which publishes personal-computer titles but nets more than two-thirds of its proceeds from videogame sales. The plan advocated by this group of videogame-oriented companies calls for every game to be viewed by an independent panel prior to release. This independent panel would consist of parents, child psychologists, and educators.

How does this hurt you? This panel is not going to understand that you are a largely adult audience. They are not going to perceive that there is a marketplace of mature gamers. Everything they evaluate will be examined under the rubric, “Is it good for children?” As a result, many of the games covered in Computer Gaming World will be rated as unsuitable for children, and many retailers will refuse to handle these games because they perceive themselves as family-oriented stores and cannot sell unsuitable merchandise.

The fate of Night Trap, an unusually “computer-like” console game, struck people like Wilson as an ominous example of how rating games could lead to censoring them.

Honestly held if debatable opinions like the above, combined perhaps with pettier resentments about the stratospheric sales of console games in comparison to those that ran on computers, and about its own sidelining by the IDSA, led the SPA to reject the ESRB, and to announce the formation of its own ratings board just for computer games. It was to be called the Recreational Software Advisory Council (RSAC), and its founding president was to be Robert Roden, the general counsel and director of business affairs for the computer-game publisher LucasArts. This choice of an industry insider rather than an outside expert like Arthur Pober reflected much of what was questionable about the alternative rating initiative.

Indeed, and although much of the reasoning used to justify a competing standard was cogent enough, the RSAC’s actual plan for its rating process was remarkable mostly for how comprehensively it failed to address the senators’ most frequently stated concerns about any self-imposed rating standard. Instead of asking publishers to submit videotapes of gameplay for review by an independent panel, the RSAC merely provided them with a highly subjective questionnaire to fill out; in effect, it allowed them to “self-rate” their own games. And, in a reflection of computer-game makers’ extreme sensitivity to any insinuation that their creations were just kids’ stuff, the RSAC rejected outright any form of age-based content rating. Age-based rating systems were “patronizing,” claimed the noted RSAC booster Johnny L. Wilson, because “different people of widely disparate ages have different perceptions of what is appropriate.” In lieu of sorting ratings by age groups, the RSAC would use descriptive labels stipulating the amount and type of violence, sex, and profanity, with each being ranked on a scale from zero to four.

The movie industry’s rating system was an obvious counterexample to this idea that age-based classification must necessarily entail the infantilization of art; certainly cinema still enjoyed vastly more cultural cachet than computer games, despite its own longstanding embrace of just such a system. But the computer-game makers were, it would seem, fairly blinded by their own insecurities and resentments.

A representative of the SPA named Mark Traphagen was invited to join Jack Heistand at the hearing of July 29 in order to make the case for the RSAC’s approach to rating computer games. The hearing began in an inauspicious fashion for him. Senator Lieberman, it emerged during opening statements, had discovered id Software’s hyper-violent computer game DOOM in the interim since the previous hearing. This occasion thus came to mark the game’s coming-out party on the national stage. For the first but by no means the last time, a politician showed a clip of it in action, then lit into what the audience had just seen.

What you see there is an individual with a successive round of weapons — a handgun, machine gun, chainsaw — just continuing to attack targets. The bloodshed, the gunfire, and the increasingly realistic imagery combine to create a game that I would not want my daughter or any other child to see or to play.

What you have not seen is some of the language that is displayed onscreen when the game is about to be played. “Act like a man!” the player is told. “Slap a few shells into your shotgun and let’s kick some demonic butt! You’ll probably end up in Hell eventually. Shouldn’t you know your way around before you make an extended visit?”

Well, some may say this is funny, but I think it sends the wrong message to our kids. The game’s skill levels include “I’m Too Young To Die” and “Hurt Me Plenty.” That obviously is not the message parents want their kids to hear.

Mark Traphagen received quite a grilling from Lieberman for the patent failings of the RSAC self-rating system. He did the best he could, whilst struggling to educate his interrogators on the differences between computer and console games. He maintained that the two were in effect different industries entirely — despite the fact that many software publishers were, as we’ve seen, active in both. This was an interesting stand to take, not least in the way that it effectively ceded the ground of console-based software to the newly instituted IDSA, in the hope that the SPA could hang onto computer games.

Traphagen: Despite popular misconceptions and their admitted similarities to consumers, there are major differences between the personal-computer-software industry and the videogame industry. While personal-computer software and videogame software may be converging toward the compact disc as the preferred storage medium, those of us who develop and publish entertainment software see no signs of a convergence in either product development or marketing.

The personal-computer-software industry is primarily U.S.-based, small to medium in size, entrepreneurial, and highly innovative. Like our plan to rate software, it is based on openness. Its products run on open-platform computers and can be produced by any of thousands of companies of different sizes, without restrictive licensing agreements. There is intense competition between our industry and the videogame industry, marked by the great uncertainty about whether personal computers or some closed platform will prevail in the forthcoming “information superhighway.”

Senator Lieberman: Maybe you should define what a closed platform is in this regard.

Traphagen: A closed platform, Senator, is one in which the ability to create software that will run on that particular equipment is controlled by licensing agreements. In order to create software that will run on those platforms, one has to have the permission and consent of the equipment manufacturer.

Senator Lieberman: And give us an example of that.

Traphagen: A closed platform would be a videogame player.

Senator Lieberman: Such as a Sega or Nintendo?

Traphagen: That is right. In contrast, personal computers are an open platform in which any number of different companies can simply buy a development package at a retailer or a specialty store and then create software that will operate on the computer.

Traphagen explained the unwillingness of computer-game makers to fall under the thumb of the IDSA by comparing them to indie film studios attempting to negotiate the Hollywood machine. Yet he was able to offer little in defense of the RSAC’s chosen method of rating games. He made the dubious claim that creating a videotape for independent evaluation would be too technically burdensome on a small studio, and had even less to offer when asked what advantage accrued to not rating games by suitable age groups: “I do not believe there is an advantage, Senator. There was simply a decision that was taken that the ratings would be as informative as possible, without being judgmental.”

Some five weeks after this hearing, the RSAC would hold a press conference in Dallas, Texas, the home of id Software of DOOM fame. In fact, that game was used to illustrate how the rating system would work. Even some of the more sanguine members of the gaming press were surprised when it received a rating of just three out of four for violence. The difference maker, the RSAC representatives explained, was the fact that DOOM‘s violence wasn’t “gratuitous”; the monsters were trying to kill you, so you had no choice but to kill them. One has to presume that Senators Lieberman and Kohl would not have been impressed, and that Mark Traphagen was profoundly thankful that the press conference occurred after his appearance before them.

Even as it was, the senators’ skepticism toward the RSAC’s rating system at the hearing stood out all the more in contrast to their reception of the ESRB’s plan. The relationship between Senator Lieberman and Jack Heistand had now progressed from the cordial to the downright genial; the two men, now on a first-name basis, even made room for some banter on Heistand’s abortive youthful attempts to become a rock star. The specter of government legislation was never even raised to Heistand. It was, needless to say, a completely different atmosphere from the one of December 9. When the hearing was finished, both sides sent out press notices praising the wisdom and can-do spirit of the other in glowing terms.

But much of the rest of the games industry showed far less good grace. As the summer became the fall and it became clear that game ratings really were happening, the rants began, complete with overheated references to Fahrenheit 451 and all of the other usual suspects. Larry O’Brien, the editor of the new Game Developer magazine, made his position clear in the first line of his editorial: “Rating systems are crap.”

With the entire entertainment industry rolling over whenever Congress calls a hearing, it’s fallen on us to denounce these initiatives for what they are: cynical posturing and electioneering with no substance. Rating systems, whether for movies, television, videogames, or any other form of communication, don’t work, cost money, and impede creativity. Everyone at those hearings, politicians and witnesses alike, knows that. But there’s nothing politicians love more than “standing up for the family” and blaming America’s cultural violence on Hollywood. So the entertainment industry submissively pisses all over itself and proposes “voluntary” systems from the pathetic to the laughable.

Parents should decide. If parents don’t want their kids to play X-COM or see Terminator 2, they should say no and put up with the ensuing argument. They don’t need and shouldn’t get a rating system to supplement their authority. The government has no right to help parents say no at the video store if that governmental interference impedes your right to develop whatever content you feel appropriate.

We all have responsibilities. To create responsibly, to control the viewing and gaming habits of our own children, and to call the government’s ratings initiatives what they are: cynical, ineffective, and wrong-headed.

The libertarian-leaning Wired magazine, that voice of cyber-futurism, published a jeremiad from Rogier Van Bakel that was equally strident.

Violent games such as DOOM, Night Trap, and Mortal Kombat are corrupting the minds and morals of millions of American children. So what do you do? Easy.

You elect people like Herb Kohl and Joe Lieberman to the US Senate. You applaud them when they tell the videogame industry that it’s made up of irrepressible purveyors of gratuitous gore and nefarious nudity. You nod contentedly when the senators give the industry an ultimatum: “Either you start rating and stickering your games real soon, or we, the government, will do it for you.”

You are pleasantly surprised by the industry’s immediate white flag: a rating system that is almost as detailed as the FDA-mandated nutrition information on a can of Campbell’s. You contend that that is, in fact, a perfect analogy: all you want, as a consumer, is honest product labeling. Campbell’s equals Sega equals Kraft equals 3DO.

Finally, you shrug when someone remarks that it may not be a good idea to equate soup with freedom of speech.

All that was needed now was a good conspiracy theory. Karen Crowther, a spokesperson for makers of shareware computer games, helpfully provided one when she said that the government had gotten “hoodwinked by a bunch of foreign billion-dollar corporations (such as Sony, Nintendo, and Sega) out to crush their US competition.”

Robert Peck, a lawyer for the American Civil Liberties Union, flirted with a legal challenge:

This [rating] system is a response to the threat of Senators Lieberman and Kohl that they would enact legislation requiring labels unless the industry did something to preempt them. The game manufacturers are being required to engage in speech that they would otherwise not engage in. These ratings have the government’s fingerprints all over them.

This present labeling system isn’t going to be the end of it. I think some games are going to be negatively affected, sales-wise, and the producers of those games will probably bring a lawsuit. We will then see that this system will be invalidated.

The above bears a distinct whiff of legalistic wishful thinking; none of it came to pass.

While voices like these ranted and raved, Jack Heistand, Arthur Pober, and their associates buckled down soberly to the non-trivial task of putting a rating on all new console-based videogames that holiday season, and succeeded in doing so with an efficiency that one has to admire, regardless of one’s position on the need for such a system. Once the initial shock to the media ecosystem subsided, even some of the naysayers began to see the value in the ESRB’s work.

Under the cover of the rating system, for example, Nintendo felt able to relax many of their strict “family-friendly” content policies. The second “Mortal Monday,” heralding the release of Mortal Kombat II on home consoles, came in September of 1994, before the ESRB’s icons had even started to appear on games. Nevertheless, Nintendo improvised a stopgap badge labeling the game unsuitable for those under the age of seventeen, and felt protected enough by it to allow the full version of the coin-op original on their platform this time, complete with even more blood and gore than its predecessor. It was an early sign that content ratings might, rather than leading game makers to censor themselves, give them a feeling of carte blanche to be more extreme.

By 1997, Game Developer was no longer railing against the very idea of a rating system, but was fretting instead over whether the ESRB’s existing approach was looking hard enough at the ever more lifelike violence made possible by the latest graphics hardware. The magazine worried about unscrupulous publishers submitting videotapes that did not contain their games’ most extreme content, and the ESRB failing to catch on to this as games continued to grow larger and larger: “The ESRB system uses three (count ’em, three) ‘demographically diverse’ people to rate a game. (And I thought television’s Nielsen rating system used a small sample set.) As the stakes go up in the ratings game, the threat of a publisher abusing our rating system grows larger and larger.”

Meanwhile the RSAC strolled along in a more shambolic manner, stickering games here and there, but never getting anything close to the complete buy-in from computer-game publishers that the ESRB received from console publishers. These respective patterns held throughout the five years in which the dueling standards existed.

In the end, in other words, the computer-game people got what they had really wanted all along: a continuing lack of any concerted examination of the content of their works. Some computer games did appear with the ESRB icons on their boxes, others with the RSAC schemas, but plenty more bothered to include no content guidance at all. Satisfied for the time being with the ESRB, Senators Lieberman and Kohl didn’t call any more hearings, allowing the less satisfying RSAC system to slip under the radar along with the distinct minority of digital games to which it was applied, even as computer games like Duke Nukem 3D raised the bar for violence far beyond the standard set by DOOM. The content of computer games wouldn’t suffer serious outside scrutiny again until 1999, the year that a pair of rabid DOOM and Duke Nukem fans shot up their high school in Columbine, Colorado, killing thirteen teachers and students and injuring another twenty-four. But that is a tragedy and a controversy for a much, much later article…

(Sources: the books Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, The Ultimate History of Video Games by Steven L. Kent, and Game Over: How Nintendo Conquered the World by David Sheff; Game Developer of September 1994, December 1994, August/September 1995, September 1997, and January 1998; Computer Gaming World of June 1994, December 1994, May 1996, and July 1999; Electronic Entertainment of November 1994 and January 1995; Mac Addict of January 1996; Sierra’s newsletter InterAction of Spring 1994; Washington Post of July 29 1994; the article “Regulating Violence in Video Games: Virtually Everything” by Alex Wilcox in the Journal of the National Association of Administrative Law Judiciary, Volume 31, Issue 1; the United States Senate Committee on the Judiciary’s publication Rating Video Games: A Parent’s Guide to Games; the 1994 episode of the television show Computer Chronicles entitled “Consumer Electronics Show.” Online sources include Blake J. Harris’s “Oral History of the ESRB” at VentureBeat and C-SPAN’s coverage of the Senate hearings of December 9 1993, March 4 1994, and July 29 1994.)


The Shareware Scene, Part 5: Narratives of DOOM

Let me begin today by restating the obvious: DOOM was very, very popular, probably the most popular computer game to date.

That “probably” has to stand there because DOOM‘s unusual distribution model makes quantifying its popularity frustratingly difficult. It’s been estimated that id sold two to three million copies of the shareware episodes of the original DOOM. The boxed-retail-only DOOM II may have sold a similar quantity; it reportedly became the third best-selling boxed computer game of the 1990s. But these numbers, impressive as they are in their own right, leave out not only the ever-present reality of piracy but also the free episode of DOOM, which was packaged and distributed in such an unprecedented variety of ways all over the world. Players of it likely numbered well into the eight digits.

Yet if the precise numbers associated with the game’s success are slippery, the cultural impact of the game is easier to get a grip on. The release of DOOM marks the biggest single sea change in the history of computer gaming. It didn’t change gaming instantly, mind you — a contemporaneous observer could be forgiven for assuming it was still largely business as usual a year or even two years after DOOM‘s release — but it did change it forever.

I should admit here and now that I’m not entirely comfortable with the changes DOOM brought to gaming. In fact, for a long time, when I was asked when I thought I might bring this historical project to a conclusion, I pointed to the arrival of DOOM as perhaps the most logical place to hang it up. I trust that most of you will be pleased to hear that I no longer feel so inclined, but I do recognize that my feelings about DOOM are, at best, conflicted. I can’t help but see it as at least partially responsible for a certain coarsening in the culture of gaming that followed it. I can muster respect for the id boys’ accomplishment, but no love. Hopefully the former will be enough to give the game its due.

As the title of this article suggests, there are many possible narratives to spin about DOOM‘s impact. Sometimes the threads are contradictory — sometimes even self-contradictory. Nevertheless, let’s take this opportunity to follow a few of them to wherever they lead us as we wrap up this series on the shareware movement and the monster it spawned.


3D 4EVA!

The least controversial, most incontrovertible aspect of DOOM‘s impact is its influence on the technology of games. It was nothing less than the coming-out party for 3D graphics as a near-universal tool — this despite the fact that 3D graphics had been around in some genres, most notably vehicular simulations, almost as long as microcomputer games themselves had been around, and despite the fact that DOOM itself was far from a complete implementation of a 3D environment. (John Carmack wouldn’t get all the way to that goal until 1996’s Quake, the id boys’ anointed successor to DOOM.) As we’ve seen already, Blue Sky Productions’s Ultima Underworld actually offered the complete 3D implementation which DOOM lacked twenty months before the latter’s arrival.

But as I also noted earlier, Ultima Underworld was complex, a little esoteric, hard to come to terms with at first sight. DOOM, on the other hand, took what the id boys had started with Wolfenstein 3D, added just enough additional complexity to make it into a more satisfying game over the long haul, topped it off with superb level design that took full advantage of all the new affordances, and rammed it down the throat of the gaming mainstream with all the force of one of its coveted rocket launchers. The industry never looked back. By the end of the decade, it would be hard to find a big boxed game that didn’t use 3D graphics.

Many if not all of these applications of 3D were more than warranted: the simple fact is that 3D lets you do things in games that aren’t possible any other way. Other forms of graphics consist at bottom of fixed, discrete patterns of colored pixels. These patterns can be moved about the screen — think of the sprites in a classic 2D videogame, such as Nintendo’s Super Mario Bros. or id’s Commander Keen — but their forms cannot be altered with any great degree of flexibility. And this in turn limits the degree to which the world of a game can become an embodied, living place of emergent interactions; it does no good to simulate something in the world model if you can’t represent it on the player’s screen.

3D graphics, on the other hand, are stored not as pixels but as a sort of architectural plan of an imaginary 3D space, expressed in the language of mathematics. The computer then extrapolates from said plan to render the individual pixels on the fly in response to the player’s actions. In other words, the world and the representation of the world are stored as one in the computer’s memory. This means that things can happen there which no artist ever anticipated. 3D allowed game makers to move beyond hand-crafted fictions and set-piece puzzles to begin building virtual realities in earnest. Not for nothing did many people refer to DOOM-like games in the time before the term “first-person shooter” was invented as “virtual-reality games.”
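The distinction described above can be made concrete with a toy sketch. What follows is purely illustrative Python (DOOM itself was written in C, and its renderer was far more sophisticated than this): a model stored as geometry rather than pixels, with the on-screen coordinates derived from it on the fly for whatever viewing angle the player happens to choose.

```python
import math

# The "architectural plan": a unit cube stored as eight (x, y, z) corners,
# not as any particular pattern of pixels.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def render(vertices, yaw, distance=5.0, scale=100.0):
    """Rotate the model around the vertical axis by the viewing angle,
    then perspective-project each vertex to a 2D screen coordinate."""
    out = []
    for x, y, z in vertices:
        # Rotation around the y axis.
        xr = x * math.cos(yaw) + z * math.sin(yaw)
        zr = -x * math.sin(yaw) + z * math.cos(yaw)
        # Perspective divide: farther points land nearer the screen's center.
        f = scale / (zr + distance)
        out.append((xr * f, y * f))
    return out

# The same stored model yields different pixels for every viewpoint --
# no artist had to draw each view in advance.
front = render(cube, yaw=0.0)
turned = render(cube, yaw=math.pi / 6)
```

A 2D sprite engine would need a separate hand-drawn image for each of those viewpoints; here they both fall out of the same twenty-four numbers.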

Ironically, others showed more interest than the id boys themselves in probing the frontiers of formal possibility thus opened. While id continued to focus purely on ballistics and virtual violence in their extended series of Quake games after making DOOM, Looking Glass Technologies — the studio which had previously been known as Blue Sky Productions — worked many of the innovations of Ultima Underworld and DOOM alike into more complex virtual worlds in games like System Shock and Thief. Nevertheless, DOOM was the proof of concept, the game which demonstrated indubitably to everyone that 3D graphics could provide amazing experiences which weren’t possible any other way.

From the standpoint of the people making the games, 3D graphics had another massive advantage: they were also cheaper than the alternative. When DOOM first appeared in December of 1993, the industry was facing a budgetary catch-22 with no obvious solution. Hiring armies of artists to hand-paint every screen in a game was expensive; renting or building a sound stage, then hiring directors and camera people and dozens of actors to provide hours of full-motion-video footage was even more so. Players expected ever bigger, richer, longer games, which was intensely problematic when every single element in their worlds had to be drawn or filmed by hand. Sales were increasing at a steady clip by 1993, but they weren’t increasing quickly enough to offset the spiraling costs of production. Even major publishers like Sierra were beginning to post ugly losses on their bottom lines despite their increasing gross revenues.

3D graphics had the potential to fix all that, practically at a stroke. A 3D world is, almost by definition, a collection of interchangeable parts. Consider a simple item of furniture, like, say, a desk. In a 2D world, every desk must be laboriously hand-drawn by an artist in the same way that a traditional carpenter planes and joins the wood for such a thing in a workshop. But in a 3D world, the data constituting the basic form of “desk” can be inserted in a matter of seconds; desks can now make their way into games with the same alacrity with which they roll off of an IKEA production line. But you say that you don’t want every desk in your world to look exactly the same? Very well; it takes just a few keystrokes to change the color or wood grain or even the size of your desk, or to add or take away a drawer. We can arrive at endless individual implementations of “desk” from our Platonic ideal with surprising speed. Small wonder that, when the established industry was done marveling at DOOM‘s achievements in terms of gameplay, the thing they kept coming back to over and over was its astronomical profit margins. 3D graphics provided a way to make games make money again.
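The interchangeable-parts idea can be made concrete with a few lines of code: define the Platonic "desk" once, then stamp out variations by overriding parameters. (The class and field names below are invented for illustration.)

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Desk:
    """The Platonic ideal of 'desk': one definition, endless variants."""
    width: float = 1.2    # meters
    depth: float = 0.6
    color: str = "oak"
    drawers: int = 1

basic = Desk()                                    # the archetype
wide_pine = replace(basic, width=1.8, color="pine")
minimalist = replace(basic, drawers=0)
```

Each variant costs a few keystrokes rather than a day of an artist's time — which is exactly the economics that so impressed the industry about DOOM's margins.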

So, 3D offered worlds with vastly more emergent potential, made at a greatly reduced cost. There had to be a catch, right?

Alas, there was indeed. In many contexts, 3D graphics were right on the edge of what a typical computer could do at all in the mid-1990s, much less do with any sort of aesthetic appeal. Gamers would have to accept jagged edges, tearing textures, and a generalized visual crudity in 3D games for quite some time to come. A freeze-frame visual comparison with the games the industry had been making immediately before the 3D revolution did the new ones no favors: the games coming out of studios like Sierra and LucasArts had become genuinely beautiful by the early 1990s, thanks to those companies’ rooms full of dedicated pixel artists. It would take a considerable amount of time before 3D games would look anywhere near this nice. One can certainly argue that 3D was in some fairly fundamental sense necessary for the continuing evolution of game design, that this period of ugliness was one that the industry simply needed to plow through in order to emerge on the other side with a whole new universe of visual and emergent possibility to hand. Still, people mired in the middle of it could be forgiven for asking whether, from the evidence of screenshots alone, gaming technology wasn’t regressing rather than progressing.

But be that as it may, the 3D revolution ushered in by DOOM was here to stay. People would just have to get used to the visual crudity for the time being, and trust that eventually things would start to look better again.


Playing to the Base

There’s an eternal question in political and commercial marketing alike: do you play to the base, or do you try to reach out to a broader spectrum of people? The former may be safer, but raises the question of how many more followers you can collect from the same narrow slice of the population; the latter tempts you with the prospect of countless virgin souls waiting to embrace you, but is far riskier, with immense potential to backfire spectacularly if you don’t get the message and tone just right. This was the dichotomy confronting the boxed-games industry in the early 1990s.

By 1993, the conventional wisdom inside the industry had settled on the belief that outreach was the way forward. This dream of reaching a broader swath of people, of becoming as commonplace in living rooms as prime-time dramas and sitcoms, was inextricably bound up with the technology of CD-ROM, what with its potential to put footage of real human actors into games alongside spoken dialog and orchestral soundtracks. “What we think of today as a computer or a videogame system,” wrote Ken Williams of Sierra that year, “will someday assume a much broader role in our homes. I foresee a day when there is one home-entertainment device which combines the functions of a CD-audio player, VCR, videogame system, and computer.”

And then along came DOOM with its stereotypically adolescent-male orientation, along with sales numbers that threatened to turn the conventional wisdom about how well the industry could continue to feed off the same old demographic on its head. About six months after DOOM‘s release, when the powers that were were just beginning to grapple with its success and what it meant to each and every one of them, Alexander Antoniades, a founding editor of the new Game Developer magazine, more fully articulated the dream of outreach, as well as some of the doubts that were already beginning to plague it.

The potential of CD-ROM is tremendous because it is viewed as a superset not [a] subset of the existing computer-games industry. Everyone’s hoping that non-technical people who would never buy an Ultima, flight simulator, or DOOM will be willing to buy a CD-ROM game designed to appeal to a wider audience — changing the computer into [an] interactive VCR. If these technical neophytes’ first experience is a bad one, for $60 a disc, they’re not going to continue making the same mistake.

It will be this next year, as these consumers make their first CD-ROM purchases, that will determine the shape of the industry. If CD-ROM games are able to vary more in subject matter than traditional computer games, retain their platform independence, and capture new demographics, they will attain the status of a new platform [in themselves]. If not, they will just be another means to get product to market and will be just another label on the side of a box.

The next couple of years did indeed become a de-facto contest between these two ideas of gaming’s future. At first, the outreach camp could point to some notable successes on a scale similar to that of DOOM: The 7th Guest sold over 2 million copies, Myst sold an extraordinary 6 million or more. Yet the reality slowly dawned that most of those outside the traditional gaming demographic who purchased those games regarded them as little more than curiosities; most evidence would seem to indicate that they were never seriously played to a degree commensurate with their sales. Meanwhile the many similar titles which the industry rushed out in the wake of these success stories almost invariably became commercial disappointments.

The problems inherent in these multimedia-heavy “interactive movies” weren’t hard to see even at the time. In the same piece from which I quoted above, Alexander Antoniades noted that too many CD-ROM productions were “the equivalent of Pong games with captured video images of professional tennis players and CD-quality sounds of bouncing balls.” For various reasons — the limitations inherent in mixing and matching canned video clips; the core limitations of the software and hardware technology; perhaps simply a failure of imagination — the makers of too many of these extravaganzas never devised new modes of gameplay to complement their new modes of presentation. Instead they seemed to believe that the latter alone ought to be enough. Too often, these games fell back on rote set-piece puzzle-solving — an inherently niche activity even if done more creatively than we often saw in these games — for lack of any better ideas for making the “interactive” in interactive movies a reality. The proverbial everyday person firing up the computer-cum-stereo-cum-VCR at the end of a long workday wasn’t going to do so in order to watch a badly acted movie gated with frustrating logic puzzles.

While the multimedia came first with these productions, games of the DOOM school flipped that script. As the years went on and they started to ship on the now-ubiquitous medium of CD-ROM, they picked up cut scenes and spoken dialog of their own, but they never suffered the identity crisis of their rivals; they knew that they were games first and foremost, and knew exactly what forms their interactivity should take. And most importantly from the point of view of the industry, these games sold. Post-1996 or so, high-concept interactive movies were out, as was most serious talk of outreach to new demographics. Visceral 3D action games were in, along with a doubling-down on the base.

To blame the industry’s retrenchment — its return to the demographically tried-and-true — entirely on DOOM is a stretch. Yet DOOM was a hugely important factor, standing as it did as a living proof of just how well the traditional core values of gaming could pay. The popularity of DOOM, combined with the exercise in diminishing commercial returns that interactive movies became, did much to push the industry down the path of retrenchment.

The minor tragedy in all this was not so much the end of interactive movies, given what intensely problematic endeavors they so clearly were, but rather that the latest games’ vision proved to be so circumscribed in terms of fiction, theme, and mechanics alike. By late in the decade, they had brought the boxed industry to a place of dismaying homogeneity; the values of the id boys had become the values of computer gaming writ large. Game fictions almost universally drew from the same shallow well of sci-fi action flicks and Dungeons & Dragons, with perhaps an occasional detour into military simulation. A shocking proportion of the new games being released fell into one of just two narrow gameplay genres: the first-person shooter and the real-time-strategy game.

These fictional and ludic genres are not, I hasten to note, illegitimate in themselves; I’ve enjoyed plenty of games in all of them. But one craves a little diversity, a more vibrant set of possibilities to choose from when wandering into one’s local software store. It would take a new outsider movement coupled with the rise of convenient digital distribution in the new millennium to finally make good on that early-1990s dream of making games for everyone. (How fitting that shaking loose the stranglehold of DOOM‘s progeny would require the exploitation of another alternative form of distribution, just as the id boys exploited the shareware model…)


The Murder Simulator

DOOM was mentioned occasionally in a vaguely disapproving way by mainstream media outlets immediately after its release, but largely escaped the ire of the politicians who were going after games like Night Trap and Mortal Kombat at the time; this was probably because its status as a computer rather than a console game led to its being played in bedrooms rather than living rooms, free from the prying eyes of concerned adults. It didn’t become the subject of a full-blown moral panic until weirdly late in its history.

On April 20, 1999, Eric Harris and Dylan Klebold, a pair of students at Columbine High School in the Colorado town of the same name, walked into their school armed to the teeth with knives, explosives, and semi-automatic weapons. They proceeded to kill 13 students and teachers and to injure 24 more before turning their guns on themselves. The day after the massacre, an Internet gaming news site called Blue’s News posted a message that “several readers have written in reporting having seen televised news reports showing the DOOM logo on something visible through clear bags containing materials said to be related to the suspected shooters. There is no word yet of what connection anyone is drawing between these materials and this case.” The word would come soon enough.

It turned out that Harris and Klebold had been great devotees of the game, not only as players but as creators of their own levels. “It’s going to be just like DOOM,” wrote Harris in his diary just before the massacre. “I must not be sidetracked by my feelings of sympathy. I will force myself to believe that everyone is just a monster from DOOM.” He chose his prize shotgun because it looked like one found in the game. On the surveillance tapes that recorded the horror in real time, the weapons-festooned boys pranced and preened as if they were consciously imitating the game they loved so much. Weapons experts noted that they seemed to have adopted their approach to shooting from what worked in DOOM. (In this case, of course, that was a wonderful thing, in that it kept them from killing anywhere close to the number of people they might otherwise have with the armaments at their disposal.)

There followed a storm of controversy over videogame content, with DOOM and the genre it had spawned squarely at its center. Journalists turned their attention to the FPS subculture for the first time, and discovered that more recent games like Duke Nukem 3D — the Columbine shooters’ other favorite game, a creation of Scott Miller’s old Apogee Software, now trading under the name of 3D Realms — made DOOM‘s blood and gore look downright tame. Senator Joseph Lieberman, a longstanding critic of videogames, beat the drum for legislation, and the name of DOOM even crossed the lips of President Bill Clinton. “My hope,” he said, “[is] to persuade the nation’s top cultural producers to call a cease-fire in the virtual arms race, to stop the release of ultra-violent videogames such as DOOM. Several of the school gunmen murderously mimicked [it] down to the choice of weapons and apparel.”

When one digs into the subject, one can’t help but note how the early life stories of John Carmack and John Romero bear some eerie similarities to those of Eric Harris and Dylan Klebold. The two Johns as well were angry kids who found it hard to fit in with their peers, who engaged in petty crime and found solace in action movies, heavy-metal music, and computer games. Indeed, a big part of the appeal of DOOM for its most committed fans was the sense that it had been made by people just like them, people who were coming from the same place. What caused Harris and Klebold, alone among the millions like them, to exorcise their anger and aggression in such a horrifying way? It’s a question that we can’t begin to answer. We can only say that, unfair though it may be, perceptions of DOOM outside the insular subculture of FPS fandom must always bear the taint of its connection with a mass murder.

And yet the public controversy over DOOM and its progeny resulted in little concrete change in the end. Lieberman’s proposed legislation died on the vine after the industry fecklessly promised to do a better job with content warnings, and the newspaper pundits moved on to other outrages. Forget talk of free speech; there was too much money in these types of games for them to go away. Just ten months after Columbine, Activision released Soldier of Fortune, which made a selling point of dismembered bodies and screams of pain so realistic that one reviewer claimed they left his dog a nervous wreck cowering in a corner. After the requisite wave of condemnation, the mainstream media forgot about it too.

Violence in games didn’t begin with DOOM or even Wolfenstein 3D, but it was certainly amplified and glorified by those games and the subculture they wrought. While a player may very well run up a huge body count in, say, a classic arcade game or an old-school CRPG, the violence there is so abstract as to be little more than a game mechanic. But in DOOM — and even more so in the games that followed it — experiential violence is a core part of the appeal. One revels in killing not just because of the new high score or character experience level one gets out of it, but for the thrill of killing itself, as depicted in such a visceral, embodied way. This does strike me as a fundamental qualitative shift from most of the games that came before.

Yet it’s very difficult to have a reasonable discussion on said violence’s implications, simply because opinions have become so hardened on the subject. To express concern on any level is to invite association with the likes of Joe Lieberman, a thoroughly conventional thinker with a knack for embracing the most flawed of all conventional wisdoms on every single issue, who apparently was never fortunate enough to have a social-science professor drill the fact that correlation isn’t causation into his head.

Make no mistake: the gamers who scoff at the politicians’ hand-wringing have a point. Harris and Klebold probably were drawn to games like DOOM and Duke Nukem 3D because they already had violent fantasies, rather than having said fantasies inculcated by the games they happened to play. In a best-case scenario, we can even imagine other potential mass murderers channeling their aggression into a game rather than taking it out on real people, in much the same way that easy access to pornography may be a cause of the dramatic decline in incidents of rape and sexual violence in most Western countries since the rise of the World Wide Web.

That said, I for one am also willing to entertain the notion that spending hours every day killing things in the most brutal, visceral manner imaginable inside an embodied virtual space may have some negative effects on some personalities. Something John Carmack said about the subject in a fairly recent interview strikes me as alarmingly fallacious:

In later games and later times, when games [came complete with] moral ambiguity or actual negativity about what you’re doing, I always felt good about the decision that in DOOM, you’re fighting demons. There’s no gray area here. It is black and white. You’re the good guys, they’re the bad guys, and everything that you’re doing to them is fully deserved.

In reality, though, the danger which games like DOOM may present, especially in the polarized societies in which many of us live in these troubled times, is not that they ask us to revel in our moral ambiguity, much less our pure evil. It’s rather the way they’re able to convince us that the Others whom we’re killing “fully deserve” the violence we visit upon them because “they’re the bad guys.” (Recall those chilling words from Eric Harris’s diary, about convincing himself that his teachers and classmates are really just monsters…) This tendency is arguably less insidious when the bad guys in question are ridiculously over-the-top demons from Hell than when they’re soldiers who just happen to be wearing a different uniform, one which they may quite possibly have had no other choice but to don. Nevertheless, DOOM started something which games like the interminable Call of Duty franchise were only too happy to run with.

I personally would like to see less violence rather than more in games, all things being equal, and would like to see more games about building things up rather than tearing them down, fun though the latter can be on occasion. It strikes me that the disturbing association of some strands of gamer culture with some of the more hateful political movements of our times may not be entirely accidental, and that some of the root causes may stretch all the way back to DOOM — which is not to say that it’s wrong for any given individual to play DOOM or even Call of Duty. It’s only to say that the likes of GamerGate may be yet another weirdly attenuated part of DOOM‘s endlessly multi-faceted legacy.


Creative Destruction?

In other ways, though, the DOOM community actually was — and is — a community of creation rather than destruction. (I did say these narratives of DOOM wouldn’t be cut-and-dried, didn’t I?)

John Carmack, by his own account alone among the id boys, was inspired rather than dismayed by the modding scene that sprang up around Wolfenstein 3D — so much so that, rather than taking steps to make such things more difficult in DOOM, he did just the opposite: he separated the level data from the game engine much more completely than had been the case with Wolfenstein 3D, thus making it possible to distribute new DOOM levels completely legally, and released documentation of the WAD format in which the levels were stored on the same day that id released the game itself.
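The WAD format Carmack documented is famously simple — a twelve-byte header pointing at a directory of sixteen-byte "lump" entries — which is a large part of why the modding scene could flourish. A minimal sketch of a reader, following the field layout given in the public WAD specs:

```python
import struct

def read_wad_directory(data: bytes):
    """Parse a DOOM WAD held in memory: a 12-byte header
    (magic, lump count, directory offset), then a directory
    of 16-byte entries (offset, size, 8-byte padded name)."""
    magic, numlumps, diroffset = struct.unpack_from("<4sII", data, 0)
    if magic not in (b"IWAD", b"PWAD"):
        raise ValueError("not a WAD file")
    lumps = []
    for i in range(numlumps):
        filepos, size, raw_name = struct.unpack_from(
            "<II8s", data, diroffset + 16 * i)
        lumps.append((raw_name.rstrip(b"\0").decode("ascii"), filepos, size))
    return magic.decode(), lumps
```

A fan-made level was just a "PWAD" (patch WAD) whose lumps overrode those of the original "IWAD" — which is why new levels could be distributed completely legally, without a byte of id's copyrighted data.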

The origins of his generosity hearken back once again to this idea that the people who made DOOM weren’t so very different from the people who played it. One of Carmack’s formative experiences as a hacker was his exploration of Ultima II on his first Apple II. Carmack:

To go ahead and hack things to turn trees into chests or modify my gold or whatever… I loved that. The ability to go several steps further and release actual source code, make it easy to modify things, to let future generations get what I wished I had had a decade earlier—I think that’s been a really good thing. To this day I run into people all the time that say, whether it was Doom, or maybe even more so Quake later on, that that openness and that ability to get into the guts of things was what got them into the industry or into technology. A lot of people who are really significant people in significant places still have good things to say about that.

Carmack speaks of “a decade-long fight inside id about how open we should be with the technology and the modifiability.” The others grew more skeptical than ever of his commitment to what he called “open gaming” when some companies started scooping up some of the thousands of fan-made levels, plopping them onto CDs, and selling them without paying a cent to id. But in the long run, the commitment to openness kept DOOM alive; rather than a mere computer game, it became a veritable cottage industry of its own. Plenty of people played literally nothing else for months or even years at a stretch.

The debate inside id raged more than ever in 1997, when Carmack insisted on releasing the complete original source code to DOOM. (He had done the same for the Wolfenstein 3D code two years before.) As he alludes above, the DOOM code became a touchstone for an up-and-coming generation of game programmers, even as many future game designers cut their teeth and made early names for themselves by creating custom levels to run within the engine. And, inevitably, the release of the source code led to a flurry of ports to every imaginable platform: “Everything that has a 32-bit [or better] processor has had DOOM run on it,” says Carmack with justifiable pride. Today you can play DOOM on digital cameras, printers, and even thermostats, and do so if you like in hobbyist-created levels that coax the engine into entirely new modes of play that the id boys never even began to conceive of.

This narrative of DOOM bears a distinct similarity to that of another community of creation with which I happen to be much better acquainted: the post-Infocom interactive-fiction community that arose at about the same time that the original DOOM was taking the world by storm. Like the DOOM people, the interactive-fiction people built upon a beloved company’s well-nigh timeless software engineering; like them, they eventually stretched that engine in all sorts of unanticipated directions, and are still doing it to this day. A comparison between the cerebral text adventures of Infocom and the frenetic shooters of id might seem incongruous at first blush, but there you are. Long may their separate communities of love and craft continue to thrive.



As you have doubtless gathered by now, the legacy of DOOM is a complicated one that’s almost uniquely resistant to simplification. Every statement has a qualifier; every yang has a yin. This can be frustrating for a writer; it’s in the nature of us as a breed to want straightforward causes and effects. The desire for them may lead one to make trends that were obscure at best to the people living through them seem more obvious than they really were. Therefore allow me to reiterate that the new gaming order which DOOM created wouldn’t become undeniable to everyone until fully three or four years after its release. A reader recently emailed me the argument that 1996 was actually the best year ever for adventure games, the genre which, according to some oversimplified histories, DOOM and games like it killed at a stroke — and darned if he didn’t make a pretty good case for it.

So, while I’m afraid I’ll never be much of a gibber and/or fragger, we should continue to have much to talk about. Onward, then, into the new order. I dare say that from the perspective of the boots on the ground it will continue to look much like the old one for quite some time to come. And after that? Well, we’ll take it as it comes. I won’t be mooting any more stopping dates.

(Sources: the books The Complete Wargames Handbook (2000 edition) by James F. Dunnigan, Masters of Doom by David Kushner, Game Engine Black Book: DOOM by Fabien Sanglard, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Columbine by Dave Cullen; Retro Gamer 75; Game Developer of June 1994; Chris Kohler’s interview with John Carmack for Wired. And a special thanks to Alex Sarosi, a.k.a. Lt. Nitpicker, for his valuable email correspondence on the legacy of DOOM, as well as to Josh Martin for pointing out in a timely comment to the last article the delightful fact that DOOM can now be run on a thermostat.)

 
