The Next Generation in Graphics, Part 3: Software Meets Hardware

The first finished devices to ship with the 3Dfx Voodoo chipset inside them were not add-on boards for personal computers, but rather standup arcade machines. That venerable segment of the videogames industry was enjoying its last lease on life in the mid-1990s; this was the last era when the graphics of the arcade machines were sufficiently better than those which home computers and consoles could generate as to make it worth getting up off the couch, driving into town, and dropping a quarter or two into a slot to see them. The Voodoo chips now became part and parcel of that, ironically just before they would do much to destroy the arcade market by bringing equally high-quality 3D graphics into homes. For now, though, they wowed players of arcade games like San Francisco Rush: Extreme Racing, Wayne Gretzky’s 3D Hockey, and NFL Blitz.

Still, Gary Tarolli, Scott Sellers, and Ross Smith were most excited by the potential of the add-on-board market. All too well aware of how the chicken-or-the-egg deadlock between game makers and players had doomed their earlier efforts with Pellucid and Media Vision, they launched an all-out charm offensive among game developers long before they had any actual hardware to show them. Smith goes so far as to call “connecting with the developers early on and evangelizing them” the “single most important thing we ever did” — more important, that is to say, than designing the Voodoo chips themselves, impressive as they were. Throughout 1995, somebody from 3Dfx was guaranteed to be present wherever developers got together to talk among themselves. While these evangelizers had no hardware as yet, they did have software simulations running on SGI workstations — simulations which, they promised, duplicated exactly the capabilities the real chips would have when they started arriving in quantity from Taiwan.

Our core trio realized early on that their task must involve software as much as hardware in another, more enduring sense: they had to make it as easy as possible to support the Voodoo chipset. In my previous article, I mentioned how their old employer SGI had created an open standard for 3D graphics, known as OpenGL. A team of programmers from 3Dfx now took this as the starting point of a slimmed-down, ultra-optimized MS-DOS library they called GLide; whereas OpenGL sported well over 300 individual function calls, GLide had fewer than 100. It was fast, it was lightweight, and it was easy to program. They had good reason to be proud of it. Its only drawback was that it would only work with the Voodoo chips — which was not necessarily a drawback at all in the eyes of its creators, given that they hoped and planned to dominate a thriving future market for hardware-accelerated 3D graphics on personal computers.

Yet that domination was by no means assured, for they were far from the only ones developing consumer-oriented 3D chipsets. One other company in particular gave every indication of being on the inside track to widespread acceptance. That company was Rendition, another small, venture-capital-funded startup that was doing all of the same things 3Dfx was doing — only Rendition had gotten started even earlier. It had actually been Rendition who announced a 3D chipset first, and they had been evangelizing it ever since every bit as tirelessly as 3Dfx.

The Voodoo chipset was more technologically sophisticated than Rendition’s chips, which went under the name of Vérité. This meant that Voodoo should easily outperform them — eventually, once all of the logistics of East Asian chip fabrication had been dealt with and deals had been signed with board makers. In June of 1996, when the first Vérité-powered boards shipped, the Voodoo chipset quite literally didn’t exist as far as consumers were concerned. Those first Vérité boards were made by none other than Creative Labs, the 800-pound gorilla of the home-computer add-on market, maker of the ubiquitous Sound Blaster sound cards and many a “multimedia upgrade kit.” Such a partner must be counted as yet another early coup for Rendition.

The Vérité cards were followed by a flood of others whose slickly aggressive names belied their somewhat workmanlike designs: 3D Labs Permedia, S3 Virge, ATI 3D Rage, Matrox Mystique. And still Voodoo was nowhere.

What was everywhere was confusion; it was all but impossible for the poor, benighted gamer to make heads or tails of the situation. None of these chipsets were compatible with one another at the hardware level in the way that 2D graphics cards were; there were no hardware standards for 3D graphics akin to VGA, that last legacy of IBM’s era of dominance, much less the various SVGA standards defined by the Video Electronics Standards Association (VESA). Given that most action-oriented computer games still ran on MS-DOS, this was a serious problem.

For, being more of a collection of basic function calls than a proper operating system, MS-DOS was not known for its hardware agnosticism. Most of the folks making 3D chips did provide an MS-DOS software package for steering them, similar in concept to 3Dfx’s GLide, if seldom as optimized and elegant. But, just like GLide, such libraries worked only with the chipset for which they had been created. What was sorely needed was an intermediate layer of software to sit between games and the chipset-manufacturer-provided libraries, to automatically translate generic function calls into forms suitable for whatever particular chipset happened to exist on that particular computer. This alone could make it possible for one build of one game to run on multiple 3D chipsets. Yet such a level of hardware abstraction was far beyond the capabilities of bare-bones MS-DOS.
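To make the idea concrete, here is a minimal sketch in C of such an intermediate layer, boiled down to a table of function pointers. Everything in it is hypothetical, invented purely for illustration; no real chipset library of the era used these names. But the principle is the one described above: the game calls a generic routine, and the layer routes it to whichever chipset-specific library happens to be present.

#include <stdio.h>

/* One slot per generic 3D operation; a real layer would have many more. */
typedef struct {
    void (*draw_triangle)(const char *what);
} Driver3D;

/* Chipset-specific back ends, standing in for GLide, Rendition's library, etc. */
static void voodoo_draw(const char *what) { printf("Voodoo draws %s\n", what); }
static void verite_draw(const char *what) { printf("Verite draws %s\n", what); }

static Driver3D voodoo_driver = { voodoo_draw };
static Driver3D verite_driver = { verite_draw };

int main(void)
{
    /* Imagine this pointer being set once, after probing the hardware. */
    Driver3D *active = &voodoo_driver;
    active->draw_triangle("a generic triangle");  /* same game code...      */

    active = &verite_driver;                      /* ...different chipset,  */
    active->draw_triangle("a generic triangle");  /* no rebuild of the game */
    return 0;
}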

Absent a more reasonable solution, the only choice was to make separate versions of games for each of the various 3D chipsets. And so began the brief-lived, unlamented era of the 3D pack-in game. All of the 3D-hardware manufacturers courted the developers and publishers of popular software-rendered 3D games, dangling before them all sorts of enticements to create special versions that took advantage of their cards, more often than not to be included right in the box with them. Activision’s hugely successful giant-robot-fighting game MechWarrior 2 became the king of the pack-ins, with at least half a dozen different chipset-specific versions floating around, all paid for upfront by the board makers in cold, hard cash. (Whatever else can be said about him, Bobby Kotick has always been able to spot the seams in the gaming market where gold is waiting to be mined.)

It was an absurd, untenable situation; the game or games that came in the box were the only ones that the purchasers of some of the also-ran 3D contenders ever got a chance to play with their new toys. Gamers and chipset makers alike could only hope that, once Windows replaced MS-DOS as the gaming standard, their pain would go away.

In the meanwhile, the games studio that everyone with an interest in the 3D-acceleration sweepstakes was courting most of all was id Software — more specifically, id’s founder and tech guru, gaming’s anointed Master of 3D Algorithms, John Carmack. They all begged him for a version of Quake for their chipset.

And once again, it was Rendition that scored the early coup here. Carmack actually shared some of the Quake source code with them well before either the finished game or the finished Vérité chipset was available for purchase. Programmed by a pair of Rendition’s own staffers working with the advice and support of Carmack and Michael Abrash, the Vérité-rendered version of the game, commonly known as vQuake, came out very shortly after the software-rendered version. Carmack called it “the premier platform for Quake” — truly marketing copy to die for. Gamers too agreed that 3D acceleration made the original’s amazing graphics that much more amazing, while the makers of other 3D chipsets gnashed their teeth and seethed.

Quake with software rendering.

vQuake

Among these, of course, was the tardy 3Dfx. The first Voodoo cards appeared late, seemingly hopelessly so: well into the fall of 1996. Nor did they have the prestige and distribution muscle of a partner like Creative Labs behind them: the first two Voodoo boards rather came from smaller firms by the names of Diamond and Orchid. They sold for $300, putting them well up at the pricey end of the market — and, unlike all of the competition’s cards, these required you to have another, 2D-graphics card in your computer as well. For all of these reasons, they seemed easy enough to dismiss as overpriced white elephants at first blush. But that impression lasted only until you got a look at them in action. The Voodoo cards came complete with a list of features that none of the competition could come close to matching in the aggregate: bilinear filtering, trilinear MIP-mapping, alpha blending, fog effects, accelerated light sources. If you don’t know what those terms mean, rest assured that they made games look better and play faster than anything else on the market. This was amply demonstrated by those first Voodoo boards’ pack-in title, an otherwise rather undistinguished, typical-of-its-time shooter called Hellbender. In its new incarnation, it suddenly looked stunning.

The Orchid Righteous 3D card, one of the first two to use the Voodoo chipset. (The only consumer category as fond of bro-dude phraseology like “extreme” and “righteous” as the makers of 3D cards was men’s razors.)

The battle lines were drawn between Rendition and 3Dfx. But sadly for the former, it quickly emerged that their chipset had one especially devastating weakness in comparison to its rival: its Z-buffering support left much to be desired. And what, you ask, is Z-buffering? Read on!

One of the non-obvious problems that 3D-graphics systems must solve is the need for objects in the foreground of a scene to realistically obscure those behind them. If, at the rendering stage, we were to simply draw the objects in whatever random order they came to us, we would wind up with a dog’s breakfast of overlapping shapes. We need to have a way of depth-sorting the objects if we want to end up with a coherent, correctly rendered scene.

The most straightforward way of depth-sorting is called the Painter’s Algorithm, because it duplicates the process a human artist usually goes through to paint a picture. Let’s say our artist wants to paint a still life of an apple sitting in front of a basket of other fruits. First she will paint the basket to her satisfaction, then paint the apple right over the top of it. Similarly, when we use a Painter’s Algorithm on the computer, we first sort the whole collection of objects into a hierarchy that begins with those that are farthest from our virtual camera and ends with those closest to it. Only after this has been done do we set about the task of actually drawing them to the screen, in our sorted order from the farthest away to the closest. And so we end up with a correctly rendered image.
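In code, the Painter’s Algorithm is little more than a sort followed by a loop. Here is a minimal sketch in C, using our still life as the scene; the Object type and its fields are stand-ins of my own invention rather than anything from a real renderer.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;   /* stand-in for real geometry */
    float depth;        /* distance from the camera   */
} Object;

/* qsort comparator: farthest from the camera sorts first */
static int farthest_first(const void *a, const void *b)
{
    float da = ((const Object *)a)->depth;
    float db = ((const Object *)b)->depth;
    return (da < db) - (da > db);
}

static void draw_object(const Object *o)
{
    printf("drawing %s (depth %.1f)\n", o->name, o->depth);
}

int main(void)
{
    Object scene[] = { { "apple", 2.0f }, { "basket", 5.0f }, { "table", 9.0f } };
    size_t count = sizeof scene / sizeof scene[0];

    /* The sort is the speed bump: nothing can be drawn until it finishes. */
    qsort(scene, count, sizeof(Object), farthest_first);

    /* Draw back to front; nearer objects simply paint over farther ones. */
    for (size_t i = 0; i < count; i++)
        draw_object(&scene[i]);
    return 0;
}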

But, as so often happens in matters like this, the most logically straightforward way is far from the most efficient way of depth-sorting a 3D scene. When the number of objects involved is few, the Painter’s Algorithm works reasonably well. When the numbers get into the hundreds or thousands, however, it results in much wasted effort, as the computer ends up drawing objects that are completely obscured by other objects in front of them — i.e., objects that don’t really need to be drawn at all. Even more importantly, the process of sorting all of the objects by depth beforehand is painfully time-consuming, a speed bump that stops the rendering process dead until it is completed. Even in the 1990s, when their technology was in a laughably primitive stage compared to today, GPUs tended to emphasize parallel processing — i.e., staying constantly busy with multiple tasks at the same time. The necessity of sorting every object in a scene by depth before even getting properly started on rendering it rather threw all that out the window.

Enter the Z-buffer. Under this approach, every object is rendered right away as soon as it comes down the pipeline, used to build the appropriate part of the raster of colored pixels that, once completed, will be sent to the monitor screen as a single frame. But there comes an additional wrinkle in the form of the Z-buffer itself: a separate, parallel raster containing not the color of each pixel but its distance from the camera. Before the GPU adds an entry to the raster of pixel colors, it compares the distance of that pixel from the camera with the number in that location in the Z-buffer. If the current distance is less than the one already found there, it knows that the pixel in question should be overwritten in the main raster and that the Z-buffer raster should be updated with that pixel’s new distance from the camera. Ditto if the Z-buffer contains a null value, indicating no object has yet been drawn at that pixel. But if the current distance is larger than the (non-null) number already found there, the GPU simply moves on without doing anything more, confident in the knowledge that what it had wanted to draw should actually be hidden by what it has already drawn.
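A sketch of the same logic in C might look like the following. The buffer sizes and types are simplified for illustration; the essential point is the per-pixel comparison, with FLT_MAX playing the role of the null value.

#include <float.h>
#include <stdio.h>

#define WIDTH  640
#define HEIGHT 480

static unsigned int framebuffer[HEIGHT][WIDTH]; /* the raster of pixel colors      */
static float        zbuffer[HEIGHT][WIDTH];     /* distance from camera, per pixel */

/* Reset every depth to "infinitely far away" before each frame. */
static void clear_buffers(void)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            zbuffer[y][x] = FLT_MAX;
}

/* Called for every pixel of every object, in whatever order the objects
   happen to come down the pipeline; no sorting beforehand. */
static void plot_pixel(int x, int y, unsigned int color, float z)
{
    if (z < zbuffer[y][x]) {           /* nearer than what's already there? */
        zbuffer[y][x]     = z;         /* record the new nearest distance   */
        framebuffer[y][x] = color;     /* and overwrite the visible color   */
    }                                  /* otherwise: hidden, so do nothing  */
}

int main(void)
{
    clear_buffers();
    plot_pixel(320, 240, 0xFF0000, 9.0f); /* a far, red pixel...              */
    plot_pixel(320, 240, 0x0000FF, 2.0f); /* ...overwritten by a nearer blue  */
    plot_pixel(320, 240, 0x00FF00, 5.0f); /* a green one in between is hidden */
    printf("final color: %06X\n", framebuffer[240][320]); /* prints 0000FF    */
    return 0;
}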

There are plenty of occasions when the same pixel is drawn over twice — or many times — before reaching the screen even under this scheme, but it is nevertheless still vastly more efficient than the Painter’s Algorithm, because it keeps objects flowing through the pipeline steadily, with no hiccups caused by lengthy sorting operations. Z-buffering support was reportedly a last-minute addition to the Vérité chipset, and it showed. Turning Z-buffering on for 100-percent realistic rendering on these chips cut their throughput almost in half; the Voodoo chipset, by contrast, just said, “No worries!,” and kept right on trucking. This was an advantage of titanic proportions. It eventually emerged that the programmers at Rendition had been able to get Quake running acceptably on the Vérité chips only by kludging together their own depth-sorting algorithms in software. With Voodoo, programmers wouldn’t have to waste time with stuff like that.

But surprisingly, the game that blew open the doors for the Voodoo chipset wasn’t Quake or anything else from id. It was rather a little something called Tomb Raider, from the British studio Core Design, a game which used a behind-the-back third-person perspective rather than the more typical first-person view — the better to appreciate its protagonist, the buxom and acrobatic female archaeologist Lara Croft. In addition to Lara’s considerable assets, Tomb Raider attracted gamers with its unprecedentedly huge and wide-open 3D environments. (It will be the subject of my next article, for those interested in reading more about its massive commercial profile and somewhat controversial legacy.)

In November of 1996, when Tomb Raider had been out for less than a month, Core put a Voodoo patch for it up on their website. Gamers were blown away. “It’s a totally new game!” gushed one on Usenet. “It was playable but a little jerky without the patch, but silky smooth to play and beautiful to look at with the patch.” “The level of detail you get with the Voodoo chip is amazing!” enthused another. Or how about this for a ringing testimonial?

I had been playing the regular Tomb Raider on my PC for about two weeks before I got the patch, with about ten people seeing the game, and not really saying anything regarding how amazing it was. When I got the accelerated patch, after about four days, every single person who has seen the game has been in awe watching the graphics and how smooth [and] lifelike the movement is. The feel is different, you can see things much more clearly, it’s just a more enjoyable game now.

Tomb Raider became the biggest hit of the 1996 holiday season, and tens if not hundreds of thousands of Voodoo-based 3D cards joined it under Christmas trees.

Tomb Raider with software rendering.

Tomb Raider with a Voodoo card.

In January of 1997, id released GLQuake, a new version of that game that supported the Voodoo chipset. In telling contrast to the Vérité-powered vQuake, which had been coded by Rendition’s programmers, GLQuake had been taken on by John Carmack as a personal project. The proof was in the pudding; this Quake ran faster and looked better than either of the previous ones. Running on a machine with a 200 MHz Intel Pentium processor and a Voodoo card, GLQuake could manage 70 frames per second, compared to 41 frames for the software-rendered version, whilst appearing much more realistic and less pixelated.

GLQuake

One last stroke of luck put the finishing touch on 3Dfx’s destiny of world domination: the price of memory dropped precipitously, thanks to a number of new RAM-chip factories that came online all at once in East Asia. (The factories had been built largely to feed the memory demands of Windows 95, the straw that was stirring the drink of the entire computer industry.) The Voodoo chipset required 4 MB of memory to operate effectively — an appreciable quantity in those days, and a big reason why the cards that used it tended to cost almost twice as much as those based on the Vérité chips, despite lacking the added complications and expense of 2D support. But with the drop in memory prices, it suddenly became practical to sell a Voodoo card for under $200. Rendition could also lower their prices somewhat thanks to the memory windfall, of course, but at these lower price points the dollar difference wasn’t as damaging to 3Dfx. After all, the Voodoo cards were universally acknowledged to be the class of the industry. They were surely worth paying a little bit of a premium for. By the middle of 1997, the Voodoo chipset was everywhere, the Vérité one left dead at the side of the road. “If you want full support for a gamut of games, you need to get a 3Dfx card,” wrote Computer Gaming World.

These were heady times at 3Dfx, which had become almost overnight the most hallowed name in hardcore action gaming outside of id Software, all whilst making an order of magnitude more money than id, whose business model under John Carmack was hardly fine-tuned to maximize revenues. In a comment he left recently on this site, reader Captain Kal said that, when it comes to 3D gaming in the late 1990s, “one company springs to my mind without even thinking: 3Dfx. Yes, we also had 3D solutions from ATI, NVIDIA, or even S3, but Voodoo cards created the kind of dedication that I hadn’t seen since the Amiga days.” The comparison strikes me as thoroughly apropos.

3Dfx brought in a high-profile CEO named Greg Ballard, formerly of Warner Music and the videogame giant Capcom, to oversee a smashingly successful initial public offering in June of 1997. He and the three thirty-something founders were the oldest people at the company. “Most of the software engineers were [in their] early twenties, gamers through and through, loved games,” says Scott Sellers. “Would code during the day and play games at night. It was a culture of fun.” Their offices stood at the eighth hole of a golf course in Sunnyvale, California. “We’d sit out there and drink beer,” says Ross Smith. “And you’d have to dodge incoming golf balls a bit. But the culture was great.” Every time he came down for a visit, says their investing angel Gordon Campbell,

they’d show you something new, a new demo, a new mapping technique. There was always something. It was a very creative environment. The work hard and play hard thing, that to me kind of was Silicon Valley. You went out and socialized with your crew and had beer fests and did all that kind of stuff. And a friendly environment where everybody knew everybody and everybody was not in a hierarchy so much as part of the group or the team.

I think the thing that was added here was, it’s the gaming industry. And that was a whole new twist on it. I mean, if you go to the trade shows, you’d have guys that would show up at our booth with Dracula capes and pointed teeth. I mean, it was just crazy.

Gary Tarolli, Scott Sellers, and Greg Ballard do battle with a dangerous houseplant. The 1990s were wild and crazy times, kids…

While the folks at 3Dfx were working hard and playing hard, an enormously consequential advancement in the field of software was on the verge of transforming the computer-games industry. As I noted previously, in 1996 most hardcore action games were still being released for MS-DOS. In 1997, however, that changed in a big way. With the exception of only a few straggling Luddites, game developers switched over to Windows 95 en masse. Quake had been an MS-DOS game; Quake II, which would ship at the end of 1997, ran under Windows. The same held true for the original Tomb Raider and its 1997 sequel, as it did for countless others.

Gaming was made possible on Windows 95 by Microsoft’s DirectX libraries, which finally let programmers do everything in Windows that they had once done in MS-DOS, with only a slight speed penalty if any, all while giving them the welcome luxury of hardware independence. That is to say, all of the fiddly details of disparate video and sound cards and all the rest were abstracted away into Windows device drivers that communicated automatically with DirectX to do the needful. It was an enormous burden lifted off of developers’ shoulders. Ditto gamers, who no longer had to futz about for hours with cryptic “autoexec.bat” and “config.sys” files, searching out the exact combination of arcane incantations that would allow each game they bought to run optimally on their precise machine. One no longer needed to be a tech-head simply to install a game.

In its original release of September 1995, the full DirectX suite consisted of DirectDraw for 2D pixel graphics, DirectSound for sound and music, DirectInput for managing joysticks and other game-centric input devices, and DirectPlay for networked multiplayer gaming. It provided no support for doing 3D graphics. But never fear, Microsoft said: 3D support was coming. Already in February of 1995, they had purchased a British company called RenderMorphics, the creator of Reality Lab, a hardware-agnostic 3D library. As promised, Microsoft added Direct3D to the DirectX collection with the latter’s 2.0 release, in June of 1996.

But, as the noted computer scientist Andrew Tanenbaum once said, “the nice thing about standards is that you have so many to choose from.” For the next several years, Direct3D would compete with another library serving the same purpose: a complete, hardware-agnostic Windows port of SGI’s OpenGL, whose most prominent booster was no less leading a light than John Carmack. Direct3D would largely win out in the end among game developers despite Carmack’s endorsement of its rival, but we need not concern ourselves overmuch with the details of that tempest in a teacup here. Suffice to say that even the most bitter partisans on one side of the divide or the other could usually agree that both Direct3D and OpenGL were vastly preferable to the bad old days of chipset-specific 3D games.

Unfortunately, 3Dfx, rather feeling their oats after all of their success, responded to these developments with the first of a series of bad decisions that would cause their time at the top of the 3D-graphics heap to be a relatively short one.

Like all of the others, the Voodoo chipset could be used under Windows with either Direct3D or OpenGL. But there were some features on the Voodoo chips that the current implementations of those libraries didn’t support. 3Dfx was worried, reasonably enough on the face of it, about a “least-common-denominator effect” which would cancel out the very real advantages of their 3D chipset and make one example of the breed more or less as good as any other. However, instead of working with the folks behind Direct3D and OpenGL to get support for the Voodoo chips’ special features into those libraries, they opted to release a Windows version of GLide, and to strongly encourage game developers to keep working with it instead of either of the more hardware-agnostic alternatives. “You don’t want to just have a title 80 percent as good as it could be because your competitors are all going to be at 100 percent,” they said pointedly. They went so far as to start speaking of Voodoo-equipped machines as a whole new platform unto themselves, separate from more plebeian personal computers.

It was the talk and actions of a company that had begun to take its own press releases a bit too much to heart. But for a time 3Dfx got away with it. Developers coded for GLide in addition to or instead of Direct3D or OpenGL, because you really could do a lot more with it and because the cachet of the “certified” 3Dfx logo that using GLide allowed them to put on their boxes really was huge.

In March of 1998, the first cards with a new 3Dfx chipset, known as Voodoo2, began to appear. Voodoo2 boasted twice the overall throughput of its predecessor, and could handle a screen resolution of 800 × 600 instead of just 640 × 480; you could even join two of the new cards together to get still better performance and higher resolutions. This latest chipset only seemed to cement 3Dfx’s position as the class of their field.

The bottom line reflected this. 3Dfx was, in the words of their new CEO Greg Ballard, “a rocket ship.” In 1995, they earned $4 million in revenue; in 1996, $44 million; in 1997, $210 million; and in 1998, their peak year, $450 million. And yet their laser focus on selling the Ferraris of 3D acceleration was blinding Ballard and his colleagues to the potential of 3D Toyotas, where the biggest money of all was waiting to be made.

Over the course of the second half of the 1990s, 3D GPUs went from being exotic pieces of kit known only to hardcore gamers to being just another piece of commodity hardware found in almost all computers. 3Dfx had nothing to do with this significant shift. Instead they all but ignored this so-called “OEM” (“Original Equipment Manufacturer”) side of the GPU equation: chipsets that weren’t the hottest or the sexiest on the market, but that were cheap and easy to solder right onto the motherboards of low-end and mid-range machines bearing such unsexy name plates as Compaq and Packard Bell. Ironically, Gordon Campbell had made a fortune with Chips & Technologies selling just such commodity-grade 2D graphics chipsets. But 3Dfx was obstinately determined to fly above the OEM segment, offering “premium” products only. “It doesn’t matter if 20 million people have one of our competitors’ chips,” said Scott Sellers in 1997. “How many of those people are hardcore gamers? How many of those people are buying games?” “I can guarantee that 100 percent of 3Dfx owners are buying games,” chimed in a self-satisfied-sounding Gary Tarolli.

The obvious question to ask in response was why it should matter to 3Dfx how many games — or what types of games — the users of their chips were buying, as long as they were buying gadgets that contained their chips. While 3Dfx basked in their status as the hardcore gamer’s favorite, other companies were selling many more 3D chips, admittedly at much less of a profit on a chip-per-chip basis, at the OEM end of the market. Among these was a firm known as NVIDIA, which had been founded on the back of a napkin in a Denny’s diner in 1993. NVIDIA’s first attempt to compete head to head with 3Dfx at the high end was underwhelming at best: released well after the Voodoo2 chipset, the RIVA TNT ran so hot that it required a noisy onboard cooling fan, and yet still couldn’t match the Voodoo2’s performance. By that time, however, NVIDIA was already building a lucrative business out of cheaper, simpler chips on the OEM side, even as they were gaining the wisdom they would need to mount a more credible assault on the hardcore-gamer market. In late 1998, 3Dfx finally seemed to be waking up to the fact that they would need to reach beyond the hardcore to continue their rise, when they released a new chipset called Voodoo Banshee which wasn’t quite as powerful as the Voodoo2 chips but could do conventional 2D as well as 3D graphics, meaning its owners would not be forced to buy a second video card just in order to use their computers.

But sadly, they followed this step forward with an absolutely disastrous mistake. You’ll remember that prior to this point 3Dfx had sold their chips only to other companies, who then incorporated them into add-on boards of their own design, in the same way that Intel sold microprocessors to computer makers rather than directly to consumers (aside from the build-your-own-rig hobbyists, that is). This business model had made sense for 3Dfx when they were cash-strapped and hadn’t a hope of building retail-distribution channels equal to those of the established board makers. Now, though, they were flush with cash, and enjoyed far better name recognition than the companies that made the boards which used their chips; even the likes of Creative Labs, who had long since dropped Rendition and were now selling plenty of 3Dfx boards, couldn’t touch them in terms of prestige. Why not cut out all these middlemen by manufacturing their own boards using their own chips and selling them directly to consumers with only the 3Dfx name on the box? They decided to do exactly that with their third state-of-the-art 3D chipset, the predictably named Voodoo3, which was ready in the spring of 1999.

Those famous last words apply: “It seemed like a good idea at the time.” With the benefit of hindsight, we can see all too clearly what a terrible decision it actually was. The move into the board market became, says Scott Sellers, the “anchor” that would drag down the whole company in a rather breathtakingly short span of time: “We started competing with what used to be our own customers” — i.e., the makers of all those earlier Voodoo boards. Then, too, 3Dfx found that the logistics of selling a polished consumer product at retail, from manufacturing to distribution to advertising, were much more complex than they had reckoned with.

Still, they might — just might — have been able to figure it all out and make it work, if only the Voodoo3 chipset had been a bit better. As it was, Voodoo3 was an upgrade to be sure, but not quite as much of one as everyone had been expecting. In fact, some now began to point out that even the Voodoo2 chips hadn’t been that great a leap: they too were better than their predecessors, yes, but that was more down to ever-falling memory prices and ever-improving chip-fabrication technologies than any groundbreaking innovations in their fundamental designs. It seemed that 3Dfx had started to grow complacent some time ago.

NVIDIA saw their opening and made the most of it. They introduced a new line of their own, called the TNT2, which outdid its 3Dfx competitor in at least one key metric: it could do 24-bit color, giving it almost 17 million shades of onscreen nuance, compared to just over 65,000 in the case of Voodoo3. For the first time, 3Dfx’s chips were not the unqualified, undisputed technological leaders. To make matters worse, NVIDIA had been working closely with Microsoft in exactly the way that 3Dfx had never found it in their hearts to do, ensuring that every last feature of their chips was well-supported by the increasingly dominant Direct3D libraries.

And then, as the final nail in the coffin, there were all those third-party board makers 3Dfx had so rudely jilted when they decided to take over that side of the business themselves. These had nowhere left to go but into NVIDIA’s welcoming arms. And needless to say, these business partners spurned were highly motivated to make 3Dfx pay for their betrayal.

NVIDIA was on a roll now. They soon came out with yet another new chipset, the GeForce 256, which had a “Transform & Lighting” (T&L) engine built in, a major conceptual advance. And again, the new technology was accessible right from the start through Direct3D, thanks to NVIDIA’s tight relationship with Microsoft. Meanwhile the 3Dfx chips still needed GLide to perform at their best. With those chips’ sales now plummeting, more and more game developers decided the oddball library just wasn’t worth the trouble anymore. By the end of 1999, a 3Dfx death spiral that absolutely no one had seen coming at the start of the year was already well along. NVIDIA was rapidly sewing up both the high end and the low end, leaving 3Dfx with nothing.

In 2000, NVIDIA continued to go from strength to strength. Their biggest challenger at the hardcore-gamer level that year was not 3Dfx, but rather ATI, who arrived on the scene with a new architecture known as Radeon. 3Dfx attempted to right the ship with a two-pronged approach: a Voodoo4 chipset aimed at the long-neglected budget market, and a Voodoo5 aimed at the high end. Both had potential, but the company was badly strapped for cash by now, and couldn’t afford to give them the launch they deserved. In December of 2000, 3Dfx announced that they had agreed to sell out to NVIDIA, who thought they had spotted some bits and bobs in their more recent chips that they might be able to make use of. And that, as they say, was that.

3Dfx was a brief-burning comet by any standard, a company which did everything right up to the instant when someone somewhere flipped a switch and it suddenly started doing everything wrong instead. But whatever regrets Gary Tarolli, Scott Sellers, and Ross Smith may have about the way it all turned out, they can rest secure in the knowledge that they changed not just gaming but computing in general forever. Their vanquisher NVIDIA had revenues of almost $27 billion last year, on the strength of GPUs which are as far beyond the original Voodoo chips as an F-35 is beyond the Wright Brothers’ flier, and which are at the forefront not just of 3D graphics but of a whole new trend toward “massively parallel” computing.

And yet even today, the 3Dfx name and logo can still send a little tingle of excitement running down the spines of gamers of a certain age, just as that of the Amiga can among some just slightly older. For a brief few years there, over the course of one of the most febrile, chaotic, and yet exciting periods in all of gaming history, having a Voodoo card in your computer meant that you had the best graphics money could buy. Most of us wouldn’t want to go back to the days of needing to constantly tinker with the innards of our computers, of dropping hundreds of dollars on the latest and the greatest and hoping that publishers would still be supporting it in six months, of poring over magazines trying to make sense of long lists of arcane bullet points that seemed like fragments of a particularly esoteric PhD thesis (largely because they originally were). No, we wouldn’t want to go back; those days were kind of ridiculous. But that doesn’t mean we can’t look back and smile at the extraordinary technological progression we were privileged to witness over such a disarmingly short period of time.






(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke. Computer Gaming World of November 1995, January 1996, July 1996, November 1996, December 1996, September 1997, October 1997, November 1997, and April 1998; Next Generation of October 1997 and January 1998; Atomic of June 2003; Game Developer of December 1996/January 1997 and February/March 1997. Online sources include “3Dfx and Voodoo Graphics — The Technologies Within” at The Overclocker, former 3Dfx CEO Greg Ballard’s lecture for Stanford’s Entrepreneurial Thought Leader series, the Computer History Museum’s “oral history” with the founders of 3Dfx, Fabian Sanglard’s reconstruction of the workings of the Vérité chipset and the Voodoo 1 chipset, “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site, and “A Fallen Titan’s Final Glory” by Joel Hruska at the long-defunct Sudhian Media. Also, the Usenet discussions that followed the release of the 3Dfx patch for Tomb Raider and Nicol Bolas’s crazily detailed reply to the Stack Exchange question “Why Do Game Developers Prefer Windows?”.)

 


The Next Generation in Graphics, Part 1: Three Dimensions in Software (or, Quake and Its Discontents)

“Mathematics,” wrote the historian of science Carl Benjamin Boyer many years ago, “is as much an aspect of culture as it is a collection of algorithms.” The same might be said about the mathematical algorithms we choose to prioritize — especially in these modern times, when the right set of formulas can be worth many millions of dollars, can be trade secrets as jealously guarded as the recipes for Coca-Cola or McDonald’s Special Sauce.

We can learn much about the tech zeitgeist from those algorithms the conventional wisdom thinks are most valuable. At the very beginning of the 1990s, when “multimedia” was the buzzword of the age and the future of games was believed to lie with “interactive movies” made out of video clips of real actors, the race was on to develop video codecs: libraries of code able to digitize footage from the analog world and compress it to a fraction of its natural size, thereby making it possible to fit a reasonable quantity of it on CDs and hard drives. This was a period when Apple’s QuickTime was regarded as a killer app in itself, when Philips’s ill-fated CD-i console could be delayed for years by the lack of a way to get video to its screen quickly and attractively.

It is a rule in almost all kinds of engineering that, the more specialized a device is, the more efficiently it can perform the tasks that lie within its limited sphere. This rule holds true as much in computing as anywhere else. So, when software proved able to stretch only so far in the face of the limited general-purpose computing power of the day, some started to build their video codecs into specialized hardware add-ons.

Just a few years later, after the zeitgeist in games had shifted, the whole process repeated itself in a different context.

By the middle years of the decade, with the limitations of working with canned video clips becoming all too plain, interactive movies were beginning to look like a severe case of the emperor’s new clothes. The games industry therefore shifted its hopeful gaze to another approach, one that would prove a much more lasting transformation in the way games were made. This 3D Revolution did have one point of similarity with the mooted and then abandoned meeting of Silicon Valley and Hollywood: it too was driven by algorithms, implemented first in software and then in hardware.

It was different, however, in that the entire industry looked to one man to lead it into its algorithmic 3D future. That man’s name was John Carmack.



Whether they happen to be pixel art hand-drawn by human artists or video footage captured by cameras, 2D graphics already exist on disk before they appear on the monitor screen. And therein lies the source of their limitations. Clever programmers can manipulate them to some extent — pixel art generally more so than digitized video — but the possibilities are bounded by the fundamentally static nature of the source material. 3D graphics, however, are literally drawn by the computer. They can go anywhere and do just about anything. For, while 2D graphics are stored as a concrete grid of pixels, 3D graphics are described using only the abstract language of mathematics — a language able to describe not just a scene but an entire world, assuming you have a powerful enough computer running a good enough algorithm.

Like so many things that get really complicated really quickly, the basic concepts of 3D graphics are disarmingly simple. The process behind them can be divided into two phases: the modeling phase and the rendering, or rasterization, phase.

It all begins with simple two-dimensional shapes of the sort we all remember from middle-school geometry, each defined as a collection of points on a plane and straight lines connecting them together. By combining and arranging these two-dimensional shapes, or surfaces, together in three-dimensional space, we can make solids — or, in the language of computerized 3D graphics, objects.

Here we see how 3D objects can be made ever more complex by building them out of ever more surfaces. The trade-off is that more complex objects require more computing power to render in a timely fashion.

Once we have a collection of objects, we can put them into a world space, wherever we like and at whatever angle of orientation we like. This world space is laid out as a three-dimensional grid, with its point of origin — i.e., the point where X, Y, and Z are all zero — wherever we wish it to be. In addition to our objects, we also place within it a camera — or, if you like, an observer in our world — at whatever position and angle of orientation we wish. At their simplest, 3D graphics require nothing more at the modeling phase.
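For the programming-minded, the modeling phase thus boils down to a handful of data structures. What follows is a sketch of my own devising, not any particular engine’s: a mesh of points and the surfaces connecting them, placed in world space alongside a camera.

/* The building blocks of the modeling phase, in C. All names here are
   illustrative inventions, not taken from any real engine. */

typedef struct { float x, y, z; } Vec3;

/* A surface: here a triangle, as three indices into a vertex list. */
typedef struct { int a, b, c; } Surface;

/* An object: two-dimensional surfaces joined in three-dimensional space. */
typedef struct {
    Vec3    *vertices;      /* the points                            */
    int      vertex_count;
    Surface *surfaces;      /* the shapes connecting them            */
    int      surface_count;
    Vec3     position;      /* where the object sits in world space  */
    Vec3     orientation;   /* its angle of orientation on each axis */
} Object;

/* The observer in our world, placed just like any other object. */
typedef struct {
    Vec3  position;
    Vec3  orientation;
    float field_of_view;
} Camera;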

We sometimes call the second phase the “rasterization” phase in reference to the orderly two-dimensional grid of pixels which make up the image seen on a monitor screen, which in computer-science parlance is known as a raster. The whole point of this rasterization phase, then, is to make our computer’s monitor a window into our imaginary world from the point of view of our imaginary camera. This entails converting said world’s three dimensions back into our two-dimensional raster of pixels, using the rules of perspective that have been understood by human artists since the Renaissance.

We can think of rasterizing as observing a scene through a window screen. Each square in the mesh is one pixel, which can be exactly one color. The whole process of 3D rendering ultimately comes down to figuring out what each of those colors should be.
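Reduced to code, the heart of that window-screen idea is a perspective divide. Here is a minimal sketch in C, assuming a point that has already been expressed relative to the camera (camera at the origin, looking down the positive Z axis); the parameter d, the distance from the eye to the imaginary window screen, is an illustrative convention of mine rather than a standard name.

#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Project a camera-space point onto a width-by-height raster of pixels. */
static void project(Vec3 p, float d, int width, int height, int *sx, int *sy)
{
    /* The rule of perspective: apparent size shrinks with distance. */
    float px = p.x * d / p.z;
    float py = p.y * d / p.z;

    /* Move the origin from the screen's center to its top-left corner,
       with Y growing downward as it does in a raster. */
    *sx = (int)(px + width / 2.0f);
    *sy = (int)(height / 2.0f - py);
}

int main(void)
{
    Vec3 corner = { 1.0f, 1.0f, 4.0f };  /* a point in camera space */
    int sx, sy;
    project(corner, 256.0f, 640, 480, &sx, &sy);
    printf("lands on pixel (%d, %d)\n", sx, sy); /* prints (384, 176) */
    return 0;
}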

The most basic of all 3D graphics are of the “wire-frame” stripe, which attempt to draw only the lines that form the edges of their surfaces. They were seen fairly frequently on microcomputers as far back as the early 1980s, the most iconic example undoubtedly being the classic 1984 space-trading game Elite.

Even in something as simple as Elite, we can begin to see how 3D graphics blur the lines between a purely presentation-level technology and a full-blown world simulation. When we have one enemy spaceship in our sights in Elite, there might be several others above, behind, or below us, which the 3D engine “knows” about but which we may not. Combined with a physics engine and some player and computer agency in the model world (taking here the form of lasers and thrusters), it provides the raw materials for a game. Small wonder that so many game developers came to see 3D graphics as such a natural fit.

But, for all that those wire frames in Elite might have had their novel charm in their day, programmers realized that the aesthetics of 3D graphics had to get better for them to become a viable proposition over the long haul. This realization touched off an algorithmic arms race that is still ongoing to this day. The obvious first step was to paint in the surfaces of each solid in single blocks of color, as the later versions of Elite that were written for 16-bit rather than 8-bit machines often did. It was an improvement in a way, but it still looked jarringly artificial, even against a spartan star field in outer space.

The next way station on the road to a semi-realistic-looking computer-generated world was light sources of varying strengths, positioned in the world with X, Y, and Z coordinates of their own, casting their illumination and shadows realistically on the objects to be found there.

A 3D scene with light sources.
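The simplest of these lighting calculations is worth seeing in code. This is a sketch of Lambertian diffuse shading, a standard textbook technique rather than necessarily what any given 1990s engine did: a surface’s brightness is proportional to how squarely it faces the light.

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(Vec3 v)
{
    float len = sqrtf(dot(v, v));
    Vec3 n = { v.x / len, v.y / len, v.z / len };
    return n;
}

/* Brightness of a surface with normal n, lit from direction l by a source
   of the given strength: full when facing it head-on, zero when facing away. */
static float diffuse(Vec3 n, Vec3 l, float strength)
{
    float d = dot(normalize(n), normalize(l));
    return d > 0.0f ? d * strength : 0.0f;
}

int main(void)
{
    Vec3 floor_normal = { 0.0f, 1.0f, 0.0f }; /* a floor, facing straight up    */
    Vec3 toward_light = { 0.0f, 1.0f, 1.0f }; /* a light up and off to one side */
    printf("brightness: %.2f\n", diffuse(floor_normal, toward_light, 1.0f));
    return 0; /* prints 0.71: the floor catches the light at 45 degrees */
}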

The final step was to add textures, small pictures that were painted onto surfaces in place of uniform blocks of color; think of the pitted paint job of a tired X-Wing fighter or the camouflage of a Sherman tank. Textures introduced an enormous degree of complication at the rasterization stage; it wasn’t easy for 3D engines to make them look believable from a multitude of different lines of sight. That said, believable lighting was almost as complicated. Textures or lighting, or both, were already the fodder for many an academic thesis before microcomputers even existed.

A 3D scene with light sources and textures.

In the more results-focused milieu of commercial game development, where what was possible was determined largely by which types of microprocessors Intel and Motorola were selling the most of in any given year, programmers were forced to choose between compromised visions of the academic ideal. These broke down into two categories, neatly exemplified by the two most profitable computer games of the 1990s. Those games that followed in one or the other’s footsteps came to be known as the “Myst clones” and the “DOOM clones.” They could hardly have been more dissimilar in personality, yet they were both symbols of a burgeoning 3D revolution.

The Myst clones got their name from a game developed by Cyan Studios and published by Brøderbund in September of 1993, which went on to sell at least 6 million copies as a boxed retail product and quite likely millions more as a pack-in of one description or another. Myst and the many games that copied its approach tended to be, as even their most strident detractors had to admit, rather beautiful to look at. This was because they didn’t attempt to render their 3D imagery in real time; their rendering was instead done beforehand, often on beefy workstation-class machines, then captured as finished rasters of pixels on disk. Given that they worked with graphics that needed to be rendered only once and could be allowed to take hours to do so if necessary, the creators of games like this could pull out all the stops in terms of textures, lighting, and the sheer number and complexity of the 3D solids that made up their worlds.

These games’ disadvantage — a pretty darn massive one in the opinion of many players — was that their scope of interactive potential was as sharply limited in its way as that of all those interactive movies built around canned video clips that the industry was slowly giving up on. They could present their worlds to their players only as a collection of pre-rendered nodes to be jumped between, could do nothing on the fly. These limitations led most of their designers to build their gameplay around set-piece puzzles found in otherwise static, non-interactive environments, which most players soon started to find a bit boring. Although the genre had its contemplative pleasures and its dedicated aficionados who appreciated them, its appeal as anything other than a tech demo — the basis on which the original Myst was primarily sold — turned out to be the very definition of niche, as the publishers of Myst clones belatedly learned to their dismay. The harsh reality became undeniable once Riven, the much-anticipated, sumptuously beautiful sequel to Myst, “only” sold 1.5 million copies when it finally appeared four years after its hallowed predecessor. With the exception only of Titanic: Adventure out of Time, which owed its fluke success to a certain James Cameron movie with which it happened to share a name and a setting, no other game of this style ever cracked half a million in unit sales. The genre has been off the mainstream radar for decades now.

The DOOM clones, on the other hand, have proved a far more enduring fixture of mainstream gaming. They took their name, of course, from the landmark game of first-person carnage which the energetic young men of id Software released just a couple of months after Myst reached store shelves. John Carmack, the mastermind of the DOOM engine, managed to present a dynamic, seamless, apparently 3D world in place of the static nodes of Myst, and managed to do it in real time, even on a fairly plebeian consumer-grade computer. He did so first of all by being a genius programmer, able to squeeze every last drop out of the limited hardware at his disposal. And then, when even that wasn’t enough to get the job done, he threw out feature after feature that the academics whose papers he had pored over insisted was essential for any “real” 3D engine. His motto was, if you can’t get it done honestly, cheat, by hard-coding assumptions about the world into your algorithms and simply not letting the player — or the level designer — violate them. The end result was no Myst-like archetype of beauty in still screenshots. It pasted 2D sprites into its world whenever there wasn’t horsepower enough to do real modeling, had an understanding of light and its properties that is most kindly described as rudimentary, and couldn’t even handle sloping floors or ceilings, or walls that weren’t perfectly vertical. Heck, it didn’t even let you look up or down.

And absolutely none of that mattered. DOOM may have looked a bit crude in freeze-frame, but millions of gamers found it awe-inspiring to behold in motion. Indeed, many of them thought that Carmack’s engine, combined with John Romero and Sandy Petersen’s devious level designs, gave them the most fun they’d ever had sitting behind a computer. This was immersion of a level they’d barely imagined possible, the perfect demonstration of the real potential of 3D graphics — even if it actually was, as John Carmack would be the first to admit, only 2.5D at best. No matter; DOOM felt like real 3D, and that was enough.

A hit game will always attract imitators, and a massive hit will attract legions of them. Accordingly, the market was soon flooded with, if anything, even more DOOM clones than Myst clones, all running in similar 2.5D engines, the product of both intense reverse engineering of DOOM itself and Carmack’s habit of talking freely about how he made the magic happen to pretty much anyone who asked him, no matter how much his colleagues at id begged him not to. “Programming is not a zero-sum game,” he said. “Teaching something to a fellow programmer doesn’t take it away from you. I’m happy to share what I can because I’m in it for the love of programming.” Carmack was elevated to veritable godhood, the prophet on the 3D mountaintop passing down whatever scraps of wisdom he deigned to share with the lesser mortals below.

Seen in retrospect, the DOOM clones are, like the Myst clones, a fairly anonymous lot for the most part, doubling down on transgressive ultra-violence instead of majestic isolation, but equally failing to capture a certain ineffable something that lay beyond the nuts and bolts of their inspiration’s technology. The most important difference between the Myst and DOOM clones came down to the filthy lucre of dollar and unit sales: whereas Myst‘s coattails proved largely illusory, producing few other hits, DOOM‘s were anything but. Most people who had bought Myst, it seemed, were satisfied with that single purchase; people who bought DOOM were left wanting more first-person mayhem, even if it wasn’t quite up to the same standard.

The one DOOM clone that came closest to replacing DOOM itself in the hearts of gamers was known as Duke Nukem 3D. Perhaps that isn’t surprising, given its pedigree: it was a product of 3D Realms, the rebranded incarnation of Scott Miller’s Apogee Software. Whilst trading under the earlier name, Miller had pioneered the episodic shareware model of game distribution, a way of escaping the heavy-handed group-think of the major boxed-game publishers and their tediously high-concept interactive movies in favor of games that were exponentially cheaper to develop, but also rawer, more visceral, more in line with what the teenage and twenty-something males who still constituted the large majority of dedicated gamers were actually jonesing to play. Miller had discovered the young men of id when they were still working for a disk magazine in Shreveport, Louisiana. He had then convinced them to move to his own glossier, better-connected hometown of Dallas, Texas, and distributed their proto-DOOM shooter Wolfenstein 3D to great success. His protégés had elected to strike out on their own when the time came to release DOOM, but it’s fair to say that that game would probably never have come to exist at all if not for their shareware Svengali. And even if it had, it probably wouldn’t have made them so much money; Jay Wilbur, id’s own tireless guerilla marketer, learned most of his tricks from watching Scott Miller.

Still a man with a keen sense of what his customers really wanted, Miller re-branded Apogee as 3D Realms as a way of signifying its continuing relevance amidst the 3D revolution that took the games industry by storm after DOOM. Then he, his junior partner George Broussard, and 3D Realms’s technical mastermind Ken Silverman set about making a DOOM-like engine of their own, known as Build, which they could sell to other developers who wanted to get up and running quickly. And they used the same engine to make a game of their own, which would turn out to be the most memorable of all those built with Build.

Duke Nukem 3D‘s secret weapon was one of the few boxes in the rubric of mainstream gaming success that DOOM had failed to tick off: a memorable character to serve as both star and mascot. First conceived several years earlier for a pair of Apogee 2D platformers, Duke Nukem was Joseph Lieberman’s worst nightmare, an unrepentant gangster with equally insatiable appetites for bombs and boobies, a fellow who “thinks the Bureau of Alcohol, Tobacco, and Firearms is a convenience store,” as his advertising trumpeted. His latest game combined some of the best, tightest level design yet seen outside of DOOM with a festival of adolescent transgression, from toilet water that served as health potions to strippers who would flash their pixelated breasts at you for the price of a dollar bill. The whole thing was topped off with the truly over-the-top quips of Duke himself: “I’m gonna rip off your head and shit down your neck!”; “Your face? Your ass? What’s the difference?” It was an unbeatable combination, proof positive that Miller’s ability to read his market was undimmed. Released in January of 1996, relatively late in the day for this generation of 3D — or rather 2.5D — technology, Duke Nukem 3D became by some reports the best-selling single computer game of that entire year. It is still remembered with warm nostalgia today by countless middle-aged men who would never want their own children to play a game like this. And so the cycle of life continues…

In a porno shop, shooting it out with policemen who are literally pigs…

Duke Nukem 3D was a triumph of design and attitude rather than technology; in keeping with most of the DOOM clones, the Build engine’s technical innovations over its inspiration were fairly modest. John Carmack scoffed that his old friends’ creation looked like it was “held together with bubble gum.”

The game that did push the technology envelope farthest, albeit without quite managing to escape the ghetto of the DOOM clones, was also a sign in another way of how quickly DOOM was changing the industry: rather than stemming from scruffy veterans of the shareware scene like id and 3D Realms, it came from the heart of the industry’s old-money establishment — from no less respectable and well-financed an entity than George Lucas’s very own games studio.

LucasArts’s Dark Forces was a shooter set in the Star Wars universe, which disappointed everyone right out of the gate with the news that it was not going to let you fight with a light saber. The developers had taken a hard look at it, they said, but concluded in the end that it just wasn’t possible to pull off satisfactorily within the hardware specifications they had to meet. This failing was especially ironic in light of the fact that they had chosen to name their new 2.5D engine “Jedi.” But they partially atoned for it by making the Jedi engine capable of hosting unprecedentedly enormous levels — not just horizontally so, but vertically as well. Dark Forces was full of yawning drop-offs and cavernous open spaces, the likes of which you never saw in DOOM — or Duke Nukem 3D, for that matter, despite its release date of almost a year after Dark Forces. Even more importantly, Dark Forces felt like Star Wars, right from the moment that John Williams’s stirring theme song played over stage-setting text which scrolled away into the frame rather than across it. Although they weren’t allowed to make any of the movies’ characters their game’s star, LucasArts created a serviceable if slightly generic stand-in named Kyle Katarn, then sent him off on vertigo-inducing chases through huge levels stuffed to the gills with storm troopers in urgent need of remedial gunnery training, just like in the movies. Although Dark Forces toned down the violence that so many other DOOM clones were making such a selling point out of — there was no blood whatsoever on display here, just as there had not been in the movies — it compensated by giving gamers the chance to live out some of their most treasured childhood media memories, at a time when there were no new non-interactive Star Wars experiences to be had.

Unfortunately, LucasArts’s design instincts weren’t quite on a par with their presentation and technology. Dark Forces's levels were horribly confusing, providing little guidance about what to do or where to go in spaces whose sheer three-dimensional size and scope made the two-dimensional auto-map all but useless. Almost everyone who goes back to play the game today agrees that it just isn’t as much fun as it ought to be. At the time, though, the Star Wars connection and its technical innovations were enough to make Dark Forces a hit almost the equal of DOOM and Duke Nukem 3D. Even John Carmack made a point of praising LucasArts for what they had managed to pull off on hardware not much better than that demanded by DOOM.

Yet everyone seemed to be waiting on Carmack himself, the industry’s anointed Master of 3D Algorithms, to initiate the real technological paradigm shift. It was obvious what that must entail: an actual, totally non-fake rendered-on-the-fly first-person 3D engine, without all of the compromises that had marked DOOM and its imitators. Such engines weren’t entirely unheard of; the Boston studio Looking Glass Technologies had been working with them for five years, employing them in such innovative, immersive games as Ultima Underworld and System Shock. But those games were qualitatively different from DOOM and its clones: slower, more complex, more cerebral. The mainstream wanted a game that played just as quickly and violently and viscerally as DOOM, but that did it in uncompromising real 3D. With computers getting faster every year and with a genius like John Carmack to hand, it ought to be possible.
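
For the technically curious, the difference between DOOM-style 2.5D and the “real 3D” everyone was waiting for can be made concrete with a little arithmetic. A 2.5D engine only ever rotates the view about the vertical axis, which is why its walls always project to vertical screen columns; a true 3D engine must also rotate every point through the camera’s pitch before the perspective divide. The following C sketch is illustrative only — it is not id’s code — and it assumes the point is already expressed relative to the camera’s position.

```c
/* Minimal sketch of true 3D point projection, contrasted with 2.5D.
 * Yaw alone is what a DOOM-style renderer performs; the pitch step is
 * what lets a true 3D engine look up and down freely. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static int project_3d(Vec3 p, float yaw, float pitch, float fov,
                      float *sx, float *sy)
{
    /* Yaw: rotation about the vertical axis -- the only rotation a
     * 2.5D engine needs. */
    float x1 = p.x * cosf(yaw) - p.z * sinf(yaw);
    float z1 = p.x * sinf(yaw) + p.z * cosf(yaw);

    /* Pitch: rotation about the horizontal axis -- the step DOOM's
     * renderer could not take. */
    float y1 = p.y * cosf(pitch) - z1 * sinf(pitch);
    float z2 = p.y * sinf(pitch) + z1 * cosf(pitch);

    if (z2 <= 0.01f)
        return 0;                 /* point is behind the camera */
    *sx = fov * x1 / z2;          /* the perspective divide */
    *sy = fov * y1 / z2;
    return 1;
}

int main(void)
{
    Vec3 corner = { 2.0f, 1.0f, 5.0f };  /* a wall corner, camera-relative */
    float sx, sy;
    if (project_3d(corner, 0.1f, 0.2f, 320.0f, &sx, &sy))
        printf("projects to screen offset (%.1f, %.1f)\n", sx, sy);
    return 0;
}
```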

And so Carmack duly went to work on just such an engine, for a game that was to be called Quake. His ever-excitable level designer John Romero, who had the looks and personality to be the rock star gaming had been craving for years, was all in with bells on. “The next game is going to blow DOOM all to hell,” he told his legions of adoring fans. “DOOM totally sucks in comparison to our next game! Quake is going to be a bigger step over DOOM than DOOM was over Wolf 3D.” Drunk on success and adulation, he said that Quake would be more than just a game: “It will be a movement.” (Whatever that meant!) The drumbeat of excitement building outside of id almost seemed to justify his hyperbole; from all the way across the Atlantic, the British magazine PC Zone declared that the upcoming Quake would be “the most important PC game ever made.” The soundtrack alone was to be a significant milestone in the incorporation of gaming into mainstream pop culture, being the work of Trent Reznor and his enormously popular industrial-rock band Nine Inch Nails. Such a collaboration would have been unthinkable just a few years earlier.

While Romero was enjoying life as gaming’s own preeminent rock star and waiting for Carmack to get far enough along on the Quake engine to give him something to do, Carmack was living like a monk, working from 4 PM to 4 AM every day. In another sign of just how quickly id had moved up in the world, he had found himself an unexpectedly well-credentialed programming partner. Michael Abrash was one of the establishment’s star programmers, who had written a ton of magazine articles and two highly regarded technical tomes on assembly-language and graphics programming and was now a part of Microsoft’s Windows NT team. When Carmack, who had cut his teeth on Abrash’s writings, invited him out of the blue to come to Dallas and do Quake with him, Bill Gates himself tried to dissuade his employee. “You might not like it down there,” he warned. Abrash was, after all, pushing 40, a staid sort with an almost academic demeanor, while id was a nest of hyperactive arrested adolescence on a permanent sugar high. But he went anyway, because he was pretty sure Carmack was a genius, and because Carmack seemed to Abrash a bit lonely, working all night every night with only his computer for company. Abrash thought he saw in Quake a first glimmer of a new form of virtual existence that companies like Meta are still chasing eagerly today: “a pretty complicated, online, networked universe,” all in glorious embodied 3D. “We do Quake, other companies do other games, people start building worlds with our format and engine and tools, and these worlds can be glommed together via doorways from one to another. To me this sounds like a recipe for the first real cyberspace, which I believe will happen the way a real space station or habitat probably would — by accretion.”

He may not have come down if he had known precisely what he was getting into; he would later compare making Quake to “being strapped onto a rocket during takeoff in the middle of a hurricane.” The project proved a tumultuous, exhausting struggle that very nearly broke id as a cohesive company, even as the money from DOOM was continuing to roll in. (id’s annual revenues reached $15.6 million in 1995, a very impressive figure for what was still a relatively tiny company, with a staff numbering only a few dozen.)

Romero envisioned a game that would be as innovative in its gameplay as in its technology, built largely around sword-fighting and other forms of hand-to-hand combat rather than gun play — the same style of combat that LucasArts had decided was too impractical for Dark Forces. Some of his early descriptions make Quake sound more like a full-fledged CRPG in the offing than another straightforward action game. But it just wouldn’t come together. Some of Romero’s colleagues attributed this to his failure to communicate his expectations to them; they came to suspect that even he wasn’t quite sure what he was trying to make.

Carmack finally stepped in and ordered his design team to make Quake essentially a more graphically impressive DOOM. Romero accepted the decision outwardly, but seethed inwardly at this breach of longstanding id etiquette; Carmack had always made the engines, then given Romero free rein to turn them into games. Romero largely checked out, opening a door that ambitious newcomers like American McGee and Tim Willits, who had come up through the thriving DOOM modding community, didn’t hesitate to push through. The offices of id had always been as hyper-competitive as a DOOM deathmatch, but now the atmosphere was becoming a toxic stew of buried resentments.

In a misguided attempt to fix the bad vibes, Carmack, whose understanding of human nature was as shallow as his understanding of computer graphics was deep, announced one day that he had ordered a construction crew in to knock down all of the walls, so that everybody could work together from a single “war room.” One for all and all for one, and all that. The offices of the most profitable games studio in the world were transformed into a dystopian setting perfect for a DOOM clone, as described by a wide-eyed reporter from Wired magazine who came for a visit: “a maze of drywall and plastic sheeting, with plaster dust everywhere, loose acoustic tiles, and cables dangling from the ceiling. Almost every item not directly related to the completion of Quake was gone. The only privacy to be found was between the padded earpieces of headphones.”

Wired magazine’s August 1996 cover, showing John Carmack flanked by John Romero and Adrian Carmack, marked the end of an era. By the time it appeared on newsstands, Romero had already been fired.

Needless to say, it didn’t have the effect Carmack had hoped for. In his book-length history of id’s early life and times, journalist David Kushner paints a jittery, unnerving picture of the final months of Quake's development: they “became a blur of silent and intense all-nighters, punctuated by the occasional crash of a keyboard against a wall. The construction crew had turned the office into a heap. The guys were taking their frustrations out by hurling computer parts into the drywall like knives.” Michael Abrash is more succinct: “A month before shipping, we were sick to death of working on Quake.” And level designer Sandy Petersen, the old man of the group, who did his best to keep his head down and stay out of the intra-office cold war, is even more so: “[Quake] was not fun to do.”

Quake was finally finished in June of 1996. It would prove a transitional game in more ways than one, caught between where games had recently been and where they were going. Still staying true to that odd spirit of hacker idealism that coexisted with his lust for ever faster Ferraris, Carmack insisted that Quake be made available as shareware, so that people could try it out before plunking down its full price. The game accordingly got a confusing, staggered release, much to the chagrin of its official publisher GT Interactive. To kick things off, the first eight levels went up online. Shortly after, there appeared in stores a $10 CD of the full game that had to be unlocked by paying id an additional $50 in order to play beyond the eighth level. Only after that, in August of 1996, did the game appear in a conventional retail edition.

Predictably enough, it all turned into a bit of a fiasco. Crackers quickly reverse-engineered the algorithms used for generating the unlocking codes, which were markedly less sophisticated than the ones used to generate the 3D graphics on the disc. As a result, hundreds of thousands of people were able to get the entirety of the most hotly anticipated game of the year for $10. Meanwhile even many of those unwilling or unable to crack their shareware copies decided that eight levels was enough for them, especially given that the unregistered version could be used for multiplayer deathmatches. Carmack’s misplaced idealism cost id and GT Interactive millions, poisoning relations between them; the two companies soon parted ways.
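
To see why such schemes collapse so quickly, consider a purely hypothetical unlock check of the sort a cracker might find by disassembling an executable. Nothing below reflects the actual algorithm id used, which was never published; it merely demonstrates that any validation test shipped inside a program can be run backward into a “keygen.”

```c
/* A deliberately flimsy, invented stand-in for a client-side unlock
 * check: the code's digits must sum to a multiple of seven. */
#include <stdio.h>

static int code_is_valid(unsigned code)
{
    unsigned sum = 0;
    for (unsigned c = code; c > 0; c /= 10)
        sum += c % 10;
    return sum % 7 == 0;
}

int main(void)
{
    /* Once the test is known, generating valid codes is a trivial
     * brute-force search -- the essence of every keygen. */
    int found = 0;
    for (unsigned code = 100000; code <= 999999 && found < 5; code++) {
        if (code_is_valid(code)) {
            printf("valid code: %u\n", code);
            found++;
        }
    }
    return 0;
}
```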

So, the era of shareware as an underground pipeline of cutting-edge games came to an end with Quake. From now on, id would concentrate on boxed games selling for full price, as would all of their fellow survivors from that wild and woolly time. Gaming’s underground had become its establishment.

But its distribution model wasn’t the only sense in which Quake was as much a throwback as a step forward. It also held fast to Carmack’s indifference to the fictional context of id’s games, as illustrated by his famous claim that the story behind a game was no more important than the story behind a porn movie. It would be blatantly incorrect to claim that the DOOM clones which flooded the market between 1994 and 1996 represented some great explosion of interactive narrative’s potential, but they had begun to show some interest, if not precisely in elaborate set-piece storytelling in the way of adventure games, at least in the appeal of setting and texture. Dark Forces had been a pioneer in this respect, what with its between-levels cut scenes, its relatively fleshed-out main character, and most of all its environments that really did look and feel like the Star Wars films, from their brutalist architecture to John Williams’s unmistakable score. Even Duke Nukem 3D had the character of Duke, plus a distinctively seedy, neon-soaked post-apocalyptic Los Angeles for him to run around in. No one would accuse it of being an overly mature aesthetic vision, but it certainly was a unified one.

Quake, on the other hand, displayed all the signs of its fractious process of creation, of half a dozen wayward designers all pulling in different directions. From a central hub, you took “slipgates” into alternate dimensions that contained a little bit of everything on the designers’ not-overly-discriminating pop-culture radar, from zombie flicks to Dungeons & Dragons, from Jaws to H.P. Lovecraft, from The Terminator to heavy-metal music, and so wound up not making much of a distinct impression at all.

Most creative works are stamped with the mood of the people who created them, no matter how hard the project managers try to separate the art from the artists. With its color palette dominated by shocks of orange and red, DOOM had almost literally burst off the monitor screen with the edgy joie de vivre of a group of young men whom nobody had expected to amount to much of anything, who suddenly found themselves on the verge of remaking the business of games in their own unkempt image. Quake felt tired by contrast. Even its attempts to blow past the barriers of good taste seemed more obligatory than inspired; the Satanic symbolism, elaborate torture devices, severed heads, and other forms of gore were outdone by other games that were already pushing the envelope even further. This game felt almost somber — not an emotion anyone had ever before associated with id. Its levels were slower and emptier than those of DOOM, with a color palette full of mournful browns and other earth tones. Even the much-vaunted soundtrack wound up rather underwhelming. It was bereft of the melodic hooks that had made Nine Inch Nails’s previous output more palatable for radio listeners than that of most other “extreme” bands; it was more an exercise in sound design than music composition. One couldn’t help but suspect that Trent Reznor had held back all of his good material for his band’s next real record.

At its worst, Quake felt like a tech demo waiting for someone to turn it into an actual game, proving that John Carmack needed John Romero as badly as Romero needed him. But that once-fruitful relationship was never to be rehabilitated: Carmack fired Romero within days of finishing Quake. The two would never work together again.

It was truly the end of an era at id. Sandy Petersen was soon let go as well, Michael Abrash went back to the comfortable bosom of Microsoft, and Jay Wilbur quit for the best of all possible reasons: because his son asked him, “How come all the other daddies go to the baseball games and you never do?” All of them left as exhausted as Quake looks and feels.

Of course, there was nary a hint of Quake's infelicities to be found in the press coverage that greeted its release. Even more so than most media industries, the games industry has always run on enthusiasm, and it had no desire at this particular juncture to eat its own by pointing out the flaws in the most important PC game ever made. The coverage in the magazines was marked by a cloying fan-boy fawning that was becoming ever more sadly prominent in gamer culture. “We are not even worthy to lick your toenails free of grit and fluffy sock detritus,” PC Zone wrote in an open letter to id. “We genuflect deeply and offer our bare chests for you to stab with a pair of scissors.” (Eww! A sense of proportion is as badly lacking as a sense of self-respect…) Even the usually sober-minded (by gaming-journalism standards) Computer Gaming World got a little bit creepy: “Describing Quake is like talking about sex. It must be experienced to be fully appreciated.”

Still, I would be a poor historian indeed if I called all the hyperbole of 1996 entirely unjustified. The fact is that the passage of time has tended to emphasize Quake's weaknesses, which are mostly in the realm of design and aesthetics, whilst obscuring its contemporary strengths, which were in the realm of technology. Although not quite the first game to graft a true 3D engine onto ultra-fast-action gameplay — Interplay’s Descent beat it to the market by more than a year — it certainly did so more flexibly and credibly than anything else to date, even if Carmack still wasn’t above cheating a bit when push came to shove. (By no means is the Quake engine entirely free of tricksy 2D sprites in places where proper 3D models are just too expensive to render.)
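
The sprite fallback just mentioned is usually implemented as “billboarding”: the flat image is mapped onto a quad that swivels each frame to face the camera, so the viewer never catches it edge-on. The sketch below is a generic illustration of the technique, not Quake source; the vector helpers and the upright-sprite assumption are mine.

```c
/* Build the four corners of a camera-facing quad around `center`.
 * For an upright sprite, "up" stays world-up and only the right
 * vector swings around with the camera. Assumes the camera is not
 * directly above or below the sprite. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 add(Vec3 a, Vec3 b)    { return (Vec3){ a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)    { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 scale(Vec3 v, float s) { return (Vec3){ v.x*s, v.y*s, v.z*s }; }

static void billboard(Vec3 center, Vec3 camera, float w, float h,
                      Vec3 corners[4])
{
    Vec3 to_cam = sub(camera, center);
    /* Right vector: perpendicular to the camera direction within the
     * horizontal plane, normalized. */
    float len  = sqrtf(to_cam.x * to_cam.x + to_cam.z * to_cam.z);
    Vec3 right = { -to_cam.z / len, 0.0f, to_cam.x / len };
    Vec3 up    = { 0.0f, 1.0f, 0.0f };

    corners[0] = add(center, add(scale(right, -w/2), scale(up,  h/2)));
    corners[1] = add(center, add(scale(right,  w/2), scale(up,  h/2)));
    corners[2] = add(center, add(scale(right,  w/2), scale(up, -h/2)));
    corners[3] = add(center, add(scale(right, -w/2), scale(up, -h/2)));
}

int main(void)
{
    Vec3 quad[4];
    billboard((Vec3){0, 1, 0}, (Vec3){5, 1, 5}, 2.0f, 3.0f, quad);
    for (int i = 0; i < 4; i++)
        printf("corner %d: (%.2f, %.2f, %.2f)\n",
               i, quad[i].x, quad[i].y, quad[i].z);
    return 0;
}
```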

Nevertheless, it’s difficult to fully convey today just how revolutionary the granular details of Quake seemed in 1996: the way you could look up and down and all around you with complete freedom; the way its physics engine made guns kick so that you could almost feel it in your mouse hand; the way you could dive into water and experience the visceral sensation of actually swimming; the way the wood paneling of its walls glinted realistically under the overhead lighting. Such things are commonplace today, but Quake paved the way. Most of the complaints I’ve raised about it could be mitigated by the simple expedient of not even bothering with the lackluster single-player campaign, of just playing it with your mates in deathmatch.
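
That realistic glinting came from Quake’s precomputed light maps, a technique Michael Abrash walked through in his “Ramblings in Realtime” columns: each surface carries a coarse grid of light levels that the renderer interpolates and multiplies into the texture. Here is a speculative sketch of the core idea; the sampling granularity and the function names are illustrative, not taken from the engine.

```c
/* Light-map shading sketch: one precomputed light sample per block of
 * texels, bilinearly interpolated, then multiplied into the texture. */
#include <stdint.h>

#define LM_SCALE 16  /* one light sample per 16x16 block of texels */

/* Interpolate the coarse lightmap at texel coords (u, v). Assumes
 * (u, v) lies at least one sample inside the map's far edges. */
static float sample_light(const float *lightmap, int lm_width,
                          float u, float v)
{
    float lu = u / LM_SCALE, lv = v / LM_SCALE;
    int   ix = (int)lu,      iy = (int)lv;
    float fx = lu - ix,      fy = lv - iy;
    const float *row0 = lightmap + iy * lm_width;
    const float *row1 = row0 + lm_width;
    float top = row0[ix] * (1.0f - fx) + row0[ix + 1] * fx;
    float bot = row1[ix] * (1.0f - fx) + row1[ix + 1] * fx;
    return top * (1.0f - fy) + bot * fy;
}

/* Shade one texel: scale the texture sample by the local light level,
 * clamping to the displayable range. */
static uint8_t shade_texel(uint8_t texel, const float *lightmap,
                           int lm_width, float u, float v)
{
    float lit = (float)texel * sample_light(lightmap, lm_width, u, v);
    return lit > 255.0f ? 255 : (uint8_t)lit;
}
```

In the engine itself, as Abrash described, the lit result was cached per surface rather than recomputed for every pixel on every frame, which is much of what made the approach fast enough for software rendering in 1996.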

But even if you preferred to play alone, Quake was a sign of better things to come. “It goes beyond the game and more into the engine and the possibilities,” says Rob Smith, who watched the Quake mania come and go as the editor of PC Gamer magazine. “Quake presented options to countless designers. The game itself doesn’t make many ‘all-time’ lists, but its impact [was] as a game changer for 3D gaming, [an] engine that allowed other game makers to express themselves.” For with the industry’s Master of 3D Algorithms John Carmack having shown what was possible and talking as freely as ever about how he had achieved it, with Michael Abrash soon to write an entire book about how he and Carmack had made the magic happen, more games of this type, ready and able to harness the technology of true 3D to more exciting designs, couldn’t be far behind. “We’ve pretty much decided that our niche is in first-person futuristic action games,” said John Carmack. “We stumble when we get away from the techno stuff.” The industry was settling into a model that would remain in place for years to come: id would show what was possible with the technology of 3D graphics, then leave it to other developers to bend it in more interesting directions.

Soon enough, then, titles like Jedi Knight and Half-Life would push the genre once known as DOOM clones, now trading under the more sustainable name of the first-person shooter, in more sophisticated directions in terms of storytelling and atmosphere, without losing the essence of what made their progenitors so much fun. They will doubtless feature in future articles.

Next time, however, I want to continue to focus on the technology, as we turn to another way in which Quake was a rough draft for a better gaming future: months after its initial release, it became one of the first games to display the potential of hardware acceleration for 3D graphics, marking the beginning of a whole new segment of the microcomputer industry, one worth many billions of dollars today.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books Rocket Jump: Quake and the Golden Age of First-Person Shooters by David L. Craddock, The Graphics Programming Black Book by Michael Abrash, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, Dungeons and Dreamers: The Rise of Computer Game Culture from Geek to Chic by Brad King and John Borland, Principles of Three-Dimensional Computer Animation by Michael O’Rourke, and Computer Graphics from Scratch: A Programmer’s Introduction by Gabriel Gambetta. PC Zone of May 1996; Computer Gaming World of July 1996 and October 1996; Wired of August 1996 and January 2010. Online sources include Michael Abrash’s “Ramblings in Realtime” for Blue’s News.

Quake is available as a digital purchase at GOG.com, as is Star Wars: Dark Forces. Duke Nukem 3D can be found on Steam.)
