
The Next Generation in Graphics, Part 3: Software Meets Hardware

The first finished devices to ship with the 3Dfx Voodoo chipset inside them were not add-on boards for personal computers, but rather standup arcade machines. That venerable segment of the videogames industry was enjoying its last lease on life in the mid-1990s; this was the last era when the graphics of the arcade machines were sufficiently better than those which home computers and consoles could generate as to make it worth getting up off the couch, driving into town, and dropping a quarter or two into a slot to see them. The Voodoo chips now became part and parcel of that, ironically just before they would do much to destroy the arcade market by bringing equally high-quality 3D graphics into homes. For now, though, they wowed players of arcade games like San Francisco Rush: Extreme Racing, Wayne Gretzky’s 3D Hockey, and NFL Blitz.

Still, Gary Tarolli, Scott Sellers, and Ross Smith were most excited by the potential of the add-on-board market. All too well aware of how the chicken-or-the-egg deadlock between game makers and players had doomed their earlier efforts with Pellucid and Media Vision, they launched an all-out charm offensive among game developers long before they had any actual hardware to show them. Smith goes so far as to call “connecting with the developers early on and evangelizing them” the “single most important thing we ever did” — more important, that is to say, than designing the Voodoo chips themselves, impressive as they were. Throughout 1995, somebody from 3Dfx was guaranteed to be present wherever developers got together to talk among themselves. While these evangelizers had no hardware as yet, they did have software simulations running on SGI workstations — simulations which, they promised, duplicated exactly the capabilities the real chips would have when they started arriving in quantity from Taiwan.

Our core trio realized early on that their task must involve software as much as hardware in another, more enduring sense: they had to make it as easy as possible to support the Voodoo chipset. In my previous article, I mentioned how their old employer SGI had created an open software library for 3D graphics, known as OpenGL. A team of programmers from 3Dfx now took this as the starting point of a slimmed-down, ultra-optimized MS-DOS library they called GLide; whereas OpenGL sported well over 300 individual function calls, GLide had fewer than 100. It was fast, it was lightweight, and it was easy to program. They had good reason to be proud of it. Its only drawback was that it would only work with the Voodoo chips — which was not necessarily a drawback at all in the eyes of its creators, given that they hoped and planned to dominate a thriving future market for hardware-accelerated 3D graphics on personal computers.

Yet that domination was by no means assured, for they were far from the only ones developing consumer-oriented 3D chipsets. One other company in particular gave every indication of being on the inside track to widespread acceptance. That company was Rendition, another small, venture-capital-funded startup that was doing all of the same things 3Dfx was doing — only Rendition had gotten started even earlier. It had actually been Rendition who announced a 3D chipset first, and they had been evangelizing it ever since every bit as tirelessly as 3Dfx.

The Voodoo chipset was more technologically ambitious than Rendition’s chips, which went under the name of Vérité. This meant that Voodoo should easily outperform them — eventually, once all of the logistics of East Asian chip fabricating had been dealt with and deals had been signed with board makers. In June of 1996, when the first Vérité-powered boards shipped, the Voodoo chipset quite literally didn’t exist as far as consumers were concerned. Those first Vérité boards were made by none other than Creative Labs, the 800-pound gorilla of the home-computer add-on market, maker of the ubiquitous Sound Blaster sound cards and many a “multimedia upgrade kit.” Such a partner must be counted as yet another early coup for Rendition.

The Vérité cards were followed by a flood of others whose slickly aggressive names belied their somewhat workmanlike designs: 3D Labs Permedia, S3 Virge, ATI 3D Rage, Matrox Mystique. And still Voodoo was nowhere.

What was everywhere was confusion; it was all but impossible for the poor, benighted gamer to make heads or tails of the situation. None of these chipsets were compatible with one another at the hardware level in the way that 2D graphics cards were; there were no hardware standards for 3D graphics akin to VGA, that last legacy of IBM’s era of dominance, much less the various SVGA standards defined by the Video Electronics Standards Association (VESA). Given that most action-oriented computer games still ran on MS-DOS, this was a serious problem.

For, being more of a collection of basic function calls than a proper operating system, MS-DOS was not known for its hardware agnosticism. Most of the folks making 3D chips did provide an MS-DOS software package for steering them, similar in concept to 3Dfx’s GLide, if seldom as optimized and elegant. But, just like GLide, such libraries worked only with the chipset for which they had been created. What was sorely needed was an intermediate layer of software to sit between games and the chipset-manufacturer-provided libraries, to automatically translate generic function calls into forms suitable for whatever particular chipset happened to exist on that particular computer. This alone could make it possible for one build of one game to run on multiple 3D chipsets. Yet such a level of hardware abstraction was far beyond the capabilities of bare-bones MS-DOS.
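
To make the idea concrete, here is a tiny, purely illustrative sketch in C of what such an intermediate layer might have looked like. Every name in it is invented; the point is only the shape of the thing: the game calls a single generic drawing routine, and a table of per-chipset back-ends quietly routes that call to whichever vendor library is actually installed.

```c
/* A hypothetical hardware-abstraction layer, sketched for illustration only.
   Each back-end would wrap one vendor's proprietary MS-DOS library;
   the game itself only ever sees the generic draw_triangle() call. */
#include <stdio.h>

typedef struct {
    const char *name;
    void (*draw_triangle)(float x0, float y0, float x1, float y1,
                          float x2, float y2);
} Driver3D;

static void voodoo_draw(float x0, float y0, float x1, float y1,
                        float x2, float y2)
{
    /* In reality this would call into the Voodoo vendor library. */
    printf("Voodoo back-end: triangle (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           x0, y0, x1, y1, x2, y2);
}

static void verite_draw(float x0, float y0, float x1, float y1,
                        float x2, float y2)
{
    /* ...and this one into the Verite vendor library. */
    printf("Verite back-end: triangle (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           x0, y0, x1, y1, x2, y2);
}

static Driver3D drivers[] = {
    { "voodoo", voodoo_draw },
    { "verite", verite_draw },
};

int main(void)
{
    /* A real layer would probe the installed hardware; here we just pick one. */
    Driver3D *active = &drivers[0];

    /* One build of one game, the same call regardless of chipset. */
    active->draw_triangle(0, 0, 100, 0, 50, 80);
    return 0;
}
```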

Absent a more reasonable solution, the only choice was to make separate versions of games for each of the various 3D chipsets. And so began the brief-lived, unlamented era of the 3D pack-in game. All of the 3D-hardware manufacturers courted the developers and publishers of popular software-rendered 3D games, dangling before them all sorts of enticements to create special versions that took advantage of their cards, more often than not to be included right in the box with them. Activision’s hugely successful giant-robot-fighting game MechWarrior 2 became the king of the pack-ins, with at least half a dozen different chipset-specific versions floating around, all paid for upfront by the board makers in cold, hard cash. (Whatever else can be said about him, Bobby Kotick has always been able to spot the seams in the gaming market where gold is waiting to be mined.)

It was an absurd, untenable situation; the game or games that came in the box were the only ones that the purchasers of some of the also-ran 3D contenders ever got a chance to play with their new toys. Gamers and chipset makers alike could only hope that, once Windows replaced MS-DOS as the gaming standard, their pain would go away.

In the meanwhile, the games studio that everyone with an interest in the 3D-acceleration sweepstakes was courting most of all was id Software — more specifically, id’s founder and tech guru, gaming’s anointed Master of 3D Algorithms, John Carmack. They all begged him for a version of Quake for their chipset.

And once again, it was Rendition that scored the early coup here. Carmack actually shared some of the Quake source code with them well before either the finished game or the finished Vérité chipset was available for purchase. Programmed by a pair of Rendition’s own staffers working with the advice and support of Carmack and Michael Abrash, the Vérité-rendered version of the game, commonly known as vQuake, came out very shortly after the software-rendered version. Carmack called it “the premier platform for Quake” — truly marketing copy to die for. Gamers too agreed that 3D acceleration made the original’s amazing graphics that much more amazing, while the makers of other 3D chipsets gnashed their teeth and seethed.

Quake with software rendering.

vQuake

Among these, of course, was the tardy 3Dfx. The first Voodoo cards appeared late, seemingly hopelessly so: well into the fall of 1996. Nor did they have the prestige and distribution muscle of a partner like Creative Labs behind them: the first two Voodoo boards rather came from smaller firms by the names of Diamond and Orchid. They sold for $300, putting them well up at the pricey end of the market —  and, unlike all of the competition’s cards, these required you to have another, 2D-graphics card in your computer as well. For all of these reasons, they seemed easy enough to dismiss as overpriced white elephants at first blush. But that impression lasted only until you got a look at them in action. The Voodoo cards came complete with a list of features that none of the competition could come close to matching in the aggregate: bilinear filtering, trilinear MIP-mapping, alpha blending, fog effects, accelerated light sources. If you don’t know what those terms mean, rest assured that they made games look better and play faster than anything else on the market. This was amply demonstrated by those first Voodoo boards’ pack-in title, an otherwise rather undistinguished, typical-of-its-time shooter called Hellbender. In its new incarnation, it suddenly looked stunning.

The Orchid Righteous 3D card, one of the first two to use the Voodoo chipset. (The only consumer category as fond of bro-dude phraseology like “extreme” and “righteous” as the makers of 3D cards was men’s razors.)

The battle lines were drawn between Rendition and 3Dfx. But sadly for the former, it quickly emerged that their chipset had one especially devastating weakness in comparison to its rival: its Z-buffering support left much to be desired. And what, you ask, is Z-buffering? Read on!

One of the non-obvious problems that 3D-graphics systems must solve is the need for objects in the foreground of a scene to realistically obscure those behind them. If, at the rendering stage, we were to simply draw the objects in whatever random order they came to us, we would wind up with a dog’s breakfast of overlapping shapes. We need to have a way of depth-sorting the objects if we want to end up with a coherent, correctly rendered scene.

The most straightforward way of depth-sorting is called the Painter’s Algorithm, because it duplicates the process a human artist usually goes through to paint a picture. Let’s say our artist wants to paint a still life of an apple sitting in front of a basket of other fruits. First she will paint the basket to her satisfaction, then paint the apple right over the top of it. Similarly, when we use a Painter’s Algorithm on the computer, we first sort the whole collection of objects into a hierarchy that begins with those that are farthest from our virtual camera and ends with those closest to it. Only after this has been done do we set about the task of actually drawing them to the screen, in our sorted order from the farthest away to the closest. And so we end up with a correctly rendered image.
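
For the programmers in the audience, here is a toy version of the Painter’s Algorithm in C; the Object type and draw_object() routine are stand-ins invented for the example, not any real renderer’s API. The essential moves are just the sort and the back-to-front loop.

```c
/* The Painter's Algorithm in miniature: sort the scene by distance from
   the camera, then draw from farthest to nearest so that closer objects
   overwrite the ones behind them. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    float depth;               /* distance from the virtual camera */
} Object;

static void draw_object(const Object *o)
{
    printf("drawing %s (depth %.1f)\n", o->name, o->depth);
}

/* qsort comparator: farthest objects sort first. */
static int farther_first(const void *a, const void *b)
{
    float da = ((const Object *)a)->depth;
    float db = ((const Object *)b)->depth;
    return (da < db) - (da > db);
}

int main(void)
{
    Object scene[] = {
        { "apple",  2.0f },    /* closest to the camera */
        { "basket", 5.0f },
        { "table",  8.0f },    /* farthest away */
    };
    size_t n = sizeof scene / sizeof scene[0];

    /* The costly step: nothing can be drawn until the whole scene is sorted. */
    qsort(scene, n, sizeof scene[0], farther_first);

    for (size_t i = 0; i < n; i++)
        draw_object(&scene[i]);   /* table, then basket, then apple on top */
    return 0;
}
```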

But, as so often happens in matters like this, the most logically straightforward way is far from the most efficient way of depth-sorting a 3D scene. When the number of objects involved is few, the Painter’s Algorithm works reasonably well. When the numbers get into the hundreds or thousands, however, it results in much wasted effort, as the computer ends up drawing objects that are completely obscured by other objects in front of them — i.e., objects that don’t really need to be drawn at all. Even more importantly, the process of sorting all of the objects by depth beforehand is painfully time-consuming, a speed bump that stops the rendering process dead until it is completed. Even in the 1990s, when their technology was in a laughably primitive stage compared to today, GPUs tended to emphasize parallel processing — i.e., staying constantly busy with multiple tasks at the same time. The necessity of sorting every object in a scene by depth before even getting properly started on rendering it rather threw all that out the window.

Enter the Z-buffer. Under this approach, every object is rendered right away as soon as it comes down the pipeline, used to build the appropriate part of the raster of colored pixels that, once completed, will be sent to the monitor screen as a single frame. But there comes an additional wrinkle in the form of the Z-buffer itself: a separate, parallel raster containing not the color of each pixel but its distance from the camera. Before the GPU adds an entry to the raster of pixel colors, it compares the distance of that pixel from the camera with the number in that location in the Z-buffer. If the current distance is less than the one already found there, it knows that the pixel in question should be overwritten in the main raster and that the Z-buffer raster should be updated with that pixel’s new distance from the camera. Ditto if the Z-buffer contains a null value, indicating no object has yet been drawn at that pixel. But if the current distance is larger than the (non-null) number already found there, the GPU simply moves on without doing anything more, confident in the knowledge that what it had wanted to draw should actually be hidden by what it has already drawn.
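
And here, purely as an illustration, is that per-pixel test written out in C, with invented names and plain arrays standing in for the dedicated video memory a real card would use.

```c
/* A minimal Z-buffered pixel write. color[] is the frame being assembled;
   zbuf[] remembers, for every pixel, how close to the camera the thing
   already drawn there was. */
#include <float.h>
#include <stdio.h>

#define WIDTH  640
#define HEIGHT 480

static unsigned int color[WIDTH * HEIGHT];   /* the raster sent to the screen */
static float        zbuf[WIDTH * HEIGHT];    /* one depth value per pixel */

static void clear_buffers(void)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++) {
        color[i] = 0;            /* background color */
        zbuf[i]  = FLT_MAX;      /* "nothing drawn here yet" */
    }
}

/* Called for every pixel of every object, in whatever order they arrive. */
static void plot_pixel(int x, int y, unsigned int rgb, float depth)
{
    int i = y * WIDTH + x;
    if (depth < zbuf[i]) {       /* closer than whatever is already there? */
        color[i] = rgb;          /* yes: overwrite the color...            */
        zbuf[i]  = depth;        /* ...and remember the new, nearer depth  */
    }
    /* otherwise the new pixel would be hidden, so we simply move on */
}

int main(void)
{
    clear_buffers();
    plot_pixel(10, 10, 0xFF0000, 5.0f);   /* a red pixel, fairly far away      */
    plot_pixel(10, 10, 0x00FF00, 2.0f);   /* a nearer green pixel wins         */
    plot_pixel(10, 10, 0x0000FF, 9.0f);   /* a farther blue pixel is discarded */
    printf("pixel (10,10) = %06X at depth %.1f\n",
           color[10 * WIDTH + 10], zbuf[10 * WIDTH + 10]);
    return 0;
}
```

Note what the scheme costs, though: a whole second raster of depth values sitting alongside the frame itself, one reason these early 3D cards were so hungry for video memory.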

There are plenty of occasions when the same pixel is drawn over twice — or many times — before reaching the screen even under this scheme, but it is nevertheless still vastly more efficient than the Painter’s Algorithm, because it keeps objects flowing through the pipeline steadily, with no hiccups caused by lengthy sorting operations. Z-buffering support was reportedly a last-minute addition to the Vérité chipset, and it showed. Turning depth-sorting on for 100-percent realistic rendering on these chips cut their throughput almost in half; the Voodoo chipset, by contrast, just said, “No worries!,” and kept right on trucking. This was an advantage of titanic proportions. It eventually emerged that the programmers at Rendition had been able to get Quake running acceptably on the Vérité chips only by kludging together their own depth-sorting algorithms in software. With Voodoo, programmers wouldn’t have to waste time with stuff like that.

But surprisingly, the game that blew open the doors for the Voodoo chipset wasn’t Quake or anything else from id. It was rather a little something called Tomb Raider, from the British studio Core Design, a game which used a behind-the-back third-person perspective rather than the more typical first-person view — the better to appreciate its protagonist, the buxom and acrobatic female archaeologist Lara Croft. In addition to Lara’s considerable assets, Tomb Raider attracted gamers with its unprecedentedly huge and wide-open 3D environments. (It will be the subject of my next article, for those interested in reading more about its massive commercial profile and somewhat controversial legacy.)

In November of 1996, when Tomb Raider had been out for less than a month, Core put a Voodoo patch for it up on their website. Gamers were blown away. “It’s a totally new game!” gushed one on Usenet. “It was playable but a little jerky without the patch, but silky smooth to play and beautiful to look at with the patch.” “The level of detail you get with the Voodoo chip is amazing!” enthused another. Or how about this for a ringing testimonial?

I had been playing the regular Tomb Raider on my PC for about two weeks before I got the patch, with about ten people seeing the game, and not really saying anything regarding how amazing it was. When I got the accelerated patch, after about four days, every single person who has seen the game has been in awe watching the graphics and how smooth [and] lifelike the movement is. The feel is different, you can see things much more clearly, it’s just a more enjoyable game now.

Tomb Raider became the biggest hit of the 1996 holiday season, and tens if not hundreds of thousands of Voodoo-based 3D cards joined it under Christmas trees.

Tomb Raider with software rendering.

Tomb Raider with a Voodoo card.

In January of 1997, id released GLQuake, a new version of that game that supported the Voodoo chipset. In telling contrast to the Vérité-powered vQuake, which had been coded by Rendition’s programmers, GLQuake had been taken on by John Carmack as a personal project. The proof was in the pudding; this Quake ran faster and looked better than either of the previous ones. Running on a machine with a 200 MHz Intel Pentium processor and a Voodoo card, GLQuake could manage 70 frames per second, compared to 41 frames for the software-rendered version, whilst appearing much more realistic and less pixelated.

GLQuake

One last stroke of luck put the finishing touch on 3Dfx’s destiny of world domination: the price of memory dropped precipitously, thanks to a number of new RAM-chip factories that came online all at once in East Asia. (The factories had been built largely to feed the memory demands of Windows 95, the straw that was stirring the drink of the entire computer industry.) The Voodoo chipset required 4 MB of memory to operate effectively — an appreciable quantity in those days, and a big reason why the cards that used it tended to cost almost twice as much as those based on the Vérité chips, despite lacking the added complications and expense of 2D support. But with the drop in memory prices, it suddenly became practical to sell a Voodoo card for under $200. Rendition could also lower their prices somewhat thanks to the memory windfall, of course, but at these lower price points the dollar difference wasn’t as damaging to 3Dfx. After all, the Voodoo cards were universally acknowledged to be the class of the industry. They were surely worth paying a little bit of a premium for. By the middle of 1997, the Voodoo chipset was everywhere, the Vérité one left dead at the side of the road. “If you want full support for a gamut of games, you need to get a 3Dfx card,” wrote Computer Gaming World.

These were heady times at 3Dfx, which had become almost overnight the most hallowed name in hardcore action gaming outside of id Software, all whilst making an order of magnitude more money than id, whose business model under John Carmack was hardly fine-tuned to maximize revenues. In a comment he left recently on this site, reader Captain Kal said that, when it comes to 3D gaming in the late 1990s, “one company springs to my mind without even thinking: 3Dfx. Yes, we also had 3D solutions from ATI, NVIDIA, or even S3, but Voodoo cards created the kind of dedication that I hadn’t seen since the Amiga days.” The comparison strikes me as thoroughly apropos.

3Dfx brought in a high-profile CEO named Greg Ballard, formerly of Warner Music and the videogame giant Capcom, to oversee a smashingly successful initial public offering in June of 1997. He and the three thirty-something founders were the oldest people at the company. “Most of the software engineers were [in their] early twenties, gamers through and through, loved games,” says Scott Sellers. “Would code during the day and play games at night. It was a culture of fun.” Their offices stood at the eighth hole of a golf course in Sunnyvale, California. “We’d sit out there and drink beer,” says Ross Smith. “And you’d have to dodge incoming golf balls a bit. But the culture was great.” Every time he came down for a visit, says their investing angel Gordon Campbell,

they’d show you something new, a new demo, a new mapping technique. There was always something. It was a very creative environment. The work hard and play hard thing, that to me kind of was Silicon Valley. You went out and socialized with your crew and had beer fests and did all that kind of stuff. And a friendly environment where everybody knew everybody and everybody was not in a hierarchy so much as part of the group or the team.

I think the thing that was added here was, it’s the gaming industry. And that was a whole new twist on it. I mean, if you go to the trade shows, you’d have guys that would show up at our booth with Dracula capes and pointed teeth. I mean, it was just crazy.

Gary Tarolli, Scott Sellers, and Greg Ballard do battle with a dangerous houseplant. The 1990s were wild and crazy times, kids…

While the folks at 3Dfx were working hard and playing hard, an enormously consequential advancement in the field of software was on the verge of transforming the computer-games industry. As I noted previously, in 1996 most hardcore action games were still being released for MS-DOS. In 1997, however, that changed in a big way. With the exception of only a few straggling Luddites, game developers switched over to Windows 95 en masse. Quake had been an MS-DOS game; Quake II, which would ship at the end of 1997, ran under Windows. The same held true for the original Tomb Raider and its 1997 sequel, as it did for countless others.

Gaming was made possible on Windows 95 by Microsoft’s DirectX libraries, which finally let programmers do everything in Windows that they had once done in MS-DOS, with only a slight speed penalty if any, all while giving them the welcome luxury of hardware independence. That is to say, all of the fiddly details of disparate video and sound cards and all the rest were abstracted away into Windows device drivers that communicated automatically with DirectX to do the needful. It was an enormous burden lifted off of developers’ shoulders. Ditto gamers, who no longer had to futz about for hours with cryptic “autoexec.bat” and “config.sys” files, searching out the exact combination of arcane incantations that would allow each game they bought to run optimally on their precise machine. One no longer needed to be a tech-head simply to install a game.

In its original release of September 1995, the full DirectX suite consisted of DirectDraw for 2D pixel graphics, DirectSound for sound and music, DirectInput for managing joysticks and other game-centric input devices, and DirectPlay for networked multiplayer gaming. It provided no support for doing 3D graphics. But never fear, Microsoft said: 3D support was coming. Already in February of 1995, they had purchased a British company called RenderMorphics, the creator of Reality Lab, a hardware-agnostic 3D library. As promised, Microsoft added Direct3D to the DirectX collection with the latter’s 2.0 release, in June of 1996.

But, as the noted computer scientist Andrew Tanenbaum once said, “the nice thing about standards is that you have so many to choose from.” For the next several years, Direct3D would compete with another library serving the same purpose: a complete, hardware-agnostic Windows port of SGI’s OpenGL, whose most prominent booster was no less leading a light than John Carmack. Direct3D would largely win out in the end among game developers despite Carmack’s endorsement of its rival, but we need not concern ourselves overmuch with the details of that tempest in a teacup here. Suffice to say that even the most bitter partisans on one side of the divide or the other could usually agree that both Direct3D and OpenGL were vastly preferable to the bad old days of chipset-specific 3D games.

Unfortunately, 3Dfx, rather feeling their oats after all of their success, responded to these developments with the first of a series of bad decisions that would make their time at the top of the 3D-graphics heap a relatively short one.

Like all of the others, the Voodoo chipset could be used under Windows with either Direct3D or OpenGL. But there were some features on the Voodoo chips that the current implementations of those libraries didn’t support. 3Dfx was worried, reasonably enough on the face of it, about a “least-common-denominator effect” which would cancel out the very real advantages of their 3D chipset and make one example of the breed more or less as good as any other. However, instead of working with the folks behind Direct3D and OpenGL to get support for the Voodoo chips’ special features into those libraries, they opted to release a Windows version of GLide, and to strongly encourage game developers to keep working with it instead of either of the more hardware-agnostic alternatives. “You don’t want to just have a title 80 percent as good as it could be because your competitors are all going to be at 100 percent,” they said pointedly. They went so far as to start speaking of Voodoo-equipped machines as a whole new platform unto themselves, separate from more plebeian personal computers.

It was the talk and actions of a company that had begun to take its own press releases a bit too much to heart. But for a time 3Dfx got away with it. Developers coded for GLide in addition to or instead of Direct3D or OpenGL, because you really could do a lot more with it and because the cachet of the “certified” 3Dfx logo that using GLide allowed them to put on their boxes really was huge.

In March of 1998, the first cards with a new 3Dfx chipset, known as Voodoo2, began to appear. Voodoo2 boasted twice the overall throughput of its predecessor, and could handle a screen resolution of 800 × 600 instead of just 640 × 480; you could even join two of the new cards together to get even better performance and higher resolutions. This latest chipset only seemed to cement 3Dfx’s position as the class of their field.

The bottom line reflected this. 3Dfx was, in the words of their new CEO Greg Ballard, “a rocket ship.” In 1995, they earned $4 million in revenue; in 1996, $44 million; in 1997, $210 million; and in 1998, their peak year, $450 million. And yet their laser focus on selling the Ferraris of 3D acceleration was blinding Ballard and his colleagues to the potential of 3D Toyotas, where the biggest money of all was waiting to be made.

Over the course of the second half of the 1990s, 3D GPUs went from being exotic pieces of kit known only to hardcore gamers to being just another piece of commodity hardware found in almost all computers. 3Dfx had nothing to do with this significant shift. Instead they all but ignored this so-called “OEM” (“Original Equipment Manufacturer”) side of the GPU equation: chipsets that weren’t the hottest or the sexiest on the market, but that were cheap and easy to solder right onto the motherboards of low-end and mid-range machines bearing such unsexy name plates as Compaq and Packard Bell. Ironically, Gordon Campbell had made a fortune with Chips & Technologies selling just such commodity-grade 2D graphics chipsets. But 3Dfx was obstinately determined to fly above the OEM segment, determined to offer “premium” products only. “It doesn’t matter if 20 million people have one of our competitors’ chips,” said Scott Sellers in 1997. “How many of those people are hardcore gamers? How many of those people are buying games?” “I can guarantee that 100 percent of 3Dfx owners are buying games,” chimed in a self-satisfied-sounding Gary Tarolli.

The obvious question to ask in response was why it should matter to 3Dfx how many games — or what types of games — the users of their chips were buying, as long as they were buying gadgets that contained their chips. While 3Dfx basked in their status as the hardcore gamer’s favorite, other companies were selling many more 3D chips, admittedly at much less of a profit on a chip-per-chip basis, at the OEM end of the market. Among these was a firm known as NVIDIA, which had been founded on the back of a napkin in a Denny’s diner in 1993. NVIDIA’s first attempt to compete head to head with 3Dfx at the high end was underwhelming at best: released well after the Voodoo2 chipset, the RIVA TNT ran so hot that it required a noisy onboard cooling fan, and yet still couldn’t match the Voodoo2’s performance. By that time, however, NVIDIA was already building a lucrative business out of cheaper, simpler chips on the OEM side, even as they were gaining the wisdom they would need to mount a more credible assault on the hardcore-gamer market. In late 1998, 3Dfx finally seemed to be waking up to the fact that they would need to reach beyond the hardcore to continue their rise, when they released a new chipset called Voodoo Banshee which wasn’t quite as powerful as the Voodoo2 chips but could do conventional 2D as well as 3D graphics, meaning its owners would not be forced to buy a second video card just in order to use their computers.

But sadly, they followed this step forward with an absolutely disastrous mistake. You’ll remember that prior to this point 3Dfx had sold their chips only to other companies, who then incorporated them into add-on boards of their own design, in the same way that Intel sold microprocessors to computer makers rather than directly to consumers (aside from the build-your-own-rig hobbyists, that is). This business model had made sense for 3Dfx when they were cash-strapped and hadn’t a hope of building retail-distribution channels equal to those of the established board makers. Now, though, they were flush with cash, and enjoyed far better name recognition than the companies that made the boards which used their chips; even the likes of Creative Labs, who had long since dropped Rendition and were now selling plenty of 3Dfx boards, couldn’t touch them in terms of prestige. Why not cut out all these middlemen by manufacturing their own boards using their own chips and selling them directly to consumers with only the 3Dfx name on the box? They decided to do exactly that with their third state-of-the-art 3D chipset, the predictably named Voodoo3, which was ready in the spring of 1999.

Those famous last words apply: “It seemed like a good idea at the time.” With the benefit of hindsight, we can see all too clearly what a terrible decision it actually was. The move into the board market became, says Scott Sellers, the “anchor” that would drag down the whole company in a rather breathtakingly short span of time: “We started competing with what used to be our own customers” — i.e., the makers of all those earlier Voodoo boards. Then, too, 3Dfx found that the logistics of selling a polished consumer product at retail, from manufacturing to distribution to advertising, were much more complex than they had reckoned with.

Still, they might — just might — have been able to figure it all out and make it work, if only the Voodoo3 chipset had been a bit better. As it was, it was an upgrade to be sure, but not quite as much of one as everyone had been expecting. In fact, some began to point out now that even the Voodoo2 chips hadn’t been that great a leap: they too were better than their predecessors, yes, but that was more down to ever-falling memory prices and ever-improving chip-fabrication technologies than any groundbreaking innovations in their fundamental designs. It seemed that 3Dfx had started to grow complacent some time ago.

NVIDIA saw their opening and made the most of it. They introduced a new line of their own, called the TNT2, which outdid its 3Dfx competitor in at least one key metric: it could do 24-bit color, giving it almost 17 million shades of onscreen nuance, compared to just over 65,000 in the case of Voodoo3. For the first time, 3Dfx’s chips were not the unqualified, undisputed technological leaders. To make matters worse, NVIDIA had been working closely with Microsoft in exactly the way that 3Dfx had never found it in their hearts to do, ensuring that every last feature of their chips was well-supported by the increasingly dominant Direct3D libraries.

And then, as the final nail in the coffin, there were all those third-party board makers 3Dfx had so rudely jilted when they decided to take over that side of the business themselves. These had nowhere left to go but into NVIDIA’s welcoming arms. And needless to say, these business partners spurned were highly motivated to make 3Dfx pay for their betrayal.

NVIDIA was on a roll now. They soon came out with yet another new chipset, the GeForce 256, which had a “Transform & Lighting” (T&L) engine built in, a major conceptual advance. And again, the new technology was accessible right from the start through Direct3D, thanks to NVIDIA’s tight relationship with Microsoft. Meanwhile the 3Dfx chips still needed GLide to perform at their best. With those chips’ sales now plummeting, more and more game developers decided the oddball library just wasn’t worth the trouble anymore. By the end of 1999, a 3Dfx death spiral that absolutely no one had seen coming at the start of the year was already well along. NVIDIA was rapidly sewing up both the high end and the low end, leaving 3Dfx with nothing.

In 2000, NVIDIA continued to go from strength to strength. Their biggest challenger at the hardcore-gamer level that year was not 3Dfx, but rather ATI, who arrived on the scene with a new architecture known as Radeon. 3Dfx attempted to right the ship with a two-pronged approach: a Voodoo4 chipset aimed at the long-neglected budget market, and a Voodoo5 aimed at the high end. Both had potential, but the company was badly strapped for cash by now, and couldn’t afford to give them the launch they deserved. In December of 2000, 3Dfx announced that they had agreed to sell out to NVIDIA, who thought they had spotted some bits and bobs in their more recent chips that they might be able to make use of. And that, as they say, was that.

3Dfx was a brief-burning comet by any standard, a company which did everything right up to the instant when someone somewhere flipped a switch and it suddenly started doing everything wrong instead. But whatever regrets Gary Tarolli, Scott Sellers, and Ross Smith may have about the way it all turned out, they can rest secure in the knowledge that they changed not just gaming but computing in general forever. Their vanquisher NVIDIA had revenues of almost $27 billion last year, on the strength of GPUs which are as far beyond the original Voodoo chips as an F-35 is beyond the Wright Brothers’ Flyer, which are at the forefront not just of 3D graphics but of a whole new trend toward “massively parallel” computing.

And yet even today, the 3Dfx name and logo can still send a little tingle of excitement running down the spines of gamers of a certain age, just as that of the Amiga can among some just slightly older. For a brief few years there, over the course of one of the most febrile, chaotic, and yet exciting periods in all of gaming history, having a Voodoo card in your computer meant that you had the best graphics money could buy. Most of us wouldn’t want to go back to the days of needing to constantly tinker with the innards of our computers, of dropping hundreds of dollars on the latest and the greatest and hoping that publishers would still be supporting it in six months, of poring over magazines trying to make sense of long lists of arcane bullet points that seemed like fragments of a particularly esoteric PhD thesis (largely because they originally were). No, we wouldn’t want to go back; those days were kind of ridiculous. But that doesn’t mean we can’t look back and smile at the extraordinary technological progression we were privileged to witness over such a disarmingly short period of time.






(Sources: the books Renegades of the Empire: How Three Software Warriors Started a Revolution Behind the Walls of Fortress Microsoft by Michael Drummond, Masters of DOOM: How Two Guys Created an Empire and Transformed Pop Culture by David Kushner, and Principles of Three-Dimensional Computer Animation by Michael O’Rourke. Computer Gaming World of November 1995, January 1996, July 1996, November 1996, December 1996, September 1997, October 1997, November 1997, and April 1998; Next Generation of October 1997 and January 1998; Atomic of June 2003; Game Developer of December 1996/January 1997 and February/March 1997. Online sources include “3Dfx and Voodoo Graphics — The Technologies Within” at The Overclocker, former 3Dfx CEO Greg Ballard’s lecture for Stanford’s Entrepreneurial Thought Leader series, the Computer History Museum’s “oral history” with the founders of 3Dfx, Fabian Sanglard’s reconstruction of the workings of the Vérité chipset and the Voodoo 1 chipset, “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site, and “A Fallen Titan’s Final Glory” by Joel Hruska at the long-defunct Sudhian Media. Also, the Usenet discussions that followed the release of the 3Dfx patch for Tomb Raider and Nicol Bolas’s crazily detailed reply to the Stack Exchange question “Why Do Game Developers Prefer Windows?”.)

 


The Next Generation in Graphics, Part 2: Three Dimensions in Hardware

Most of the academic papers about 3D graphics that John Carmack so assiduously studied during the 1990s stemmed from, of all times and places, the Salt Lake City, Utah, of the 1970s. This state of affairs was a credit to one man by the name of Dave Evans.

Born in Salt Lake City in 1924, Evans was a physicist by training and an electrical engineer by inclination, who found his way to the highest rungs of computing research by way of the aviation industry. By the early 1960s, he was at the University of California, Berkeley, where he did important work in the field of time-sharing, taking the first step toward the democratization of computing by making it possible for multiple people to use one of the ultra-expensive big computers of the day at the same time, each of them accessing it through a separate dumb terminal. During this same period, Evans befriended one Ivan Sutherland, who deserves perhaps more than any other person the title of Father of Computer Graphics as we know them today.

For, in the course of earning his PhD at MIT, Sutherland developed a landmark software application known as Sketchpad, the first interactive computer-based drawing program of any stripe. Sketchpad did not do 3D graphics. It did, however, record its user’s drawings as points and lines on a two-dimensional plane. The potential for adding a third dimension to its Flatland-esque world — a Z coordinate to go along with X and Y — was lost on no one, least of all Sutherland himself. His 1963 thesis on Sketchpad rocketed him into the academic stratosphere.

Sketchpad in action.

In 1964, at the ripe old age of 26, Sutherland succeeded J.C.R. Licklider as head of the computer division of the Defense Department’s Advanced Research Projects Agency (ARPA), the most remarkable technology incubator in computing history. Alas, he proved ill-suited to the role of administrator: he was too young, too introverted — just too nerdy, as a later generation would have put it. But during the unhappy year he spent there before getting back to the pure research that was his real passion, he put the University of Utah on the computing map, largely as a favor to his friend Dave Evans.

Evans may have left Salt Lake City more than a decade before, but he remained a devout Mormon, who found the counterculture values of the Berkeley of the 1960s rather uncongenial. So, he had decided to take his old alma mater up on an offer to come home and build a computer-science department there. Sutherland now awarded said department a small ARPA contract, one fairly insignificant in itself. What was significant was that it brought the University of Utah into the ARPA club of elite research institutions that were otherwise clustered on the coasts. An early place on the ARPANET, the predecessor to the modern Internet, was not the least of the perks which would come its way as a result.

Evans looked for a niche for his university amidst the august company it was suddenly joining. The territory of time-sharing was pretty much staked out; extensive research in that field was already going full steam ahead at places like MIT and Berkeley. Ditto networking and artificial intelligence and the nuts and bolts of hardware design. Computer graphics, though… that was something else. There were smart minds here and there working on them — count Ivan Sutherland as Exhibit Number One — but no real research hubs dedicated to them. So, it was settled: computer graphics would become the University of Utah’s specialty. In what can only be described as a fantastic coup, in 1968 Evans convinced Sutherland himself to abandon the East Coast prestige of Harvard, where he had gone after leaving his ARPA post, in favor of the Mormon badlands of Utah.

Things just snowballed from there. Evans and Sutherland assembled around them an incredible constellation of bright young sparks, who over the course of the next decade defined the terms and mapped the geography of the field of 3D graphics as we still know it today, writing papers that remain as relevant today as they were half a century ago — or perchance more so, given the rise of 3D games. For example, the two most commonly used algorithms for calculating the vagaries of light and shade in 3D games stem directly from the University of Utah: Gouraud shading was invented by a Utah student named Henri Gouraud in 1971, while Phong shading was invented by another named Bui Tuong Phong in 1973.
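
For the curious, here is a compressed sketch in C of the difference between the two, assuming nothing fancier than simple diffuse lighting; all of the helper names are invented for the example. Gouraud shading evaluates the lighting once per vertex and merely blends the results across the triangle, while Phong shading blends the surface normals instead and re-evaluates the lighting at every single pixel, which is costlier but captures highlights that Gouraud smears away.

```c
/* Gouraud versus Phong shading, reduced to a single pixel apiece.
   w0, w1, w2 are the pixel's barycentric weights within the triangle. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(Vec3 v)
{
    float len = sqrtf(dot(v, v));
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Simple diffuse (Lambertian) lighting, clamped at zero. */
static float lambert(Vec3 normal, Vec3 to_light)
{
    float d = dot(normalize(normal), normalize(to_light));
    return d > 0.0f ? d : 0.0f;
}

/* Gouraud: the vertex intensities i0..i2 were computed beforehand with
   lambert(); per pixel we only interpolate them. */
static float gouraud_pixel(float i0, float i1, float i2,
                           float w0, float w1, float w2)
{
    return w0*i0 + w1*i1 + w2*i2;
}

/* Phong: interpolate the vertex normals instead, then run the full
   lighting calculation for this one pixel. */
static float phong_pixel(Vec3 n0, Vec3 n1, Vec3 n2,
                         float w0, float w1, float w2, Vec3 to_light)
{
    Vec3 n = { w0*n0.x + w1*n1.x + w2*n2.x,
               w0*n0.y + w1*n1.y + w2*n2.y,
               w0*n0.z + w1*n1.z + w2*n2.z };
    return lambert(n, to_light);
}

int main(void)
{
    Vec3 up     = { 0, 1, 0 };            /* two slightly different normals */
    Vec3 tilted = { 1, 1, 0 };
    Vec3 light  = { 0, 1, 0 };            /* the light is directly overhead */
    float i_up = lambert(up, light), i_tilted = lambert(tilted, light);
    float w = 1.0f / 3.0f;                /* sample the triangle's center   */

    printf("Gouraud: %.3f   Phong: %.3f\n",
           gouraud_pixel(i_up, i_up, i_tilted, w, w, w),
           phong_pixel(up, up, tilted, w, w, w, light));
    return 0;
}
```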

But of course, lots of other students passed through the university without leaving so indelible a mark. One of these was Jim Clark, who would still be semi-anonymous today if he hadn’t gone on to become an entrepreneur who co-founded two of the most important tech companies of the late twentieth century.



When you’ve written as many capsule biographies as I have, you come to realize that the idea of the truly self-made person is for the most part a myth. Certainly almost all of the famous names in computing history were, long before any of their other qualities entered into the equation, lucky: lucky in their time and place of birth, in their familial circumstances, perhaps in (sad as it is to say) their race and gender, definitely in the opportunities that were offered to them. This isn’t to disparage their accomplishments; they did, after all, still need to have the vision to grasp the brass ring of opportunity and the talent to make the most of it. Suffice to say, then, that luck is a prerequisite but the farthest thing from a guarantee.

Every once in a while, however, I come across someone who really did almost literally make something out of nothing. One of these folks is Jim Clark. If today as a soon-to-be octogenarian he indulges as enthusiastically as any of his Old White Guy peers in the clichéd trappings of obscene wealth, from the mansions, yachts, cars, and wine to the Victoria’s Secret model he has taken for a fourth wife, he can at least credibly claim to have pulled himself up to his current station in life entirely by his own bootstraps.

Clark was born in 1944, in a place that made Salt Lake City seem like a cosmopolitan metropolis by comparison: the small Texas Panhandle town of Plainview. He grew up dirt poor, the son of a single mother living well below the poverty line. Nobody expected much of anything from him, and he obliged their lack of expectations. “I thought the whole world was shit and I was living in the middle of it,” he recalls.

An indifferent student at best, he was expelled from high school his junior year for telling a teacher to go to hell. At loose ends, he opted for the classic gambit of running away to sea: he joined the Navy at age seventeen. It was only when the Navy gave him a standardized math test, and he scored the highest in his group of recruits on it, that it began to dawn on him that he might actually be good at something. Encouraged by a few instructors to pursue his aptitude, he enrolled in correspondence courses to fill his free time when out plying the world’s oceans as a crewman on a destroyer.

Ten years later, in 1971, the high-school dropout, now six years out of the Navy and married with children, found himself working on a physics PhD at Louisiana State University. Clark:

I noticed in Physics Today an article that observed that physicists getting PhDs from places like Harvard, MIT, Yale, and so on didn’t like the jobs they were getting. And I thought, well, what am I doing — I’m getting a PhD in physics from Louisiana State University! And I kept thinking, well, I’m married, and I’ve got these obligations. By this time, I had a second child, so I was real eager to get a good job, and I just got discouraged about physics. And a friend of mine pointed to the University of Utah as having a computer-graphics specialty. I didn’t know much about it, but I was good with geometry and physics, which involves a lot of geometry.

So, Clark applied for a spot at the University of Utah and was accepted.

But, as I already implied, he didn’t become a star there. His 1974 thesis was entitled “3D Design of Free-Form B-Spline Surfaces”; it was a solid piece of work addressing a practical problem, but not anything to really get the juices flowing. Afterward, he spent half a decade bouncing around from campus to campus as an adjunct professor: the Universities of California at Santa Cruz and Berkeley, the New York Institute of Technology, Stanford. He was fairly miserable throughout. As an academic of no special note, he was hired primarily as an instructor rather than a researcher, and he wasn’t at all cut out for the job, being too impatient, too irascible. Proving the old adage that the child is the father of the man, he was fired from at least one post for insubordination, just like that angry teenager who had once told off his high-school teacher. Meanwhile he went through not one but two wives. “I was in this kind of downbeat funk,” he says. “Dark, dark, dark.”

It was now early 1979. At Stanford, Clark was working right next door to Xerox’s famed Palo Alto Research Center (PARC), which was inventing much of the modern paradigm of computing, from mice and menus to laser printers and local-area networking. Some of the colleagues Clark had known at the University of Utah were happily ensconced over there. But he was still on the outside looking in. It was infuriating — and yet he was about to find a way to make his mark at last.

Hardware engineering at the time was in the throes of a revolution and its backlash, over a technology that went by the mild-mannered name of “Very Large Scale Integration” (VLSI). The integrated circuit, which packed multiple transistors onto a single microchip, had been invented at Texas Instruments at the end of the 1950s, and had become a staple of computer design already during the following decade. Yet those early implementations often put only a relative handful of transistors on a chip, meaning that they still required lots of chips to accomplish anything useful. A turning point came in 1971 with the Intel 4004, the world’s first microprocessor — i.e., the first time that anyone put the entire brain of a computer on a single chip. Barely remarked at the time, that leap would result in the first kit computers being made available for home users in 1975, followed by the Trinity of 1977, the first three plug-em-in-and-go personal computers suitable for the home. Even then, though, there were many in the academic establishment who scoffed at the idea of VLSI, which required a new, in some ways uglier approach to designing circuitry. In a vivid illustration that being a visionary in some areas doesn’t preclude one from being a reactionary in others, many of the folks at PARC were among the scoffers. Look how far we’ve come doing things one way, they said. Why change?

A PARC researcher named Lynn Conway was enraged by such hidebound thinking. A rare female hardware engineer, she had made scant progress to date getting her point of view through to the old boys’ club that surrounded her at PARC. So, broadening her line of attack, she wrote a paper about the basic techniques of modern chip design, and sent it out to a dozen or so universities along with a tempting offer: if any students or faculty wished to draw up schematics for a chip of their own and send them to her, she would arrange to have the chip fabricated in real silicon and sent back to its proud parent. The point of it all was just to get people to see the potential of VLSI, not to push forward the state of the art. And indeed, just as she had expected, almost all of the designs she received were trivially simple by the standards of even the microchip industry of 1979: digital time keepers, adding machines, and the like. But one was unexpectedly, even crazily complex. Alone among the submissions, it bore a precautionary notice of copyright, from one James Clark. He called his creation the Geometry Engine.

The Geometry Engine was the first and, it seems likely, only microchip that Jim Clark ever personally attempted to design in his life. It was created in response to a fundamental problem that had been vexing 3D modelers since the very beginning: that 3D graphics required shocking quantities of mathematical calculations to bring to life, scaling almost exponentially with the complexity of the scene to be depicted. And worse, the type of math they required was not the type that the researchers’ computers were especially good at.

Wait a moment, some of you might be saying. Isn’t math the very thing that computers do? It’s right there in the name: they compute things. Well, yes, but not all types of math are created equal. Modern computers are also digital devices, meaning they are naturally equipped to deal only with discrete things. Like the game of DOOM, theirs is a universe of stair steps rather than smooth slopes. They like integer numbers, not decimals. Even in the 1960s and 1970s, they could approximate the latter through a storage format known as floating point, but they dealt with these floating-point numbers at least an order of magnitude slower than they did whole numbers, as well as requiring a lot more memory to store them. For this reason, programmers avoided them whenever possible.

And it actually was possible to do so a surprisingly large amount of the time. Most of what computers were commonly used for could be accomplished using only whole numbers — for example, by using Euclidean division that yields a quotient and a remainder in place of decimal division. Even financial software could be built using only integers, by counting the total number of cents rather than representing dollars and cents as floating-point values. 3D-graphics software, however, was one place where you just couldn’t get around them. Creating a reasonably accurate mathematical representation of an analog 3D space forced you to use floating-point numbers. And this in turn made 3D graphics slow.
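
Here is the cents trick in a few lines of C, just to make it concrete; the numbers are arbitrary.

```c
/* Money handled entirely in integer cents: no floating point anywhere,
   with quotient-and-remainder division recovering the dollars at the end. */
#include <stdio.h>

int main(void)
{
    long price_cents = 19999;             /* $199.99, stored as whole cents */
    long quantity    = 3;
    long total       = price_cents * quantity;

    long dollars = total / 100;           /* quotient: whole dollars   */
    long cents   = total % 100;           /* remainder: leftover cents */

    printf("total: $%ld.%02ld\n", dollars, cents);   /* prints: total: $599.97 */
    return 0;
}
```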

Jim Clark certainly wasn’t the first person to think about designing a specialized piece of hardware to lift some of the burden from general-purpose computer designs, an add-on optimized for doing the sorts of mathematical operations that 3D graphics required and nothing else. Various gadgets along these lines had been built already, starting a decade or more before his Geometry Engine. Clark was the first, however, to think of packing it all onto a single chip — or at worst a small collection of them — that could live on a microcomputer’s motherboard or on a card mounted in a slot, that could be mass-produced and sold in the thousands or millions. His description of his “slave processor” sounded disarmingly modest (not, it must be said, a quality for which Clark is typically noted): “It is a four-component vector, floating-point processor for accomplishing three basic operations in computer graphics: matrix transformations, clipping, and mapping to output-device coordinates [i.e., going from an analog world space to pixels in a digital raster].” Yet it was a truly revolutionary idea, the genesis of the graphical processing units (GPUs) of today, which are in some ways more technically complex than the CPUs they serve. The Geometry Engine still needed to use floating-point numbers — it was, after all, still a digital device — but the old engineering doctrine that specialization yields efficiency came into play: it was optimized to do only floating-point calculations, and only a tiny subset of all the ones possible at that, just as quickly as it could.
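
To give a flavor of what those three jobs actually involve, here is a rough, software-only sketch in C of one point passing through them. The identity matrix, the screen size, and all of the names are arbitrary choices for the example; this is a description of the math the chip accelerated, not of Clark’s actual silicon.

```c
/* The Geometry Engine's three jobs, done slowly in plain C:
   1. transform a four-component vector by a 4 x 4 matrix,
   2. test it against the view volume (clipping at its crudest),
   3. map the survivor onto a raster of pixels. */
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;   /* the four-component vector */

/* 1. Matrix transformation. */
static Vec4 transform(float m[4][4], Vec4 v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

/* 2. Clipping, reduced to a simple inside-or-outside test. (Real clipping
      also has to trim triangles that straddle the boundary.) */
static int inside_view(Vec4 v)
{
    return -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

/* 3. Mapping to output-device coordinates: divide by w, then scale the
      result, which now runs from -1 to 1, onto the pixel grid. */
static void to_pixels(Vec4 v, int width, int height, int *px, int *py)
{
    float nx = v.x / v.w;
    float ny = v.y / v.w;
    *px = (int)((nx + 1.0f) * 0.5f * (width  - 1));
    *py = (int)((1.0f - ny) * 0.5f * (height - 1));   /* screen y grows downward */
}

int main(void)
{
    /* An identity "camera" keeps the example simple; a real pipeline would
       combine modeling, viewing, and perspective matrices here. */
    float identity[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };
    Vec4 point = { 0.5f, -0.5f, 0.0f, 1.0f };

    Vec4 clip = transform(identity, point);
    if (inside_view(clip)) {
        int px, py;
        to_pixels(clip, 640, 480, &px, &py);
        printf("point lands at pixel (%d, %d)\n", px, py);
    }
    return 0;
}
```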

The Geometry Engine changed Clark’s life. At last, he had something exciting and uniquely his. “All of these people started coming up and wanting to be part of my project,” he remembers. Always an awkward fit in academia, he turned his thinking in a different direction, adopting the mindset of an entrepreneur. “He reinvented his relationship to the world in a way that is considered normal only in California,” writes journalist Michael Lewis in a book about Clark. “No one who had been in his life to that point would be in it ten years later. His wife, his friends, his colleagues, even his casual acquaintances — they’d all be new.” Clark himself wouldn’t hesitate to blast his former profession in later years with all the fury of a professor scorned.

I love the metric of business. It’s money. It’s real simple. You either make money or you don’t. The metric of the university is politics. Does that person like you? Do all these people like you enough to say, “Yeah, he’s worthy?”

But by whatever metric, success didn’t come easy. The Geometry Engine and all it entailed proved a harder sell with the movers and shakers in commercial computing than it had with his colleagues at Stanford. It wasn’t until 1982 that he was able to scrape together the funding to found a company called Silicon Graphics, Incorporated (SGI), and even then he was forced to give 85 percent of his company’s shares to others in order to make it a reality. Then it took another two years after that to actually ship the first hardware.

The market segment SGI was targeting is one that no longer really exists. The machines it made were technically microcomputers, being built around microprocessors, but they were not intended for the homes of ordinary consumers, nor even for the cubicles of ordinary office workers. These were much higher-end, more expensive machines than those, even if they could fit under a desk like one of them. They were called workstation computers. The typical customer spent tens or hundreds of thousands of dollars on them in the service of some highly demanding task or another.

In the case of the SGI machines, of course, that task was almost always related to graphics, usually 3D graphics. Their expense wasn’t bound up with their CPUs; in the beginning, these were fairly plebeian chips from the Motorola 68000 series, the same line used in such consumer-grade personal computers as the Apple Macintosh and the Commodore Amiga. No, the justification of their high price tags rather lay with their custom GPUs, which even in 1984 already went far beyond the likes of Clark’s old Geometry Engine. An SGI GPU was a sort of black box for 3D graphics: feed it all of the data that constituted a scene on one side, and watch a glorious visual representation emerge at the other, thanks to an array of specialized circuitry designed for that purpose and no other.

Now that it had finally gotten off the ground, SGI became very successful very quickly. Its machines were widely used in staple 3D applications like computer-aided industrial design (CAD) and flight simulation, whilst also opening up new vistas in video and film production. They drove the shift in Hollywood from special effects made using miniature models and stop-motion techniques dating back to the era of King Kong to the extensive use of computer-generated imagery (CGI) that we see even in the purportedly live-action films of today. (Steven Spielberg and George Lucas were among SGI’s first and best customers.) “When a moviegoer rubbed his eyes and said, ‘What’ll they think of next?’,” writes Michael Lewis, “it was usually because SGI had upgraded its machines.”

The company peaked in the early 1990s, when its graphics workstations were the key to CGI-driven blockbusters like Terminator 2 and Jurassic Park. Never mind the names that flashed by in the opening credits; everyone could agree that the computer-generated dinosaurs were the real stars of Jurassic Park. SGI was bringing in over $3 billion in annual revenue and had close to 15,000 employees by 1993, the year that movie was released. That same year, President Bill Clinton and Vice President Al Gore came out personally to SGI’s offices in Silicon Valley to celebrate this American success story.

SGI’s hardware subsystem for graphics, the beating heart of its business model, was known in 1993 as the RealityEngine2. This latest GPU was, wrote Byte magazine in a contemporary article, “richly parallel,” meaning that it could do many calculations simultaneously, in contrast to a traditional CPU, which could only execute one instruction at a time. (Such parallelism is the reason that modern GPUs are so often used for some math-intensive non-graphical applications, such as crypto-currency mining and machine learning.) To support this black box and deliver to its well-heeled customers a complete turnkey solution for all their graphics needs, SGI had also spearheaded an open software library for 3D applications, known as the Open Graphics Library, or OpenGL. Even the CPUs in its latest machines were SGI’s own; it had purchased a maker of same called MIPS Technologies in 1992.

But all of this success did not imply a harmonious corporation. Jim Clark was convinced that he had been hard done by back in 1982, when he was forced to give up 85 percent of his brainchild in order to secure the funding he needed, then screwed over again when he was compelled by his board to give up the CEO post to a former Hewlett Packard executive named Ed McCracken in 1984. The two men had been at vicious loggerheads for years; Clark, who could be downright mean when the mood struck him, reduced McCracken to public tears on at least one occasion. At one memorable corporate retreat intended to repair the toxic atmosphere in the board room, recalls Clark, “the psychologist determined that everyone else on the executive committee was passive aggressive. I was just aggressive.”

Clark claims that the most substantive bone of contention was McCracken’s blasé indifference to the so-called low-end market, meaning all of those non-workstation-class personal computers that were proliferating in the millions during the 1980s and early 1990s. If SGI’s machines were advancing by leaps and bounds, these consumer-grade computers were hopscotching on a rocket. “You could see a time when the PC would be able to do the sort of graphics that [our] machines did,” says Clark. But McCracken, for one, couldn’t see it; he was content to live fat and happy off of the high prices and high profit margins of SGI’s current machines.

He did authorize some experiments at the lower end, but his heart was never in it. In 1990, SGI deigned to put a limited subset of the RealityEngine smorgasbord onto an add-on card for Intel-based personal computers. It called the card IrisVision and hopefully talked up its price of “under $5000,” which really was absurdly low by the company’s usual standards. What with its complete lack of software support and its way-too-high price for this marketplace, IrisVision went nowhere, whereupon McCracken took the failure as a vindication of his position. “This is a low-margin business, and we’re a high-margin company, so we’re going to stop doing that,” he said.

Despite McCracken’s indifference, Clark eventually managed to broker a deal with Nintendo to make a MIPS microprocessor and an SGI GPU the heart of the latter’s Nintendo 64 videogame console. But he quit after yet another shouting match with McCracken in 1994, two years before it hit the street.

He had been right all along about the inevitable course of the industry, however undiplomatically he may have stated his case over the years. Personal computers did indeed start to swallow the workstation market almost at the exact point in time that Clark bailed. The profits from the Nintendo deal were rich, but they were largely erased by another of McCracken’s pet projects, an ill-advised acquisition of the struggling supercomputer maker Cray. Meanwhile, with McCracken so obviously more interested in selling a handful of supercomputers for millions of dollars each than millions upon millions of consoles for a few hundred dollars each, a group of frustrated SGI employees left the company to help Nintendo make the GameCube, the followup to the Nintendo 64, on their own. It was all downhill for SGI after that, bottoming out in a 2009 bankruptcy and liquidation.

As for Clark, he would go on to a second entrepreneurial act as remarkable as his first, abandoning 3D graphics to make a World Wide Web browser with Marc Andreessen. We will say farewell to him here, but you can read the story of his second company Netscape’s meteoric rise and fall elsewhere on this site.



Now, though, I’d like to return to the scene of SGI’s glory days, introducing in the process three new starring players. Gary Tarolli and Scott Sellers were talented young engineers who were recruited to SGI in the 1980s; Ross Smith was a marketing and business-development type who initially worked for MIPS Technologies, then ended up at SGI when it acquired that company in 1990. The three became fast friends. Being of a younger generation, they didn’t share the contempt for everyday personal computers that dominated among their company’s upper management. Whereas the latter laughed at the primitiveness of games like Wolfenstein 3D and Ultima Underworld, if they bothered to notice them at all, our trio saw a brewing revolution in gaming, and thought about how much it could be helped along by hardware-accelerated 3D graphics.

Convinced that there was a huge opportunity here, they begged their managers to get into the gaming space. But, still smarting from the recent failure of IrisVision, McCracken and his cronies rejected their pleas out of hand. (One of the small mysteries in this story is why their efforts never came to the attention of Jim Clark, why an alliance was never formed. The likely answer is that Clark had, by his own admission, largely removed himself from the day-to-day running of SGI by this time, being more commonly seen on his boat than in his office.) At last, Tarolli, Sellers, Smith, and some like-minded colleagues ran another offer up the flagpole. You aren’t doing anything with IrisVision, they said. Let us form a spinoff company of our own to try to sell it. And much to their own astonishment, this time management agreed.

They decided to call their new company Pellucid — not the best name in the world, sounding as it did rather like a medicine of some sort, but then they were still green at all this. The technology they had to peddle was a couple of years old, but it still blew just about anything else in the MS-DOS/Windows space out of the water, being able to display 16 million colors at a resolution of 1024 X 768, with 3D acceleration built-in. (Contrast this with the SVGA card found in the typical home computer of the time, which could do 256 colors at 640 X 480, with no 3D affordances). Pellucid rebranded the old IrisVision the ProGraphics 1024. Thanks to the relentless march of chip-fabrication technology, they found that they could now manufacture it cheaply enough to be able to sell it for as little as $1000 — still pricey, to be sure, but a price that some hardcore gamers, as well as others with a strong interest in having the best graphics possible, might just be willing to pay.

The problem, the folks at Pellucid soon came to realize, was a well-nigh intractable deadlock between the chicken and the egg. Without software written to take advantage of its more advanced capabilities, the ProGraphics 1024 was just another SVGA graphics card, selling for a ridiculously high price. So, consumers waited for said software to arrive. Meanwhile software developers, looking at an installed base that was as yet nonexistent, saw no reason to begin supporting the card. Breaking this logjam would require a concentrated public-relations and developer-outreach effort, the likes of which the shoestring spinoff couldn’t possibly afford.

They thought they had done an end-run around the problem in May of 1993, when they agreed, with the blessing of SGI, to sell Pellucid, kit and caboodle, to a major up-and-comer in consumer computing known as Media Vision, which at the time sold “multimedia upgrade kits” consisting of CD-ROM drives and sound cards. But Media Vision’s ambitions knew no bounds: they intended to branch out into many other kinds of hardware and software. With proven people like Stan Cornyn, a legendary hit-maker from the music industry, on their management rolls and with millions and millions of dollars on hand to fund their efforts, Media Vision looked poised to dominate.

It seemed the perfect landing place for Pellucid; Media Vision had all the enthusiasm for the consumer market that SGI had lacked. The new parent company’s management said, correctly, that the ProGraphics 1024 was too old by now and too expensive to ever become a volume product, but that 3D acceleration’s time would come as soon as the current wave of excitement over CD-ROM and multimedia began to ebb and people started looking for the next big thing. When that happened, Media Vision would be there with a newer, more reasonably priced 3D card, thanks to the people who had once called themselves Pellucid. It sounded pretty good, even if in the here and now it did seem to entail more waiting around than anything else.

The ProGraphics 1024 board in Media Vision livery.

There was just one stumbling block: “Media Vision was run by crooks,” as Scott Sellers puts it. In April of 1994, a scandal erupted in the business pages of the nation’s newspapers. It turned out that Media Vision had been an experiment in “fake it until you make it” on a gigantic scale. Its founders had engaged in just about every form of malfeasance imaginable, creating a financial house of cards whose honest revenues were a minuscule fraction of what everyone had assumed them to be. By mid-summer, the company had blown away like so much dust in the wind, still providing income only for the lawyers who were left to pick over the corpse. (At least two people would eventually be sent to prison for their roles in the conspiracy.) The former Pellucid folks were left as high and dry as everyone else who had gotten into bed with Media Vision. All of their efforts to date had led to the sale of no more than 2000 graphics cards.

That same summer of 1994, a prominent Silicon Valley figure named Gordon Campbell was looking for interesting projects in which to invest. Campbell had earned his reputation as one of the Valley’s wise men through a company called Chips and Technologies (C&T), which he had co-founded in 1984. One of those hidden movers in the computer industry, C&T had largely invented the concept of the chipset: chips or small collections of them that could be integrated directly into a computer’s motherboard to perform functions that used to be placed on add-on cards. C&T had first made a name for itself by reducing IBM’s bulky nineteen-chip EGA graphics card to just four chips that were cheaper to make and consumed less power. Campbell’s firm thrived alongside the cost-conscious PC clone industry, which by the beginning of the 1990s was rendering IBM itself, the very company whose products it had once so unabashedly copied, all but irrelevant. Onboard video, onboard sound, disk controllers, basic firmware… you name it, C&T had a cheap, good-enough-for-the-average-consumer chipset to handle it.

But now Campbell had left C&T “in pursuit of new opportunities,” as they say in Valley speak. Looking for a marketing person for one of the startups in which he had invested, he interviewed a young man named Ross Smith who had SGI on his résumé — always a plus. But the interview didn’t go well. Campbell:

It was the worst interview I think I’ve ever had. And so finally, I just turned to him and I said, “Okay, your heart’s not in this interview. What do you really want to do?”

And he kind of looks surprised and says, well, there are these two other guys, and we want to start a 3D-graphics company. And the next thing I know, we had set up a meeting. And we had, over a lot of beers, a discussion which led these guys to all come and work at my office. And that set up the start of 3Dfx.

It seemed to all of them that, after all of the delays and blind alleys, it truly was now or never to make a mark. For hardware-accelerated 3D graphics were already beginning to trickle down into the consumer space. In standup arcades, games like Daytona USA and Virtua Fighter were using rudimentary GPUs. Ditto the Sega Saturn and the Sony PlayStation, the latest in home-videogame consoles, both of which were on the verge of release in Japan, with American debuts expected in 1995. Meanwhile the software-only, 2.5D graphics of DOOM were taking the world of hardcore computer gamers by storm. The men behind 3Dfx felt that the next move must surely seem obvious to many other people besides themselves. The only reason the masses of computer-game players and developers weren’t clamoring for 3D graphics cards already was that they didn’t yet realize what such gadgets could do for them.

Still, they were all wary of getting back into the add-on board market, where they had been burned so badly before. Selling products directly to consumers required retail access and marketing muscle that they still lacked. Instead, following in the footsteps of C&T, they decided to sell a 3D chipset only to other companies, who could then build it into add-on boards for personal computers, standup-arcade machines, whatever they wished.

At the same time, though, they wanted their technology to be known, in exactly the way that the anonymous chipsets made by C&T were not. In the pursuit of this aspiration, Gordon Campbell found inspiration from another company that had become a household name despite selling very little directly to consumers. Intel had launched the “Intel Inside” campaign in 1990, just as the era of the PC clone was giving way to a more amorphous commodity architecture. The company introduced a requirement that the makers of computers which used its CPUs include the Intel Inside logo on their packaging and on the cases of the computers themselves, even as it made the same logo the centerpiece of a standalone advertising campaign in print and on television. The effort paid off; Intel became almost as identified with the Second Home Computer Revolution in the minds of consumers as was Microsoft, whose own logo showed up on their screens every time they booted into Windows. People took to calling the emerging duopoly the “Wintel” juggernaut, a name which has stuck around to this day.

So, it was decided: a requirement to display a similarly snazzy 3Dfx logo would be written into that company’s contracts as well. The 3Dfx name itself was a vast improvement over Pellucid. As time went on, 3Dfx would continue to display a near-genius for catchy branding: “Voodoo” for the chipset itself, “GLide” for the software library that controlled it. All of this reflected a business savvy the likes of which had never been seen from Pellucid, one that was a credit both to Campbell’s steady hand and to the accumulating experience of the other three partners.

But none of it would have mattered without the right product. Campbell told his trio of protégés in no uncertain terms that they were never going to make a dent in computer gaming with a $1000 video card; they needed to get the price down to a third of that at the most, which meant the chipset itself could cost the manufacturers who used it in their products not much more than $100 a pop. That was a tall order, especially considering that gamers’ expectations of graphical fidelity weren’t diminishing. On the contrary: the old Pellucid card hadn’t even been able to do 3D texture mapping, a failing that gamers would never accept post-DOOM.

It was left to Gary Tarolli and Scott Sellers to figure out what absolutely had to be in there, such as the aforementioned texture mapping, and what they could get away with tossing overboard. Driven by the remorseless logic of chip-fabrication costs, they wound up going much farther with the tossing than they ever could have imagined when they started out. There could be no talk of 24-bit color or unusually high resolutions: 16-bit color (offering a little over 65,000 onscreen shades) at a resolution of 640 X 480 would be the limit.[1] Likewise, they threw out the capability of handling any polygons except for the simplest of them all, the humble triangle. For, they realized, you could make almost any solid you liked by combining triangular surfaces together. With enough triangles in your world — and their chipset would let you have up to 1 million of them — you needn’t lament the absence of the other polygons all that much.
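
To make those simplifications a bit more concrete, here is a brief sketch in C, an illustration of the general idea rather than anything drawn from the actual chips or their drivers. The 5-6-5 division of the sixteen color bits is a common convention that I am assuming purely for the example (sixteen bits yield 65,536 possible values however you slice them), and the second function shows why triangles suffice: any four-sided face can simply be cut into two of them.

#include <stdint.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex a, b, c; } Triangle;

/* Squeeze a 24-bit RGB color into 16 bits: 5 bits of red, 6 of green,
   5 of blue. Sixteen bits allow 65,536 distinct values in total. */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Any four-sided face can be cut into the two triangles that a
   triangles-only rasterizer would actually draw. */
static void quad_to_triangles(Vertex v0, Vertex v1, Vertex v2, Vertex v3,
                              Triangle out[2])
{
    out[0] = (Triangle){ v0, v1, v2 };
    out[1] = (Triangle){ v0, v2, v3 };
}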

Sellers had another epiphany soon after. Intel’s latest CPU, to which gamers were quickly migrating, was the Pentium. It had a built-in floating-point co-processor which was… not too shabby, actually. It should therefore be possible to take the first phase of the 3D-graphics pipeline — the modeling phase — out of the GPU entirely and just let the CPU handle it. And so another crucial decision was made: they would concern themselves only with the rendering or rasterization phase, which was a much greater challenge to tackle in software alone, even with a Pentium. Another huge piece of the puzzle was thus neatly excised — or rather outsourced back to the place where it was already being done in current games. This would have been heresy at SGI, whose ethic had always been to do it all in the GPU. But then, they were no longer at SGI, were they?
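
Here is a rough sketch of that division of labor, again in C and again purely illustrative; the names are hypothetical, not taken from any real driver or API. The CPU’s floating-point unit does the modeling work of rotating and perspective-projecting each vertex into screen coordinates, and only the resulting flattened triangle is handed off to the card to be filled in with pixels.

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { float sx, sy, inv_z; } ScreenVertex;

/* The card's half of the job, filling the projected triangle with textured,
   shaded pixels, is the part the chipset would take over. (Hypothetical
   interface, declared here only to show where the hand-off happens.) */
extern void submit_triangle_to_rasterizer(ScreenVertex a, ScreenVertex b,
                                          ScreenVertex c);

/* The CPU's half: rotate a vertex about the Y axis, push it away from the
   camera, and perspective-project it into screen coordinates, all using
   the processor's floating-point unit. */
static ScreenVertex transform_vertex(Vec3 v, float angle,
                                     float screen_w, float screen_h)
{
    float c = cosf(angle), s = sinf(angle);
    float x =  c * v.x + s * v.z;
    float z = -s * v.x + c * v.z + 5.0f;   /* move the object in front of the camera */
    float y =  v.y;

    float inv_z = 1.0f / z;                /* the perspective divide */
    ScreenVertex out;
    out.sx = screen_w * 0.5f + x * inv_z * screen_w * 0.5f;
    out.sy = screen_h * 0.5f - y * inv_z * screen_h * 0.5f;
    out.inv_z = inv_z;                     /* useful later for depth comparisons */
    return out;
}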

Undoubtedly their bravest decision of all was to throw out any and all 2D-graphics capabilities — i.e., the neat rasters of pixels used to display Windows desktops and word processors and all of those earlier, less exciting games. Makers of Voodoo boards would have to include a cable to connect the existing, everyday graphics cards inside their customers’ machines to their new 3D ones. When you ran non-3D applications, the Voodoo card would simply pass the video signal on to the monitor unchanged. But when you fired up a 3D game, it would take over from the other board. A relay inside made a distinctly audible click when this happened. Far from considering the noise a bug, gamers would soon come to regard it as a feature. “Because you knew it was time to have fun,” as Ross Smith puts it.

It was a radical plan, to be sure. These new cards would be useful only for games, would have no other purpose whatsoever; there would be no justifying this hardware purchase to the parents or the spouse with talk of productivity or educational applications. Nevertheless, the cost savings seemed worth it. After all, almost everyone who initially went out to buy the new cards would already have a perfectly good 2D video card in their computer. Why make them pay extra to duplicate those functions?

The final design used just two custom chips. One of them, internally known as the T-Rex (Jurassic Park was still in the air), was dedicated exclusively to the texture mapping that had been so conspicuously missing from the Pellucid board. The other, called the FBI (“Frame Buffer Interface”), did everything else required in the rendering phase. Add to this pair a few less exciting off-the-shelf chips and four megabytes’ worth of RAM, put it all on a board with the appropriate connectors, and you had yourself a 3Dfx Voodoo GPU.

Needless to say, getting this far took some time. Tarolli, Sellers, and Smith spent the last half of 1994 camped out in Campbell’s office, deciding what they wanted to do and how they wanted to do it and securing the funding they needed to make it happen. Then they spent all of 1995 in offices of their own, hiring about a dozen people to help them, praying all the time that no other killer product would emerge to make all of their efforts moot. While they worked, the Sega Saturn and Sony PlayStation did indeed arrive on American shores, becoming the first gaming devices equipped with 3D GPUs to reach American homes in quantity. The 3Dfx crew were not overly impressed by either console — and yet they found the public’s warm reception of the PlayStation in particular oddly encouraging. “That showed, at a very rudimentary level, what could be done with 3D graphics with very crude texture mapping,” says Scott Sellers. “And it was pretty abysmal quality. But the consumers were just eating it up.”

They got their first finished chipsets back from their Taiwanese fabricator at the end of January 1996, then spent Super Bowl weekend soldering them into place and testing them. There were a few teething problems, but in the end everything came together as expected. They had their 3D chipset, at the beginning of a year destined to be dominated by the likes of Duke Nukem 3D and Quake. It seemed the perfect product for a time when gamers couldn’t get enough 3D mayhem. “If it had been a couple of years earlier,” says Gary Tarolli, “it would have been too early. If it had been a couple of years later, it would have been too late.” As it was, they were ready to go at the Goldilocks moment. Now they just had to sell their chipset to gamers — which meant they first had to sell it to game developers and board makers.



Did you enjoy this article? If so, please think about pitching in to help me make many more like it. You can pledge any amount you like.



(Sources: the books The Dream Machine by M. Mitchell Waldrop, Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik, and The New New Thing: A Silicon Valley Story by Michael Lewis; Byte of May 1992 and November 1993; InfoWorld of April 22 1991 and May 31 1993; Next Generation of October 1997; ACM’s Computer Graphics journal of July 1982; Wired of January 1994 and October 1994. Online sources include the Computer History Museum’s “oral histories” with Jim Clark, Forest Baskett, and the founders of 3Dfx; Wayne Carlson’s “Critical History of Computer Graphics and Animation”; “Fall of Voodoo” by Ernie Smith at Tedium; Fabian Sanglard’s reconstruction of the workings of the Voodoo 1 chips; “Famous Graphics Chips: 3Dfx’s Voodoo” by Dr. Jon Peddie at the IEEE Computer Society’s site; and an internal technical description of the Voodoo technology archived at bitsavers.org.)

Footnotes

1 A resolution of 800 X 600 was technically possible using the Voodoo chipset, but using this resolution meant that the programmer could not use a vital affordance known as Z-buffering. For this reason, it was almost never seen in the wild.
 
